Vehicle position detection using a picture

Hello, I am wondering if there is some module in Node-RED that lets me check a vehicle's position using a picture. The process would be: I have a picture of a vehicle, for example from a parking lot, and I need to check whether the vehicle is parked within a defined area (some rectangular area, for example).
I just need to output a true/false value: whether it is within the selected area or not. Thanks for your help.

Hello @jttjtt,

I think that it is possible with a node called tensorflow.
See: node-red-contrib-tensorflow (node) - Node-RED
Or any other equivalent.

An example picture below:

It can detect vehicles in the picture, and with the bounding box you can compare whether it is in a certain position. Just an idea.
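To illustrate the idea, here is a minimal sketch of such a comparison for a function node. It assumes the bbox follows the `[x, y, width, height]` convention with a top-left origin (as the COCO-SSD model used by the tensorflow node typically reports); the `isInsideZone` helper and the zone values are illustrative, not part of any node's API.

```javascript
// Sketch: check whether a detected bounding box lies fully inside a
// defined parking zone. Both bbox and zone are [x, y, width, height]
// with the origin at the top-left corner (assumed convention).
function isInsideZone(bbox, zone) {
    const [bx, by, bw, bh] = bbox;
    const [zx, zy, zw, zh] = zone;
    return bx >= zx && by >= zy &&
           bx + bw <= zx + zw &&
           by + bh <= zy + zh;
}

// Example zone covering most of a 1600 x 800 picture (illustrative values)
const parkingZone = [0, 0, 1000, 600];
```

In a function node you would feed `msg.payload[0].bbox` into this and set `msg.payload` to the true/false result.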

The msg result, see below:



Great, thanks for the idea. This is what I wanted. I will test whether it works like I need.

Also more information here Introduction - Node-RED AI Photo Booth Workshop

I would like to ask: using the tensorflow module I was able to detect a car in the picture. The output of bbox was:

0: 454.5329475402832
1: 397.44991421699524
2: 366.7844009399414
3: 491.7205274105072

But I am not sure what these coordinates are. I have found that 0 and 1 should be the X and Y coordinates, and 2 and 3 should be the width and height. But when I open the picture I get different X,Y coordinates of the yellow area... Do I have the right information? Thanks.

The bbox numbers can mean lots of things, depending on how the node developer has implemented it, e.g. the coordinates of the centre point of the bbox. Sometimes the x and y coordinates, or the width and height, are switched. It may also be that they are the coordinates of the resized image. Suppose you inject an image with resolution NxM, but the tensorflow model expects AxB. The input image will then be resized to AxB, so tensorflow will calculate bounding boxes for that resized image, not for your original image...
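If the resize scenario above applies, the bbox can be mapped back to original-image coordinates with a simple per-axis scale. This is only a sketch under that assumption; check how your particular tensorflow node actually preprocesses images before relying on it.

```javascript
// Sketch: scale a bbox computed on a resized AxB model input back to the
// original NxM camera image. Assumes bbox = [x, y, width, height] and a
// plain stretch-resize with no letterboxing (an assumption to verify).
function scaleBbox(bbox, modelSize, imageSize) {
    const sx = imageSize.width / modelSize.width;
    const sy = imageSize.height / modelSize.height;
    return [bbox[0] * sx, bbox[1] * sy, bbox[2] * sx, bbox[3] * sy];
}
```

For example, a bbox from a 640x480 model input would be doubled in both axes for a 1280x960 camera image.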

Hi, @jttjtt

As far as I'm aware, for Node-RED and the Tensorflow node the bbox array represents the x, y coordinates and the width and height as follows: [x, y, w, h], all floats. But as Bart says, that will also depend on the resolution of the picture.

But, if I'm wrong, please correct me.

Which different values do you get?



I hope this is not for issuing parking fines :roll_eyes:


@smcgann99 - no, that is not the purpose :slight_smile: We just need to check whether the car/truck is parked in the same area each time. So we have a rectangular area, and I want to know whether the car is "inside" this area.

@FireWizard52 I took a picture of the car in the same location four times and got these outputs:

0: 457.0689010620117
1: 397.5502395629883
2: 371.5623092651367
3: 493.79176139831543

0: 454.5329475402832
1: 397.44991421699524
2: 366.7844009399414
3: 491.7205274105072

0: 452.34094619750977
1: 395.6010568141937
2: 377.09049224853516
3: 493.55998635292053

0: 460.2616882324219
1: 390.64797163009644
2: 365.2397918701172
3: 498.88250827789307

Then I moved the car back around 60 cm and got these outputs:

0: 493.72135162353516
1: 358.54965448379517
2: 354.7510528564453
3: 494.5740222930908

take 2
0: 494.813232421875
1: 359.90325808525085
2: 357.15396881103516
3: 497.2400629520416

So you just have to define an allowed value interval for each of them. That creates an "allowance zone", and as long as all values are within their respective allowed intervals, the truck is parked correctly.

EDIT: For example, x is valid within 450 +/- 50 or so, y within 390 +/- 50, etc., and then fine-tune if required.
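Walter's centre-plus-tolerance suggestion can be sketched in a few lines. The centre values below are taken roughly from the sample readings earlier in the thread, and the 50-unit tolerances are the illustrative numbers from the EDIT; both would need tuning for a real setup.

```javascript
// Sketch: accept a bbox when every component [x, y, w, h] lies within
// centre +/- tolerance. Centres and tolerances are illustrative values.
function withinTolerance(bbox, centres, tol) {
    return bbox.every((v, i) => Math.abs(v - centres[i]) <= tol[i]);
}

// Rough centres from the four parked-car samples, +/- 50 on each
const centres = [455, 395, 370, 493];
const tol = [50, 50, 50, 50];
```

In a function node, `withinTolerance(msg.payload[0].bbox, centres, tol)` would give the desired true/false answer.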

So the coordinates should be like this?

I think so, it looks OK to me.

Hello @jttjtt

I do not understand why you have 4 different bbox values. I tested it several times with the example from my first post and always got exactly the same values, so the tensorflow output is stable (as expected). It can only happen if you use 4 different pictures as input, and I believe you do. So you need some tolerance, as Walter already suggested. The exact values you have to work out by trial and error, I think. This defines your "allowance zone", or even multiple zones.
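One way to approach the trial and error: collect a handful of bbox readings while the vehicle is parked correctly and derive the zone from their per-component min/max plus a safety margin. The `zoneFromSamples` helper and the margin value are illustrative, not from any node.

```javascript
// Sketch: build an allowance zone from sample bbox readings.
// Each sample is [x, y, w, h]; margin widens the zone a bit to absorb
// detection jitter (value is a guess to tune).
function zoneFromSamples(samples, margin) {
    const mins = [0, 1, 2, 3].map(i => Math.min(...samples.map(s => s[i])) - margin);
    const maxs = [0, 1, 2, 3].map(i => Math.max(...samples.map(s => s[i])) + margin);
    return { mins, maxs };
}
```

Feeding in the four readings from earlier in the thread with a margin of 10 would give intervals roughly matching Walter's suggested +/- 50 bands, just tighter.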

You may want to try to find the correct "allowance zone" by using this node: node-red-node-annotate-image (node) - Node-RED

You can insert possible values and, on an empty picture, see what it looks like.
See the example below:

The upper "custom" node will give you some information about the image size, which in my case is 1600 x 800. See: node-red-contrib-image-info (node) - Node-RED

In the Function node you can configure the object you want to annotate.
I have chosen a quite thick (red) line in order to make it clearer in this picture.

See the result below:

If you want to try it yourself, please find this test flow below:

[{"id":"8fdeb0421bcbac9d","type":"annotate-image","z":"39005939.ba6f4e","name":"Image annotated","fill":"","stroke":"#ffC000","lineWidth":5,"fontSize":24,"fontColor":"#ffC000","x":1910,"y":1560,"wires":[["8f1691d13d3e7b82","0c1dc3f405a22867"]]},{"id":"8f1691d13d3e7b82","type":"debug","z":"39005939.ba6f4e","name":"Msg 2","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"true","targetType":"full","statusVal":"","statusType":"auto","x":2110,"y":1560,"wires":[]},{"id":"32be6079786d212e","type":"fileinject","z":"39005939.ba6f4e","name":"","x":1490,"y":1520,"wires":[["a1020fafe390e963","9c6d86b90cf53489"]]},{"id":"0c1dc3f405a22867","type":"image","z":"39005939.ba6f4e","name":"Image Preview","width":160,"data":"payload","dataType":"msg","thumbnail":false,"active":true,"pass":false,"outputs":0,"x":2140,"y":1620,"wires":[]},{"id":"a1020fafe390e963","type":"function","z":"39005939.ba6f4e","name":"function 1","func":"msg.annotations = [\n    {\n        type: \"rect\",\n        bbox: [300,400,1000,400],\n        stroke: \"red\",\n        lineWidth: 50\n    }\n]\nreturn msg;","outputs":1,"noerr":0,"initialize":"","finalize":"","libs":[],"x":1700,"y":1560,"wires":[["8fdeb0421bcbac9d"]]},{"id":"9c6d86b90cf53489","type":"image-info","z":"39005939.ba6f4e","name":"","x":1890,"y":1500,"wires":[["434311fc31b3fccd"]]},{"id":"434311fc31b3fccd","type":"debug","z":"39005939.ba6f4e","name":"Msg 1","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"true","targetType":"full","statusVal":"","statusType":"auto","x":2110,"y":1500,"wires":[]}]


Thanks a lot. I will try your solution. I got different outputs because the input was always a real picture from the onvif camera node. So the same car was in the same position, but the picture was taken each time by the Onvif node, so it will be slightly different.

For me the values are also different when the image comes from a live camera. I assume small changes in light conditions or background trigger the tensorflow model slightly differently. Like in the example below, I can see the box move a bit around my car as images come in.

Just some further playing for fun, checking that my car is still there :wink:

In the function node I'm setting up the boundaries for my "allowed zone". Then on incoming messages I check that each component of the bbox is within the zone. If so, the car is parked correctly and is still there.

This is the simple code in my function node; most likely there is room for improvements and further add-on ideas:

// Only proceed if the model returned at least one detection
// (the original snippet had "== undefined" here, which would skip them)
if (msg.payload[0] !== undefined) {
    if (msg.payload[0]['class'] == 'car') {
        // Allowed [min, max] intervals for x, y, width and height
        let x = [-5, 10];
        let y = [25, 75];
        let w = [275, 325];
        let h = [250, 350];
        let bbox = msg.payload[0]['bbox'];
        let cntr = 0;
        if (bbox[0] > x[0] && bbox[0] < x[1]) { cntr += 1; }
        if (bbox[1] > y[0] && bbox[1] < y[1]) { cntr += 1; }
        if (bbox[2] > w[0] && bbox[2] < w[1]) { cntr += 1; }
        if (bbox[3] > h[0] && bbox[3] < h[1]) { cntr += 1; }
        if (cntr == 4) {
            node.warn('inside zone');
        } else {
            node.warn('outside zone');
        }
    }
}
return msg;