Object Detection using node-red-contrib-tfjs-coco-ssd

The problem is the very loose calculation of the positions of the lines and of the rectangle for the text underneath.

We don't have the luxury of a client-side DOM with a plethora of elements and functions - at server side, we have a bitmap of pixels and we have to calculate which pixels to set.

With a bit more work, we could easily determine, from the rectangle's coordinates in relation to the bitmap extent, where to place the string appropriately.
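For example, something like this (just a sketch, not the node's actual code) using jimp, which node-red-contrib-image-tools already pulls in - the function and variable names are only for illustration:

    const Jimp = require('jimp');

    // Sketch: print a label for one bounding box, keeping it inside the bitmap.
    async function annotate(buffer, bbox, label) {
        const image = await Jimp.read(buffer);
        const font = await Jimp.loadFont(Jimp.FONT_SANS_16_BLACK);
        const [x, y] = bbox;                                   // box top-left corner in pixels
        const textWidth = Jimp.measureText(font, label);
        const textHeight = Jimp.measureTextHeight(font, label, textWidth);
        // Put the label above the box if there is room, otherwise just inside it,
        // and clamp the x position so the string never runs off the right edge.
        const tx = Math.max(0, Math.min(x, image.bitmap.width - textWidth));
        const ty = y >= textHeight ? y - textHeight : y;
        image.print(font, tx, ty, label);
        return image.getBufferAsync(Jimp.MIME_JPEG);
    }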

Edit...
On the lighter side, we (humans) can see it's a person haha.

(the annotation is really only for visual debugging - I think :slight_smile: ).

I just have to tell you I have finally got something also working for my NVIDIA Jetson Nanos (arm64)!!! So happy!

I raised an issue earlier regarding missing support for tfjs-node on arm64 platforms

Luckily for me, @yhwang kindly reached out and provided a solution


Maybe this is useful for RPi4 users as well?

Anyway, now I can run Node-RED on my Jetson Nanos with tfjs-coco-ssd detection built-in!

Can you share this flow please?

Of course, right away...

I found out that some additional nodes are also needed. I installed them via the palette manager before I imported the flow (image-tools you already have, I think :wink: )

Best regards, Walter

"dependencies": {
"@tensorflow/tfjs-node": "1.4.0",
"node-red": "^1.0.3",
"node-red-contrib-browser-utils": "0.0.9",
"node-red-contrib-image-tools": "^0.2.5",
"node-red-contrib-post-object-detection": "^0.1.2",
"node-red-contrib-tf-function": "^0.1.0",
"node-red-contrib-tf-model": "^0.1.6"

[{"id":"1545e935.999977","type":"jimp-image","z":"bfd6731.db3089","name":"load the image","data":"payload","dataType":"msg","ret":"buf","parameter1":"","parameter1Type":"msg","parameter2":"","parameter2Type":"msg","parameter3":"","parameter3Type":"msg","parameter4":"","parameter4Type":"msg","parameter5":"","parameter5Type":"msg","parameter6":"","parameter6Type":"msg","parameter7":"","parameter7Type":"msg","parameter8":"","parameter8Type":"msg","parameterCount":0,"jimpFunction":"none","selectedJimpFunction":{"name":"none","fn":"none","description":"Just loads the image.","parameters":[]},"x":360,"y":140,"wires":[["8721e9e0.e949e8","d775e36d.cc825","11e3829b.dec12d"]]},{"id":"8721e9e0.e949e8","type":"image viewer","z":"bfd6731.db3089","name":"Original Image viewer","width":"320","data":"payload","dataType":"msg","x":940,"y":140,"wires":[[]]},{"id":"5606b167.a042b","type":"image viewer","z":"bfd6731.db3089","name":"With bounding boxes","width":"320","data":"payload","dataType":"msg","x":660,"y":570,"wires":[[]]},{"id":"7ceb8aa5.98cbe4","type":"bbox-image","z":"bfd6731.db3089","name":"bounding-box","x":570,"y":500,"wires":[["5606b167.a042b"]]},{"id":"dd52b9b3.e03978","type":"change","z":"bfd6731.db3089","name":"objects","rules":[{"t":"set","p":"complete","pt":"msg","to":"true","tot":"bool"}],"action":"","property":"","from":"","to":"","reg":false,"x":360,"y":430,"wires":[["11e3829b.dec12d"]]},{"id":"bcf6963a.e2f4e8","type":"post-object-detection","z":"bfd6731.db3089","classesURL":"https://s3.sjc.us.cloud-object-storage.appdomain.cloud/tfjs-cos/cocossd/classes.json","iou":"0.5","minScore":"0.5","name":"post-processing","x":360,"y":360,"wires":[["dd52b9b3.e03978"]]},{"id":"d775e36d.cc825","type":"tf-function","z":"bfd6731.db3089","name":"pre-processing","func":"const image = tf.tidy(() => {\n  return tf.node.decodeImage(msg.payload, 3).expandDims(0);\n});\n\nreturn {payload: { image_tensor: image } };","outputs":1,"noerr":0,"x":360,"y":210,"wires":[["8bee1dc1.4ecca"]]},{"id":"8bee1dc1.4ecca","type":"tf-model","z":"bfd6731.db3089","modelURL":"https://storage.googleapis.com/tfjs-models/savedmodel/ssdlite_mobilenet_v2/model.json","outputNode":"","name":"COCO SSD","x":350,"y":280,"wires":[["bcf6963a.e2f4e8"]]},{"id":"11e3829b.dec12d","type":"function","z":"bfd6731.db3089","name":"","func":"let queue = flow.get('queue');\nif (queue === undefined) {\n    queue = [];\n    flow.set('queue', queue);\n}\n\nif (msg.complete === undefined) {\n    queue.push(msg.payload);\n    node.done();\n} else {\n    const image = queue.shift();\n    node.send(\n        {\n            payload: {\n                objects: msg.payload,\n                image: image\n            }\n        });\n}","outputs":1,"noerr":0,"x":565,"y":430,"wires":[["7ceb8aa5.98cbe4"]],"icon":"node-red/join.svg","l":false},{"id":"f8135785.7c4808","type":"mqtt in","z":"bfd6731.db3089","name":"","topic":"epic","qos":"2","datatype":"auto","broker":"a5576de4.82b","x":140,"y":140,"wires":[["1545e935.999977"]]},{"id":"ff74da07.dd21f8","type":"fileinject","z":"bfd6731.db3089","name":"","x":150,"y":100,"wires":[["1545e935.999977"]]},{"id":"a5576de4.82b","type":"mqtt-broker","z":"","name":"","broker":"192.168.0.240","port":"1883","clientid":"","usetls":false,"compatmode":true,"keepalive":"60","cleansession":true,"birthTopic":"","birthQos":"0","birthPayload":"","closeTopic":"","closePayload":"","willTopic":"","willQos":"0","willPayload":""}]

Ah now I understand. It uses node-canvas for the drawing. So it inherits the canvas capabilities - good workaround for the limited drawing capabilities of jimp.

Do you mean you now use the tf-model node instead of the coco-ssd node? And is a Jetson Nano a better choice (instead of an RPi 4) for this kind of stuff?

The analysis is not always correct.
E.g. the analysis of this picture results in a motorcycle instead of a car:

I have told my wife so many times that she should be careful when parking our Volkswagen :thinking:

Sorry for this amateurish interruption of a professional discussion ...

In addition, it was an almost new car! :laughing:

Yes, I too see the same problem: plants are detected as people and create false alerts through the loudspeaker of the house at 2:00 am: "There is a person in the garden" :woozy_face:
While researching the other flow examples, I found this one:


It is based on Teachable Machine by Google. Using a webcam, we capture the object from various angles, backgrounds, and lighting conditions... To improve the model, we can add more photos.
The result is a TensorFlow.js model, whose URL must be entered in the detection node, or it can be saved locally. So simple.
To test ...

Yes, currently on the Jetson Nano I use the tf-model node, and it is stated that the GPU is used. To make the coco-ssd node work, I suspect it has to use the updated @tensorflow/tfjs-node, but I do not know how to make that happen. I have some ideas to experiment with, like running "npm install" in the node-red-contrib-tfjs-coco-ssd directory to rebuild the bindings, but I don't know.

EDIT: I do not know, but I do suspect that it is problematic to use the node-red-contrib-tfjs-coco-ssd node on an RPi4. Maybe someone has already tried?

Is the Jetson Nano a better choice than the RPi4 for this kind of thing? Well, I don't know. What I know is that the Nano can do impressive real-time video analysis at high speed when using other tools & libraries specifically written for it. But I'm not sure its power is really utilized when running everything through NR.

So very true :smiley:
On the other hand, my wife once worked with a company handling customer fleets (car leasing) and you cannot believe how many calls started "My wife...". In reality, it was not the wife...

Very interesting!!!

This decrease in inferencing time brings the Raspberry Pi 4 directly into competition with both the NVIDIA Jetson Nano and the Movidius-based hardware from Intel

For the most curious, here is the link to the comparison (very instructive). The difference between TensorFlow and TensorFlow Lite is obvious:

So I managed to make the coco-ssd-node work on Jetson Nano!!! This is how it worked for me:

Install both nodes:

In the folder "/home/user/.node-red/node_modules/node-red-contrib-tfjs-coco-ssd/node_modules/@tensorflow" delete the following folders

  • tfjs-node
  • tfjs-converter

and then replace them with the same folders from "/home/user/.node-red/node_modules/@tensorflow".

After this the coco-ssd node also works on arm64!
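If you prefer to script the folder swap, something along these lines should do it (just a sketch using the paths from the post above; fs.cpSync needs Node 16.7+, otherwise just use the file manager or cp/rm - and back up first):

    const fs = require('fs');

    // Source: the arm64 builds installed at the Node-RED user directory level
    const src = '/home/user/.node-red/node_modules/@tensorflow';
    // Destination: the copies bundled inside the coco-ssd node
    const dst = '/home/user/.node-red/node_modules/node-red-contrib-tfjs-coco-ssd/node_modules/@tensorflow';

    for (const dir of ['tfjs-node', 'tfjs-converter']) {
        fs.rmSync(`${dst}/${dir}`, { recursive: true, force: true });   // delete the bundled folder
        fs.cpSync(`${src}/${dir}`, `${dst}/${dir}`, { recursive: true }); // replace it with the arm64 one
    }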

I was able to test the object detection, but on the Pi 3 the memory consumption keeps increasing without ever going down.
After about 10 detections it hangs and NR has to be restarted.
Was this already mentioned or resolved in another post? If so, sorry :face_with_hand_over_mouth:
If not, do you have any ideas to solve this problem?
I attach a CPU & memory visualization for each detection.

[screenshot: memory & CPU usage]

[screenshot: Node-RED log]

I'm having the exact same issue on an RPi 4.

I had to write a flow that restarts Node-RED once 60% of the RAM is consumed.

Which version of the node are you using? Can you try shrinking the images before sending them to the node? (While it shouldn't matter, the models only need about 320 x 240 px.)

Here I have programmed a resize of the images captured by the cameras to 320x240. For the past 2 hours, I have not exceeded 90% of memory usage (1 GB RAM). I will let it run for a few days to see...
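In case it helps others, such a resize boils down to something like this with jimp (a sketch only; in Node-RED it could be done in the existing jimp-image node, or in a function node with jimp exposed via functionGlobalContext):

    const Jimp = require('jimp');

    // Sketch: shrink a camera frame to 320x240 before it reaches the detection node
    async function shrink(buffer) {
        const image = await Jimp.read(buffer);
        image.resize(320, 240);                       // the ~320 x 240 px mentioned above
        return image.getBufferAsync(Jimp.MIME_JPEG);  // smaller buffer for msg.payload
    }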

On the other hand, more and more people complain of memory leaks when using tfjs:



It seems that using the tf.tidy() method and the tf.disposeVariables() function solves the problem.
@dceejay does that mean anything to you? Can it be useful to solve our problem?
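For reference, the pattern from those issues in a minimal standalone form (a sketch, assuming @tensorflow/tfjs-node and @tensorflow-models/coco-ssd are installed; the names are illustrative, not the node's actual code):

    const tf = require('@tensorflow/tfjs-node');
    const cocoSsd = require('@tensorflow-models/coco-ssd');
    const fs = require('fs');

    async function detectOnce(path) {
        const model = await cocoSsd.load();
        // tf.tidy() disposes every intermediate tensor created inside the callback
        const input = tf.tidy(() => tf.node.decodeImage(fs.readFileSync(path), 3));
        const before = tf.memory().numTensors;
        const predictions = await model.detect(input);
        // Tensors kept outside tidy (or returned from async calls) must be
        // released explicitly once they are no longer needed.
        tf.dispose(input);
        console.log(predictions.length, before, tf.memory().numTensors);
    }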

worth a try - pushed a version 0.4.1
not sure if I've implemented it correctly - so any help / advice gratefully received...

The update via the Palette went well:
[screenshot: palette update]
I went back to 1920x1080 FHD images, and I see a nice improvement. I hover around 82-84% memory occupation (1 GB RAM), staying below 90% while detections are running. The RPi's swap memory is not used even when spamming images. :star_struck:

The update looks promising. Wait and see in the long term ...

Thanks @dceejay

As I said above, smaller images would also help.

I also did some tests and added some lines to the tfjs.js file:

        node.send(m);
        node.warn(tf.memory());      // tensor/byte counts before cleaning up
        tf.dispose(img);             // release the decoded input tensor
        node.warn(tf.memory());      // counts after tf.dispose()
        tf.disposeVariables();       // release any tf.Variable instances
        node.warn(tf.memory());      // counts after tf.disposeVariables()

The result:
tf.dispose(img) seems to clean up correctly: in my case, the tensor count goes up to 787 while analyzing and then always returns to 786 when finished. Adding tf.disposeVariables() seems not to be necessary; at least it makes no additional difference to memory.
Also, total memory seems to be freed up (going from 111 free before the analysis to 142 free when finished).

