Object Detection using node-red-contrib-tfjs-coco-ssd

The update via the Palette Manager went well.
I came back with 1920x1080 FHD images, and I see a nice improvement. I hover around 82-84% memory usage (1 GB RAM), staying below 90% while the detections are running. The RPi's swap memory is not used even when spamming images. :star_struck:

The update looks promising. Wait and see in the long term ...

Thanks @dceejay

As I said above, smaller images would also help.


I also ran some tests and added a few lines to the tfjs.js file:

        node.send(m);                 // pass the detection results on
        node.warn(tf.memory());       // log tensor stats before cleanup
        tf.dispose(img);              // release the decoded image tensor
        node.warn(tf.memory());       // tensor count should drop back down
        tf.disposeVariables();        // free any remaining tf.Variables
        node.warn(tf.memory());       // confirm whether anything else was freed

The result:
tf.dispose(img) seems to clean up correctly; in my case the count goes up to 787 while analyzing and always returns to 786 when finished. Adding tf.disposeVariables() seems unnecessary; at least it makes no additional difference to memory.
Total system memory also seems to be freed up (from 111 free before the analysis to 142 free after it finished).
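For completeness, a slightly more defensive variant of the same clean-up would dispose the tensor in a finally block, so it is freed even if detection throws. This is only a minimal sketch with illustrative names, not the node's actual code:

    // Minimal sketch (illustrative names, not the node's actual code):
    // make sure the decoded image tensor is disposed even if detection fails.
    const tf = require('@tensorflow/tfjs-node');

    async function detectAndClean(model, imageBuffer, node) {
        const img = tf.node.decodeImage(imageBuffer);   // decode the image buffer into a tensor
        try {
            return await model.detect(img);             // run coco-ssd detection on the tensor
        } finally {
            tf.dispose(img);                            // always release the decoded image tensor
            node.warn(tf.memory());                     // numTensors should return to its baseline
        }
    }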




So I got lucky the first time! Yay!


@dceejay
yes, very good!!!

I have another "thing" I would like to ask about. As I wrote earlier, there is a problem on certain platforms, like arm64. I managed to solve it with a fix, but as soon as I update the node, my "fix" is of course overwritten and I have to repeat the delete-copy-paste of folders in the @tensorflow module.

This particular problem is clearly related to certain platforms such as arm64. I don't know whether the RPi4 is affected as well, or whether your node works out of the box there.

I'm just thinking out loud here: would it be possible to have some kind of platform identification when installing the node? I think the @tensorflow dependency is the part that would need it.

A first workaround could be not to touch @tensorflow at all on arm64 and just present a note that it has to be installed separately. The ultimate solution would of course be to detect the platform and then select the proper install procedure.

Just my thoughts
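To make that concrete, a hypothetical postinstall check (purely illustrative, not something the node currently ships) could be as simple as:

    // Hypothetical postinstall script (illustrative, not part of the node):
    // warn linux/arm64 users that the prebuilt tfjs-node binding is not available
    // and that @tensorflow/tfjs-node has to be installed separately.
    const os = require('os');

    if (os.platform() === 'linux' && os.arch() === 'arm64') {
        console.warn('Detected linux/arm64: prebuilt @tensorflow/tfjs-node binaries are not provided.');
        console.warn('Please install @tensorflow/tfjs-node manually with a suitable custom binary.');
    }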

Actually, installing another node (in this case via the Palette Manager) also messed things up. I installed node-red-contrib-bool-gate and afterwards node-red-contrib-tfjs-coco-ssd did not work. I found that the tfjs_binding.node file in "/home/user/.node-red/node_modules/node-red-contrib-tfjs-coco-ssd/node_modules/@tensorflow/tfjs-node/lib/napi-v5" had been removed during the install.

Copying it back from my saved folder and restarting the flow helped; everything is working again.
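As a small safety net, a startup check like the following (purely hypothetical, using the path from the install above) would at least flag the problem before the flow fails:

    // Hypothetical sanity check (not part of the node): verify that the native
    // binding survived the last palette install before deploying flows that need it.
    const fs = require('fs');
    const path = require('path');

    const bindingPath = path.join(
        process.env.HOME, '.node-red', 'node_modules', 'node-red-contrib-tfjs-coco-ssd',
        'node_modules', '@tensorflow', 'tfjs-node', 'lib', 'napi-v5', 'tfjs_binding.node'
    );

    if (!fs.existsSync(bindingPath)) {
        console.warn('tfjs_binding.node is missing - restore it from a backup or reinstall @tensorflow/tfjs-node.');
    }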

Sometimes luck makes it work the first time; it's nice when it happens :face_with_hand_over_mouth:

Hi @wiredquill,
can you try to update?

At this point it resolves the memory leak on RPi3 and arm64 devices.
Let us know if the RPi4 still consumes memory.

@dceejay Thanks for the correction!
And @SuperNinja Thanks for the info.
I just updated and YES, memory remains stable. To be continued in the coming days.
But clearly that's better. See the graph below.
At each detection there's a CPU and memory spike, which is normal.
But now the memory goes back down after each detection. Cool.

[graph: CPU and memory usage during detections]


Hi @krambriw
I did in fact raise this with the tfjs guys - https://github.com/yhwang/node-red-contrib-tf-model/issues/7
so hopefully they will think about it. In theory it may be possible for "us" to do it - but really it should be part of that project. (I'm happy for any PR if anyone wants to do it of course)

Hello @dceejay
Thank you, I read what you raised. I also think, as recommended, that it is better if they add something on their side. Until then, I know how to "repair" it with the simple fix I already have.

Just wondering, did your "standard" node install work out of the box on the RPi4 or is it necessary to use the dedicated fix as they explained?

Hi @krambriw,

apparently @wiredquill tested with an RPi4 and also had memory leak problems. It would be nice if he could get back to us about it.

no idea - I don't have a Pi4 - will let you folk tell me.

Yes, maybe it worked because there is still no 64-bit kernel running on the RPi4 if you go for Raspbian. But one day maybe...

Sorry for the delay, emergency at work.

I updated to 4.1 a day and a half ago and the system has not had to reboot since.
I'm currently sitting at around 25% memory utilization on a 4 GB Raspberry Pi.

I was not downsizing the images; I was sending it 1920x1080 images directly from my security cameras.

I'll keep an eye on it over the weekend.


@wiredquill

Hello,
when you installed this node on your Raspberry Pi 4, did it work right away or did you have to apply the fix described here?

For the Raspberry Pi 4, you need to provide a file named custom-binary.json under the scripts directory with the following contents:

    {
      "tf-lib": "https://s3.us.cloud-object-storage.appdomain.cloud/tfjs-cos/libtensorflow-cpu-linux-arm-1.15.0.tar.gz"
    }

I manually installed @tensorflow/tfjs-node. The flow worked fine the first time I tried it, installing via the GUI. The only reason I ran the manual install was to get rid of the error message below.

I did not need to manually create custom-binary.json

However, I do still receive this message in my Node-RED log:

============================
Hi there :wave:. Looks like you are running TensorFlow.js in Node.js. To speed things up dramatically, install our node backend, which binds to TensorFlow C++, by running npm i @tensorflow/tfjs-node, or npm i @tensorflow/tfjs-node-gpu if you have CUDA. Then call require('@tensorflow/tfjs-node'); (-gpu suffix for CUDA) at the start of your program. Visit https://github.com/tensorflow/tfjs-node for more details.

Yes, I also have this message on an RPi 3B. Is it worth doing?
Has anyone tried?
Is the gain noticeable?
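For anyone wondering whether the native backend is actually being used after installing it, here is a quick check (a minimal sketch, assuming @tensorflow/tfjs-node is installed):

    // Requiring tfjs-node registers the native C++ backend.
    const tf = require('@tensorflow/tfjs-node');

    // Prints 'tensorflow' when the native binding is active,
    // and 'cpu' (pure JavaScript) when it is not.
    console.log(tf.getBackend());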

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.