node-red $ npm i node-red-contrib-tfjs-coco-ssd
npm error code ENOTEMPTY
npm error syscall rename
npm error path /home/pi/.node-red/node_modules/body-parser
npm error dest /home/pi/.node-red/node_modules/.body-parser-2MfwB2tK
npm error errno -39
npm error ENOTEMPTY: directory not empty, rename '/home/pi/.node-red/node_modules/body-parser' -> '/home/pi/.node-red/node_modules/.body-parser-2MfwB2tK'
npm error A complete log of this run can be found in: /home/pi/.npm/_logs/2025-12-07T00_48_12_036Z-debug-0.log
This is a problem npm has occasionally: after a failure of some sort it leaves temporary files lying around.
Delete that folder, and any other similar ones in node_modules with a random string on the end like the one in the error. Then try again on the command line so we can more easily see whether there is still a problem. Then restart Node-RED.
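The cleanup can be sketched as a small shell function; the node_modules path in the example call is an assumption, so adjust it to your own install:

```shell
# Sketch: list npm's leftover temporary directories inside node_modules.
# They are hidden directories with a random suffix, e.g. .body-parser-2MfwB2tK
list_npm_temp_dirs() {
    dir="$1"
    [ -d "$dir" ] || return 0
    find "$dir" -maxdepth 1 -type d -name '.*-*'
}

# Review the listed directories first, then remove with e.g.:
#   list_npm_temp_dirs "$HOME/.node-red/node_modules" | xargs rm -rf
list_npm_temp_dirs "$HOME/.node-red/node_modules"
```

Check the list before piping it to rm -rf, in case a legitimate scoped package matches the pattern.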
I tried installing again, but this time there was no error message; the process was just killed.
pi@piZero2:~ $ npm i node-red-contrib-tfjs-coco-ssd
⠴Killed
pi@piZero2:~ $
I had successfully installed it a few days ago on a Pi Zero 2 running 64-bit Pi OS Trixie. The only differences this time were that I had freshly installed the OS and increased the swap size to 2.0 GiB. Various other Node-RED nodes installed without problems; only coco-ssd failed.
pi@piZero2:~ $ free -h
               total        used        free      shared  buff/cache   available
Mem:           416Mi       268Mi       131Mi       9.2Mi        81Mi       147Mi
Swap:          2.0Gi       246Mi       1.8Gi
pi@piZero2:~ $ swapon
NAME       TYPE      SIZE   USED PRIO
/dev/zram0 partition   2G 245.9M  100
pi@piZero2:~ $
So, for further testing I commented out the instructions for expanding the swap size in the /etc/rpi/swap.conf.d/80-custom-size.conf file that I use, but all to no avail:
pi@piZero2:~ $ swapon
NAME       TYPE      SIZE   USED PRIO
/dev/zram0 partition 416M 166.9M  100
pi@piZero2:~ $ sudo nano /etc/rpi/swap.conf.d/80-custom-size.conf
pi@piZero2:~ $ npm i node-red-contrib-tfjs-coco-ssd
⠙Killed
pi@piZero2:~ $ npm i node-red-contrib-tfjs-coco-ssd
npm error code ENOTEMPTY
npm error syscall rename
npm error path /home/pi/node_modules/accepts
npm error dest /home/pi/node_modules/.accepts-FE35LBXw
npm error errno -39
npm error ENOTEMPTY: directory not empty, rename '/home/pi/node_modules/accepts' -> '/home/pi/node_modules/.accepts-FE35LBXw'
npm error A complete log of this run can be found in: /home/pi/.npm/_logs/2025-12-07T17_24_55_262Z-debug-0.log
pi@piZero2:~ $
Incidentally, removing the empty node-red-contrib-tfjs-coco-ssd folder from node_modules overcame the "Killed" issue.
Something is killing npm during the install, and that is what leaves the temporary folders behind, so you get a different error the next time round.
What do the commands node -v
and npm -v
give?
Have you run out of space on the SD card?
How long after starting the install does the Killed message appear?
Open two command windows and in one run sudo tail -f /var/log/syslog
and run the install in the other (after removing the temporary files again). See if there are any error messages in syslog.
If not then do the same again, but this time run top
in the other window and watch to see if there is anything unusual. You would expect to see npm using all the processor for some time.
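If syslog and top turn up nothing, note that on Linux a bare "Killed" usually means the kernel's out-of-memory (OOM) killer ended the process, and the kernel log records it. A minimal sketch of the filter, which on the Pi you would feed with dmesg or journalctl -k output:

```shell
# Sketch: filter kernel-log text for OOM-killer messages.
# On the Pi you would run:  sudo dmesg | oom_lines
oom_lines() {
    grep -iE 'out of memory|oom-kill|killed process' || true
}

# Demonstration against a sample kernel-log line:
printf 'Out of memory: Killed process 1234 (npm)\n' | oom_lines
```

The exact wording of the kernel message varies between kernel versions, hence the deliberately broad pattern.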
I have successfully installed it on a Zero. However, even if you managed to install it I suspect that a Zero will not have anywhere near enough power to do anything useful, though I have no direct evidence of this.
My SD card has 21 GB free. I have previously found that flows using the coco-ssd node would run OK for a while on a Pi Zero 2, but then crash. I perhaps naively assumed this was because the node is so resource-hungry that it uses up the physical RAM and even exceeds the default swap of 100 MB on Bookworm (which I had increased to 1 GB using dphys-swapfile). Hence, now on Trixie, the idea was to see whether the reliability of the flow could be improved by increasing the swap size beyond the physical RAM of the Pi Zero 2, to 2 GB. Trixie uses /etc/rpi/swap.conf, or configuration snippets in the /etc/rpi/swap.conf.d/ directory, to control the swap size. I am aware that swap is often recommended at 1.5 to 2x the physical RAM, so maybe 2 GB was overzealous, but even after restoring the configuration snippets to the default size and rebooting, coco-ssd failed to install.
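For reference, the Bookworm-era resize via dphys-swapfile mentioned above amounts to editing one value and re-running the tool; a sketch, with the 1024 MB figure only as an illustration:

```shell
# /etc/dphys-swapfile (Bookworm) -- CONF_SWAPSIZE is in MB
CONF_SWAPSIZE=1024

# Apply the change:
#   sudo dphys-swapfile swapoff
#   sudo dphys-swapfile setup
#   sudo dphys-swapfile swapon
```

On Trixie this tool is replaced by the /etc/rpi/swap.conf mechanism described above, so the snippet applies to Bookworm only.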
OK, I stopped fiddling with swap and successfully installed coco-ssd on a pristine Pi OS 64-bit Trixie. However, I do get this warning:
2025-12-08 07:39:02.085374: W tensorflow/core/framework/cpu_allocator_impl.cc:82] Allocation of 8640000 exceeds 10% of free system memory.
The scenario in which I am using your node is as follows. I have a Raspberry Pi Zero with a PiCam running motionEye. Another Raspberry Pi runs the main flows; when the Pi with the PiCam saves a motion file, the Pi with the main flow is notified via a webhook from motionEye. The main flow then extracts the URL of the still image on the PiCam Pi and sends the URL via an MQTT link to the Pi Zero 2 to analyse the image. The objective is to receive notifications on my mobile only for images containing people, as previously I was getting a notification every time a cloud cast a shadow!
Apparently the zram drive is compressed, thus offering more space than expected, at the cost of extra CPU usage.
Generally on a small Pi I have a couple of CPU cores doing nothing so this is no loss.
Swapping is faster since no SD card access is involved, and the setup greatly increases swappiness, making more of the available memory free at any one time.
The setup script allocates 50% of RAM to swap space, leaving just 0.25GB on the Zero2
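Those knobs can be inspected directly; a sketch, assuming the zram device is named /dev/zram0 as in the outputs below:

```shell
# Current swappiness: higher values make the kernel swap out more eagerly
cat /proc/sys/vm/swappiness

# On a zram system you can also inspect the device itself:
#   zramctl                              # compressed vs. uncompressed sizes
#   cat /sys/block/zram0/comp_algorithm  # compression algorithm in use
```

zramctl (from util-linux) is the easiest way to see the actual compression ratio, i.e. how much RAM the swapped pages really occupy.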
This is a Zero 2 running Bookworm and zram. It has the standard /var/swap file, almost never used, plus zram in memory.
pi@Zero2Pink:~ $ free -h
               total        used        free      shared  buff/cache   available
Mem:           419Mi       201Mi        27Mi       0.0Ki       189Mi       162Mi
Swap:          624Mi       2.0Mi       621Mi
pi@Zero2Pink:~ $ cat /proc/swaps
Filename      Type       Size    Used   Priority
/var/swap     file       102396  0      -2
/dev/zram0    partition  536748  2304   15
And this one is running a fresh Trixie Lite 64-bit. I can't understand the size of zram0, but it looks like they have abolished swap space on the SD card and gone for just zram.
pi@Zero2Red:~/.node-red $ free -h
               total        used        free      shared  buff/cache   available
Mem:           416Mi       215Mi       115Mi       500Ki       141Mi       200Mi
Swap:          415Mi        36Mi       379Mi
pi@Zero2Red:~/.node-red $ cat /proc/swaps
Filename      Type       Size    Used   Priority
/dev/zram0    partition  425980  37180  100
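Incidentally, the zram0 size above makes sense once you notice that /proc/swaps reports sizes in KiB; a quick check:

```shell
# /proc/swaps sizes are in KiB; convert the zram0 figure to MiB
kib=425980
echo $(( kib / 1024 ))   # prints 415, matching the 415Mi shown by free -h
```

415 MiB is nearly all of the 416 Mi of physical RAM shown by free -h, so this Trixie install appears to size zram at roughly 100% of RAM rather than the 50% the Bookworm setup script used.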
If only there were a Node-RED benchmark test, I could tell how much difference it makes and identify the workloads for which it is unviable.
OK, so I had got completely the wrong end of the stick, so to speak. I had assumed that zram on Trixie still used external storage, compressed to speed up transfer, rather than partitioning the device's physical RAM.
I have given up trying to get anything useful from the Pi documentation or forum.
I only know that on Trixie the installation of zram failed because it was already running.