I’m using a Raspberry Pi 3B+ to add an AI “person detector” to a commercial video security DVR. Its streaming protocols are proprietary, so I use its “ftp snapshots” feature as the source of images to analyse.
I use a simple Node-RED flow with node-red-contrib-ftp-server to receive the images into a msg.payload buffer and pass them to my AI Python script via MQTT. When the AI, running on a Movidius NCS, detects a person in the image, the Python script writes the image with a box drawn around the person to a USB memory stick and posts an MQTT message with the path to the file. A second Node-RED flow does one of three things with this MQTT message depending on the alarm system state: ignore it, play an audio alert using espeak, or send a text message and email with the photo attached.
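The second flow’s three-way dispatch is just a function node keyed on the alarm state. A minimal sketch of what I mean, with a hypothetical `alarmState` flow-context key and output ordering of my own invention (not anything standard):

```javascript
// Node-RED function node logic (3 outputs): route a detection message
// by alarm state. Output 1 -> espeak audio alert, output 2 -> text/email.
// "alarmState" is a flow-context key I set elsewhere (made-up name).
function routeDetection(msg, alarmState) {
    if (alarmState === "disarmed") {
        return [null, null];   // ignore the detection entirely
    }
    if (alarmState === "audio") {
        return [msg, null];    // play the audio alert only
    }
    return [null, msg];        // armed: send text message + email with photo
}

// Inside the actual function node the body would be:
//   const state = flow.get("alarmState") || "disarmed";
//   return routeDetection(msg, state);
```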
It all works well, but because of the lameness of my security DVR I have to pass three or four times as many images through node-red-contrib-ftp-server as I actually want fed to the AI. So I’m wondering whether I need to do something to explicitly free a buffer at the end of a flow, or to trigger garbage collection. The msg.payload buffer either terminates at the MQTT output node that passes it to the Python script, or it is discarded in my filter function, which returns null to the MQTT node instead of the msg carrying the buffer.
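For context, the filter function node is essentially the following. As far as I understand Node-RED, returning null should be enough to let the image Buffer be garbage-collected, though I’ve also wondered about nulling the payload first. This is a sketch with a stand-in counter-modulo condition, not my real selection logic:

```javascript
// Node-RED function node logic: pass roughly 1 image in N to the AI,
// drop the rest. The counter-modulo test is a placeholder condition.
function filterFrame(msg, ctx, everyN) {
    const count = (ctx.count || 0) + 1;
    ctx.count = count;
    if (count % everyN !== 0) {
        msg.payload = null;   // drop our reference to the image Buffer
        return null;          // returning null ends the flow for this message
    }
    return msg;               // Buffer travels on to the MQTT out node
}

// In a real function node this would use context.get("count") /
// context.set("count", ...) rather than a plain object:
//   return filterFrame(msg, { count: context.get("count") }, 4);
```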
After typically 20-36 hours Node-RED dies; it restarts automatically and starts working again, with no harm to the Python script, and AI detection resumes once Node-RED is back up. top shows my Node-RED process starting at about 15% memory usage and continually increasing, rarely settling back a few percent, eventually hovering around 85% for many hours until there is no more RAM available and, presumably, the OOM killer stops it, after which systemd seems to restart it and the cycle repeats.
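To narrow down whether it’s the V8 heap or external Buffer memory that grows, I’ve considered logging `process.memoryUsage()` (a standard Node.js API) from an inject-triggered function node wired to a debug node. A minimal sketch of the formatting helper:

```javascript
// Format Node's process.memoryUsage() result for a debug node, in MB.
// If heapUsed grows, JS objects are being retained; if external/rss grow
// while the heap stays flat, it's Buffers held outside the V8 heap
// (typical for image payloads).
function formatMemory(mem) {
    const mb = (bytes) => (bytes / 1048576).toFixed(1) + " MB";
    return "rss=" + mb(mem.rss) +
           " heapUsed=" + mb(mem.heapUsed) +
           " external=" + mb(mem.external);
}

// In a function node:
//   msg.payload = formatMemory(process.memoryUsage());
//   return msg;   // wire to a debug node, trigger with a repeating inject
```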