Memory leak, what am I doing wrong?

They are not old

Due to the (deliberate) slowness of the Debian software release cycles, and the fact that Node-RED is developing fast, they are the supported way of keeping it up to date on a Raspberry Pi.

If a contrib node doesn’t work, then that needs fixing

Nodejs v4 is no longer supported and is not getting any security updates.

It is recommended that you run the LTS version of nodejs

Are you passing your images ( as a buffer) through the rbe node? If so how have you configured your RBE node?

The rbe node:

    "id": "2e37060d.1bdf4a",
    "type": "rbe",
    "z": "b63cd4a9.77fbd8",
    "name": "",
    "func": "rbe",
    "gap": "",
    "start": "",
    "inout": "out",
    "property": "payload",
    "x": 230,
    "y": 100,
    "wires": [

Its input is the ftp-server node's output; its output goes to my filter function, which either drops the buffer by returning null, or passes it on (by doing return msg) to the MQTT output, which the python AI script subscribes to. This is ultimately why I posted the topic here: to find out if I'm failing to do something that is needed to trigger garbage collection.
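For anyone unfamiliar with how a Node-RED function node drops or forwards messages, here is a hypothetical sketch of such a filter (the camera IDs and the `ACTIVE_CAMERAS` set are illustrative assumptions, not the poster's actual code):

```javascript
// Hypothetical sketch of a Node-RED function-node filter.
// ACTIVE_CAMERAS stands in for whatever state tracks which
// PIR zones are currently active (an assumption for illustration).
const ACTIVE_CAMERAS = new Set(["cam1", "cam3"]);

function filterImage(msg) {
    // msg.topic is assumed to carry the camera ID; drop anything
    // from a camera not covered by an active PIR.
    if (!ACTIVE_CAMERAS.has(msg.topic)) {
        return null; // returning null discards the message
    }
    return msg; // returning msg forwards it to the MQTT output
}
```

In an actual function node only the body (using the implicit `msg`) is needed; the function wrapper here is just so the logic can stand alone.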

Still too early to tell, but since I installed v4.9.1 of nodejs about eight hours ago node-red is only using about 15% memory, whereas it was usually 40% or more by this point with v4.8.2.

Function is more important than “security updates” as without the function I could have 100% ironclad cyber security by not running the system!

Security and IoT stuff is a nightmare at the moment. My solution (I don't trust FLIR corporation's website to "gateway" my DVR images either) is strong passwords, and having the DVR and the Pi3B+ on a private network, pushing the AI images to my cell phone via "double NAT" behind my ISP-provided router. That is also why I keep Raspbian updated with apt-get update ; apt-get dist-upgrade

There is an issue open on this at the node-red-contrib-ftp-server github.

I can't run what doesn't work; node-red-contrib-ftp-server is supposed to work with 6.x, so I will try that if v4.9.1 doesn't fix the leak. I've no idea why Raspbian Stretch is distributing node-red with v4.8.2.

OTOH, with the stability of the python AI and the auto-restart of node-red, even with the leak it's only a couple of minutes of "down time" every 30+ hours, so it's tolerable as I'm not guarding Ft Knox!

The nodered.service file in /lib/systemd/system/ contains the --max_old_space_size=256 parameter that sets when the garbage collection kicks in… BUT it is only a guide, as the GC in node.js is "lazy" in that it doesn't kick in until the limits are exceeded - so a) it will always appear like it is leaking memory, as the GC won't kick in all the time, and b) if you are near the limit and then handle a few "large" objects you may well blow way past it. We picked 256 to try not to be too greedy with Pi memory, but feel free to tune as you see fit.
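One way to watch this "lazy" behaviour for yourself is to log V8 heap statistics over time, e.g. from a function node driven by an inject timer, or in a plain node script. This is a minimal sketch using the standard process.memoryUsage() API; with --max_old_space_size=256 you should expect heap usage to climb steadily and only drop back after a collection near the cap:

```javascript
// Snapshot current memory usage, converted to whole megabytes.
// heapUsed grows between collections; a sudden drop marks a GC run.
function heapSnapshot() {
    const m = process.memoryUsage();
    return {
        heapUsedMB: Math.round(m.heapUsed / 1024 / 1024),
        heapTotalMB: Math.round(m.heapTotal / 1024 / 1024),
        rssMB: Math.round(m.rss / 1024 / 1024),
    };
}

// Log one snapshot; in a flow you might call this every minute
// and watch the trend rather than any single value.
console.log(heapSnapshot());
```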

Will have a look at that ftp node - but yes, not ideal if they aren't fixing it yet.

Debian just updates their base packages very slowly - so Stretch just happens to be stuck on 4.8.2 - their next release (2019) will (currently) be based on node 8.x - but by then that will be mid-life (with 10.x being the current LTS version…) - Sadly, in order for us to have Node-RED pre-loaded on Pi we need to sit on what is there already, so 4.8 it is - for now. But as others have pointed out, it is now beyond end of life and not getting security patches - so not recommended for anything other than initial exploration.

Thanks for the information about where the setting is for when garbage collection kicks in; I will try reducing it if 4.9.1 hasn't fixed things. It seems better, but it's still too early to tell. I won't know for sure until probably Friday, unless it runs out of memory again sooner.

My DVR is great at 24/7 HD recording and pretty useless for everything else. My "snapshots" come in bursts of about 15/second for typically 10-30 seconds, and then typically several minutes of no images. Largest size is typically 160-180K per image in bright light, and typically 70-90K per image at night under IR. At night there are a few false PIR activations, though usually not really false, since when I looked at the 24/7 recording it always seemed to be a neighborhood dog, cat, raccoon, etc. that triggered it.

I won't delve into the boring details, but I've PIR motion detectors covering the camera fields of view, and I "trigger" the snapshots when a PIR goes active. Unfortunately it's all or nothing from the DVR, so my filter discards all the images that are not from cameras covering that PIR, which is most of them, since it's quite rare for multiple PIRs to be active unless someone is actually walking from one field of view to the other (they overlap by design). But since all the cameras are looking at outdoor scenes I get bursts of false PIR activations: on the west and east sides as the sun rises and sets, on the north and south sides as the sun passes directly overhead, and the whole thing is made worse by days like today with bright sun and fast-moving clouds.

So I'm hitting a peak of activity now where I get a false PIR activation every 4-20 minutes, and node-red is still below 20% memory usage. This time yesterday it was over 80%, and it ran out of memory a few hours later.

I do want to compliment the node-red developers as I find the default behaviors very well thought out and despite the fact node-red has been dying every 30-36 hours it took me a few days to realize it because of the auto restart. I was actually adding another flow to monitor “will” messages from the MQTT broker that was sending the PIR states, otherwise I might have remained blissfully unaware that node-red was dying periodically.

So far since midnight, 642 images have been sent to the AI by my filter, with two detections (when my wife left for work this morning). This means probably over 2500 images have passed through my flow, with over 1800 discarded.

As I said, I will try 6.x if this is still "leaking", and will upgrade to 8.x or 10.x as soon as node-red-contrib-ftp-server is fixed or I can find a replacement. From looking at the node-red-contrib-ftp-server source on GitHub, it looks like the issue is in the require() statements for the nodejs libraries he used:
var _ = require('lodash'),
    FtpServer = require('ftpd').FtpServer,
    ip = require('ip'),
    path = require('path'),
    memfs = require('memfs');

If anyone cares, I’ve put a simplified version of my Python Movidius AI detection script and node-red flow with all the ugliness to deal with my FLIR/Lorex DVR ripped out, up on github:

But be aware my instructions and wiki are a mess at the moment as I’ve not yet got the hang of github markup.

Looks like this is the underlying issue -
so yes, it would seem to say node 6 should still work with it… - but uurgh, what a car crash.

Thanks for the info, I’ve passed it on to the author of node-red-contrib-ftp-server via his github comments section on the open issue.

Very encouraging sign. Memory for node-red got up to 88% during the worst of the evening false PIR activations around 4-5 PM; then a rain shower moved in and stopped the false PIR storm, and memory decreased to 32%.

It never decreased more than a few percent during v4.8.2's near-monotonic march toward OOM. So v4.9.1 may contain the solution to my memory leak issue.

Time will tell.


Hello wb666greene

I know what I proposed in a previous post is not a solution, but maybe it helps you provisionally… Most probably you are not using the desktop, right? If that's the case, disable X, as this will free up a lot of your RAM.

On the other hand, and also provisional: why don't you limit the number of trigger messages per minute, or use some debounce or similar?


As I said, X has been disabled since my initial development, when it was very useful to have so I could see input and output images side by side. Once this was working I disabled X, and it now runs "headless".

There is no “debouncing” possible as the PIR sensors read temperature changes across a field of view and these are inevitable when the PIR field of view is outdoors. Believe me, it still is far fewer images than I would have to run through the system if I was using the FLIR/Lorex incredibly poor video motion detector to trigger the snapshots sent to the AI.

The fundamental problem is the lameness of the FLIR/Lorex DVR. I want every snapshot image it sends from a camera covering the active PIR field of view, so any "rate limit" is a non-starter; the problem is it's all or nothing, meaning if it sends anything it has to send the snapshots from all the cameras. I've seen "better" (no-name) DVRs that have a separate trigger input for each camera; I regret not buying one of those models, but that is water under the bridge.

Just a little follow up. The author of node-red-contrib-ftp-server has updated it to work with 8.x, but apparently the nodejs ftpd module is not working with 10.x.

I’ve upgraded node-red-contrib-ftp-server and nodejs to 8.x and if anything the leakage is a bit worse than with v4.9.2, but in any event its not a serious problem as after node-red crashes for lack of memory systemd restarts it in a a few seconds which is inconsequential in the overall scheme of things since my Lorex DVR has a 4+ second latency between triggering and sending a snapshot.

It's been running great despite the periodic crashes and restarts. I hope to try reducing the --max_old_space_size=256 parameter this weekend to make the garbage collection more aggressive. I hope my understanding of this parameter is not reversed :slight_smile:

For me, I noticed there is a memory leak when the flow has a debug node connected to it, even if it is disabled. I didn't have time to drill into the root cause though...

Hi @wb666greene,
Did you make any progress on this topic?
I'm also experiencing similar issues, see my other (remotely related) topic: Inspecting/Capping (per-Node?) memory usage.
Thank you!

I've not found a solution. Despite updates to node-red, nodejs, and the contrib nodes I use it still dies about once every other day.

But when node-red is killed for running out of memory, systemd restarts it and everything is running fine again within 10-15 seconds. My system has been running 24/7 since my last post in this thread, and I've never noticed the issue except by examining the log files I save daily.

I haven't tried removing the debug nodes; I'm still making a few tweaks and I need the nodes for testing/verifying any changes made. Maybe in a week or two I'll be done with the minor changes suggested by a few months of actual real-world use, and then try removing the debug nodes.

I see. Thanks for sharing all the details, including your AI project.
(Which sounds pretty cool, BTW!) :grinning:

I've deleted all debug nodes and the node-red memory usage still seems to be monotonically increasing. If it doesn't get OOM killed (I see it in the log files), I'll follow-up, but so far debug nodes or not doesn't seem to be the issue here.

I seem to have, perhaps accidentally, solved my memory leak.

In short, I had a serious and mysterious issue with my Lorex security DVR. To clear the problem I had to reformat the DVR and reconfigure from backup. While troubleshooting this, I messed up my node-red flow and had to delete it and restore from my exported json file backup. In the process I decided to add a bunch of extra debugging and clean some things up.

The net result is my flow appears to have stopped leaking memory after these changes.

The only change affecting the flow of the jpg image buffers from the ftp server was to remove an RBE node.

I'd had this in originally because I seemed to be sometimes getting duplicate images from the DVR. Without the RBE node filtering the buffers, my flow has been running for two days and the memory usage has remained at 14-16% throughout. I removed it because my new extra debugging code never showed any duplicates being sent.

Pretty much everything node-red has been upgraded once or more since I started this thread. My other major "cleanup" was to remove some duplicate MQTT input nodes and pass these states via flow context instead of duplicate subscription nodes.
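For reference, the flow-context pattern looks roughly like this. This is a sketch, not the poster's actual flow: the "pir_" key prefix and the plain Map standing in for Node-RED's flow context are assumptions for illustration; in a real function node `flow.set()`/`flow.get()` are provided by the runtime.

```javascript
// Stand-in for Node-RED's flow context so the sketch is self-contained.
const flowContext = new Map();
const flow = {
    set: (k, v) => flowContext.set(k, v),
    get: (k) => flowContext.get(k),
};

// Function node attached to the single MQTT-in node carrying PIR state:
function storePirState(msg) {
    flow.set("pir_" + msg.topic, msg.payload);
    return null; // nothing flows downstream; the state lives in context
}

// Any other function node in the flow can now read the state without
// needing its own MQTT subscription node:
function isPirActive(pirId) {
    return flow.get("pir_" + pirId) === "active";
}
```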

I hate it when problems mysteriously go away, so I'm wondering if there are any known issues with RBE node and binary buffers as payloads?

The suggestion earlier in this thread that debug nodes were involved in the leaks doesn't seem to be the case as I now have more debug nodes than ever, and the prior instance had none.

1 Like

Well it's never been reported until now :slight_smile:
but yes, it would have to hold onto the old image until a new one arrives in order to compare the two.
Also it works across different topics - so if you had changing topics on the msg (maybe for some ID or other) - then there would be an old image per topic... which, if the topics had something like a date in them (for no good reason I can think of :slight_smile:), could potentially grow indefinitely.
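A minimal model of this per-topic behaviour makes the failure mode obvious. This is an illustrative sketch of the idea, not the rbe node's actual source: it keeps the previous payload per topic to compare against, so if every message arrives with a unique topic (e.g. a filename with a timestamp in it), a new entry is stored per message and nothing is ever evicted.

```javascript
// Per-topic store of the last payload seen, as rbe must keep
// in order to "report by exception".
const previousByTopic = new Map();

function rbeModel(msg) {
    const prev = previousByTopic.get(msg.topic);
    previousByTopic.set(msg.topic, msg.payload); // old buffer retained per topic
    // Block the message only if the payload is unchanged for this topic.
    if (prev !== undefined && Buffer.compare(prev, msg.payload) === 0) {
        return null; // duplicate: blocked
    }
    return msg; // new or changed: passed through
}
```

With timestamped filenames as topics, `previousByTopic` gains one entry (holding a full image buffer) per message, which matches the monotonic memory growth described in the thread.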


I believe that this explains it! The topic is the filename from the ftp client (my Lorex security DVR), and it has the time and date embedded as a very hard-to-read string as part of the filename.

Before I discovered the node-red-contrib-ftp-server node, I had been trying to use vsftpd as the server and a watch node to get the filenames to feed to the AI as they came in. This was giving me duplicates, which the RBE mostly eliminated, but there was a race between vsftpd and the watch node that I could never eliminate, and it sometimes caused incomplete jpegs to be sent on for processing.

Thanks, I feel a lot better now that the mystery seems solved!

Now, for future reference, if this situation comes up again, it looks like the reset message could be the solution if sent at appropriate intervals. From re-reading the info pane for the RBE node, am I correct that if I send a message with an empty topic and msg.reset added to the message object, it would clear all the topics? Does msg.reset need any particular value? Presumably this reset would release all the storage.


Yes it would. Or just delete or move msg.topic to another property like msg.filename before the rbe node.
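That workaround takes only a few lines in a function node placed before the rbe node. A sketch (function names are illustrative; the reset behaviour is as described in the rbe node's info pane, with any truthy value for msg.reset assumed sufficient):

```javascript
// Move the ever-changing filename off msg.topic, so rbe stores
// only one previous value under a single (empty) topic.
function normalizeTopic(msg) {
    msg.filename = msg.topic; // keep the filename for downstream use
    delete msg.topic;         // rbe now compares under one topic
    return msg;
}

// Alternatively, periodically clear rbe's stored values by injecting
// a reset message with no topic set.
function makeResetMsg() {
    return { reset: true };
}
```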
