MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 scanStop listeners added. Use emitter.setMaxListeners() to increase limit #22

Hi

I have a flow in Node-RED that after some time stops initialising the BLE scanner node. I have a delay loop activating the scanner (noble node), switching between true and false at a set interval. Everything works well initially, but after a while the scan node remains in the 'started' state and nothing updates. The Node-RED console shows: "MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 scanStop listeners added. Use emitter.setMaxListeners() to increase limit".

I have read that this can be changed in Node.js, but I don't know which file I need to change. Any help would be great.

Thanks

Which node are you using to do the scanning? The issue will likely be inside that node and for its author to fix.

I posted this on GitHub but didn't get a reply. I don't know what else I can use, or whether I can change the author's script to increase the max listeners.

Any help would be great :slight_smile:

I wouldn’t hold your breath waiting for a response. That node looks like it has been abandoned. Last update was over 3 years ago.

I'm slightly surprised that the node stops working, but I haven't looked at the code. The message is only a warning, not an error, but there may be something that responds to it and shuts the node down. It's less likely that you are really running out of memory, because the symptoms would be worse.

The immediate problem could be fixed by doing as the message says:

(The default limit is 10 listeners, which is why the node complains when the count reaches 11.)

If the warning persists for large values of setMaxListeners, the node may not be doing its housekeeping properly and leaking memory as a result.

Where do we set this emitter.setMaxListeners() thing?

It has to be set in the node that is emitting the event, which means editing the source of the node that is producing them. But as noted it's normally just a warning.


I get these warnings too.


@WhiteLion this is a generic warning, so by itself, it doesn't give us enough information to help. In your flows, have you got more than ten instances of any particular contrib node?

I think I do. I really push Node-RED hard: my (light) switch flow (I have 12 of them) has 166 nodes, about 20 dashboard nodes, and about 80 function nodes with 3-40 lines of code each. Some are debug, delay, and subflow nodes...

This is what it looks like:

Most would say it's a bad use of Node-RED... but I don't care, it works for me :slight_smile:

Did you find where to set this? I have the same trouble and I cannot find where to define it... Thank you!

Sorry to resurrect this thread, but I found that one of my nodes suffers from the same problem. If I could work out which node causes it, that would really help me move forward. This is the output from my log:

(node:17) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 connecting listeners added. Use emitter.setMaxListeners() to increase limit
(node:17) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 connect listeners added. Use emitter.setMaxListeners() to increase limit
(node:17) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 close listeners added. Use emitter.setMaxListeners() to increase limit

How can I find what node:17 is?

Well, one way would be to disable parts of your flow to narrow down the suspect. You could also add a catch node connected to a debug node (set to display the 'complete msg object') and see if it gives any more information.
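If you want to pin down exactly where the 11th listener is registered, another option is to log the warning's stack trace. A sketch, assuming you can add a few lines near the top of your settings.js (alternatively, start Node-RED with Node's `--trace-warnings` flag, which prints the stack for every process warning):

```javascript
// Log the stack trace of any MaxListenersExceededWarning, so the
// file and line that registered the excess listener become visible.
// This uses Node's standard process 'warning' event.
process.on('warning', (warning) => {
  if (warning.name === 'MaxListenersExceededWarning') {
    console.error(warning.stack);
  }
});
```

The top frames of the printed stack should point into the contrib node's source under node_modules, which identifies the module responsible.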

What nodes are you using in the flow?

Thank you for your response.

Disabling parts of my flow looks like a lot of work, given the number of tabs involved. It's especially hard because there seems to be no consistent way to reproduce the warning messages in my case, so maybe it's not a big problem after all?

Anyhow, I'm using lots of modules, so posting the list here might help cross-reference it with others to narrow down one or more possible suspects:

"md5": "^2.2.1",
"node-red-contrib--cron-pkjq": "0.0.1",
"node-red-contrib-bigtimer": "~2.7.3",
"node-red-contrib-bool-gate": "~1.0.2",
"node-red-contrib-cast": "~0.2.17",
"node-red-contrib-castv2": "~4.0.3",
"node-red-contrib-eiscp": "0.3.3",
"node-red-contrib-elasticsearch-jupalcf": "~1.1.14",
"node-red-contrib-harmony-websocket": "~2.2.6",
"node-red-contrib-huemagic": "~3.0.0",
"node-red-contrib-influxdb": "~0.5.4",
"node-red-contrib-json": "~0.2.0",
"node-red-contrib-moment": "~4.0.0",
"node-red-contrib-msg-speed": "~2.0.0",
"node-red-contrib-openhab3": "^1.3.3",
"node-red-contrib-simpletime": "~2.10.0",
"node-red-contrib-smb": "~1.2.0",
"node-red-contrib-solaredge": "~0.1.0",
"node-red-contrib-throttle": "~0.1.6",
"node-red-contrib-timed-counter": "0.0.4",
"node-red-contrib-timeframerlt": "~0.3.1",
"node-red-node-aws": "~0.2.0",
"node-red-node-darksky": "~0.1.19",
"node-red-node-email": "~1.8.3",
"node-red-node-random": "~0.3.1",
"node-red-node-rbe": "~0.2.9",
"regression-fork-norounding": "1.4.0-11",
"thingzi-logic-timers": "~1.1.3",
"trend": "0.3.0"