Error message on restarting Node-RED

Recently I've had problems with restarting Node-RED after a version update or a node update: it was taking up to 12 hours to start running. In the logs I tracked down a device that was causing it to reinitialise and eliminated that, but it still takes about 30-40 minutes, which really doesn't seem logical.
I'm seeing this warning:

2025-02-17T15:25:58 (node:7) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 EVENT_SSE_ON_UPDATE_DEVICE_STATE listeners added to [EventBridge]. MaxListeners is 10. Use emitter.setMaxListeners() to increase limit

2025-02-17T15:25:58 (node:7) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 EVENT_SSE_SECURITY_CHANGE listeners added to [EventBridge]. MaxListeners is 10. Use emitter.setMaxListeners() to increase limit

2025-02-17T15:25:58 (node:7) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 EVENT_SSE_ALARM_CHANGE listeners added to [EventBridge]. MaxListeners is 10. Use emitter.setMaxListeners() to increase limit

and although I've searched the documentation I haven't found how to track this down - do I need an id to search for? I've used Node-RED for about a year now but I lack background knowledge on many aspects, so I'd be grateful for a hand from someone who knows.
I'm running Node-RED on an iHost, which is a Docker installation, but I have no CLI access.

Assuming that you haven't added a node yourself that uses something called "EventBridge", I would guess this is something related to the iHost. Is there a SONOFF iHost forum? You might need help there.

Can you tell from the logs where your remaining delay is happening? You may have to up the logging level. Such a long delay would normally relate to something trying to do a network request of some kind. Given that TCP timeouts are typically around 5 minutes, a non-responding service that is checked multiple times might cause the delay.
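
To illustrate the sort of thing I mean, here is a standalone Node.js sketch (nothing from your flows - the address is made up to simulate a dead host): a plain TCP connect to a host that silently drops packets can sit for minutes before the OS gives up, unless the code sets its own timeout.

```javascript
const net = require('net');

// 10.255.255.1 is a non-routable address used here purely to simulate
// a non-responding service; the connect attempt will just hang.
const socket = net.connect(1880, '10.255.255.1');

// Without this, the OS-level TCP timeout (often minutes) applies.
// With it, we give up after 5 seconds of inactivity instead.
socket.setTimeout(5000, () => {
  console.log('no response after 5s, giving up');
  socket.destroy();
});

socket.on('connect', () => console.log('connected'));
socket.on('error', (err) => console.error(err.message));
```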


Thanks for the reply - yes, I will take this to the Sonoff forum. But is there no way of telling from (node:7) which node is being referred to?

Yes, I do have an Openweather and a Stormglass connection, with the inject nodes ticked
"Inject once after 30 seconds" and then set to 30 minute intervals.
I presume that if I stuck a delay node after that, with e.g. a 10 minute delay, that would rule that one out?

I don't think so. That reference isn't a node id AFAIK - the 7 in (node:7) is the process id of the Node.js process. So you can only find it by turning up the logging levels or running Node-RED in debug mode and trying to trap the event.
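
One way to trap it: Node itself can print a stack trace for every process warning, which shows which module registered the listeners. If you can't pass the --trace-warnings flag to node on the iHost, the same thing can be done from JavaScript - a sketch (you'd need somewhere to run it inside the Node-RED process, e.g. near the top of settings.js):

```javascript
// Print a stack trace for each process warning (including
// MaxListenersExceededWarning). Equivalent to starting node
// with the --trace-warnings flag.
process.on('warning', (warning) => {
  console.error(`${warning.name}: ${warning.message}`);
  console.error(warning.stack);
});
```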

In truth, it isn't an error, only a warning, and generally has no real impact. Combined with your long restart times, though, it potentially indicates that a node or other process is failing to close its event listeners correctly. That isn't the only reason for seeing the warning, I'm afraid, so it might be a red herring.
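
For background, the warning is raised by Node.js itself whenever more than 10 listeners attach to the same event on one emitter. A minimal standalone sketch (the event name is copied from your log, but this is not Node-RED or iHost code):

```javascript
const { EventEmitter } = require('events');

const bridge = new EventEmitter(); // stands in for the iHost "EventBridge"

function onUpdate() { /* handle the event */ }

// Adding an 11th listener for the same event trips the default
// cap of 10 and prints the MaxListenersExceededWarning.
for (let i = 0; i < 11; i++) {
  bridge.on('EVENT_SSE_ON_UPDATE_DEVICE_STATE', onUpdate);
}

// A well-behaved node removes its listener again when it is
// closed or redeployed, so the count never creeps up:
bridge.removeListener('EVENT_SSE_ON_UPDATE_DEVICE_STATE', onUpdate);
```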

For now, disable the appropriate flows/nodes and try restarting a couple of times to see if that is indeed the issue. Do that before messing with delays. Also, it is relatively rare for public weather services to update so often; check the weather model in use - you might find it only updates hourly.

I would expect those to only make a single call each, though, so you wouldn't expect a delay of more than 5-10 minutes at most. Also, as those are presumably HTTP(S) calls, the 5-minute timeout wouldn't apply since HTTP(S) typically has a much shorter timeout. It is direct TCP calls that would be more likely to have long timeouts.
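
For comparison, HTTP requests in Node.js can be given an explicit socket timeout, and well-written weather nodes generally do something along these lines (a sketch with a made-up URL, not the actual code of either node):

```javascript
const https = require('https');

// Hypothetical endpoint; the point is the explicit 10s timeout,
// far shorter than a raw TCP connect would wait.
const req = https.get('https://api.example.com/weather', { timeout: 10000 }, (res) => {
  res.resume(); // drain the body; we only care about timing here
});

req.on('timeout', () => req.destroy(new Error('request timed out')));
req.on('error', (err) => console.error(err.message));
```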

You really need to turn up the logging to find out where the delay is. You might even need to turn on the "audit" option in settings. Then restart Node-RED while watching the log and find out where you have long delays.
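
In the standard settings.js that is the logging section - something along these lines (check your own file; on the iHost it may be managed for you):

```javascript
// settings.js (Node-RED) - raise the console log level and enable audit
logging: {
    console: {
        level: "trace",   // default is "info"; "debug"/"trace" show much more detail
        metrics: false,
        audit: true       // also log admin/runtime API events
    }
}
```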

Also run a system monitor (top if you are using Linux) and check whether the processor is being overloaded while it is going slow.


On the iHost I can see the CPU, and it's sometimes surging to 100% while trying to restart.

Good call by Colin. Try to link that to progress in the log.

Well, I took your advice and disabled all inject->HTTP nodes, and the restart time comes down to 5-10 minutes! Getting somewhere...
Looking at the CPU on the iHost dashboard as @Colin commented, I can see a couple of spikes to 100%, and the logs seem to bear that out with 2 reinitialisations, so I obviously need to follow this up more thoroughly with logging.
Will also be sending logs with the MaxListenersExceededWarning messages to Sonoff to see if they can identify any problem on their side.
Thanks to you both!


Does it stay at 100% for a significant time? A few seconds at 100% is not unusual. If it is there for tens of seconds then there is something wrong.