Looong time user of Node-RED here, and I have been amazed by its performance over the years, even under my elaborate and inefficient flows. So I have largely been spared having to find bottlenecks and optimize.
I recently remade several old flows and noticed that Node-RED now slows down after a little while, to the point where it barely manages to (for example) reload the UI or update HomeKit at all. I may very well have done something stupid somewhere, but it could also be one of a couple of nodes I swapped out.
I’ve watched logs and tried disabling this and that, but I can’t keep things disabled for long at a time since the flows control heating, doors, etc. in my house. I can’t keep restarting Node-RED all the time either.
So, do any of you gents have tips for digging into performance, i.e. what Node-RED is doing at the moment and what it’s spending the most time on? (That would be a nice sidepanel node!)
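The closest I’ve come on my own is a crude check like the one below in a Function node, fed by an Inject node set to repeat every 30 s and wired to a Debug node. It only shows whether the event loop is getting bogged down, not which flow is to blame, and the numbers are just ones I picked, but it’s a start:

// Crude event-loop lag check: Inject (repeat every 30s) -> this Function -> Debug.
// We ask setTimeout for a fixed delay and report how late the callback actually fires.
const expected = 500;                         // requested delay in ms (arbitrary)
const t0 = Date.now();
setTimeout(() => {
    const lagMs = Date.now() - t0 - expected; // extra delay = event-loop congestion
    node.send({ payload: { eventLoopLagMs: lagMs } });
}, expected);
return null;                                  // the message is sent from the callback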
Also check that it is actually Node-RED causing the issues. A common problem is letting a database grow too large, at which point the in-memory indexes chew up too much memory and you start to get swap issues.
I’ll look into whether I can pinpoint that. Thanks. There’s only slow (edit: as in little, merely trickling) traffic to InfluxDB, which has been fine for years. But I do quite a lot with the global context, come to think of it…
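If the global context does turn out to matter, I suppose a quick-and-dirty inventory like this in a Function node would show which keys have quietly grown huge (a sketch, assuming the default in-memory context store; stringifying is crude but gives a rough size):

// Rough inventory of global context: key name -> approximate serialized size in characters.
const sizes = {};
for (const key of global.keys()) {        // synchronous form works for the memory store
    try {
        sizes[key] = JSON.stringify(global.get(key)).length;
    } catch (e) {
        sizes[key] = "unserializable";    // e.g. circular references
    }
}
msg.payload = sizes;
return msg;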
That is exactly an issue I had a few years ago when I forgot to turn on the auto-deletion features of one of my dbs in InfluxDB. It suddenly started making the Pi 3 very slow.
I’m mostly just on my phone these days at work, and I forgot to grab a screenshot of top when I seemed to solve the issue by incessantly reviewing old logic with some gin and tonic and desperation late last night. Recommended.
I admit InfluxDB has just been set-and-forget for me, but if some setting were making it progressively slower over time, I assume that would have shown up as the culprit(?).
Anyway, for completeness: it turns out I have an old flow that just happened to produce many times more messages than before, on account of seemingly unrelated changes to my Pi system, and I also caught a possible infinite loop under unforeseen circumstances. I’m amazed NR still diligently tried to hold up. It shows that code is never “done”.
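The band-aid I’m trying while I untangle the logic is a simple rate guard in front of the suspect part of the flow, roughly like the Function node below (the threshold and window are numbers I made up, and the core delay node in its rate-limit mode can do much the same without code):

// Crude loop/storm guard: drop messages and warn if more than LIMIT arrive within WINDOW ms.
const LIMIT = 100;                                  // max messages per window (made up)
const WINDOW = 10000;                               // window length in ms (made up)
const now = Date.now();
let stamps = (context.get("stamps") || []).filter(t => now - t < WINDOW);
stamps.push(now);
context.set("stamps", stamps);
if (stamps.length > LIMIT) {
    node.warn(`Rate guard tripped: ${stamps.length} msgs in ${WINDOW / 1000}s, dropping`);
    return null;                                    // swallow instead of feeding the loop
}
return msg;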
Any ideas for a notification system that would warn me about worsening machine performance in the future?
You may know and prefer general, mature, professional tools for (js) flow performance review, but I still feel there’s a possible hole to fill in the Node-RED ecosystem. It’s not entirely transferable, but I know Svelte, Vue, etc. have their own purpose-built browser extensions which include some performance stats. That, or just a little info in a sidepanel, would be golden.
———
(PS! Unfortunately I still have to Google things every time I so much as edit a Function node. I love it, but this thread shows that using core nodes as building blocks, instead of condensing it all into Function nodes, is safer and often more efficient for my class of users, though more cluttered and less elegant and fun. On the other hand, those core-node flows can end up with “too many” tempting connection points, compared to how naturally “isolated” a Function node is, at least for haphazard old geezers like me. But that’s another matter, and perhaps another thread.)
Personally, I use Telegraf along with InfluxDB and Grafana to produce system performance dashboards. I have them for the server and for the network as well, so I can spot when things are going wrong with my Internet and other connectivity. Grafana includes the ability to have alerts.
Telegraf is easily configured to report on lots of interesting information about systems and services.
But I also output the Telegraf system data to MQTT so I can use Node-RED to monitor for low memory, high CPU or whatever else is needed.
So I get ongoing trends from the InfluxDB data with Grafana and instant alerting from Node-RED via Telegram messages.
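As a rough illustration only: if Telegraf is set to publish its metrics to MQTT as JSON, the checking logic in Node-RED can be as small as the Function node below, sitting between an MQTT-in node and whatever notification node you use. The field names and thresholds here are assumptions; they depend entirely on how your Telegraf output and topics are configured.

// Sketch: MQTT-in (Telegraf metrics parsed to JSON) -> this Function -> notification node.
// Field names (mem.available_percent, cpu.usage_idle) and thresholds are assumptions.
const m = msg.payload || {};
const alerts = [];
if (m.mem && m.mem.available_percent < 10) {
    alerts.push(`Low memory: ${m.mem.available_percent.toFixed(1)}% available`);
}
if (m.cpu && m.cpu.usage_idle !== undefined && (100 - m.cpu.usage_idle) > 90) {
    alerts.push(`High CPU: ${(100 - m.cpu.usage_idle).toFixed(1)}% busy`);
}
if (alerts.length === 0) return null;   // nothing worth alerting on
msg.payload = alerts.join("\n");
return msg;                             // e.g. on to a Telegram sender node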
Heck, don't worry about that, I've been doing this for years now and I absolutely still have to look up the references for loads of common commands and functions.
A lot depends on your style of thinking. While I am somewhat of a visual thinker, my training is old-school so language based logic is rather ingrained. Being somewhat down the autistic spectrum, I also get easily distracted by cluttered visuals. So for me, the greatest strength of Node-RED is that I can use it to quickly and easily do some heavy lifting such as connecting to a data service like MQTT, reading files, triggering on a schedule and so on and then passing the data to a function node where I will do more complex but rather isolated logic processing. I regularly find myself experimenting with core nodes but then consolidating into a function node for clarity.
But the joy of Node-RED is that you can work in ways that are best for YOU.
I wholeheartedly agree, can relate, and really appreciate the reply. Are there any future NR conferences or meetups in, let’s say, Barcelona, or anywhere a good bit south of Norway, with access to good bars?
Telegraf sounds like an excellent idea! With the added bonus of making fuller use of the InfluxDB instance already running on the same machine. I’ll make a note to look into it.
Sorry, no, InfluxDB wasn’t involved in my case. I realize my earlier “only slow traffic to InfluxDB” could be misunderstood; I meant there is currently very little, merely trickling, traffic to InfluxDB. In my case it was several old flows going haywire because of some small changes in circumstances on the machine (and an unforeseen logical “edge case” resulting in endless loops). It takes some effort to pinpoint, though, especially in age-old flows.
It just needs retention policies set up to avoid this issue.
For example:
# Limit the domotica db to 24 hrs of data by default (0.8*60*60*24 = 69,120; 69,120*11*9 = 6,842,880)
CREATE RETENTION POLICY twenty_four_hours ON domotica DURATION 24h REPLICATION 1 DEFAULT
# Create a 2nd retention policy for the env_daily table, keeping 5 yrs (24*365*5 = 43,800 h; 5 y = 5*365 = 1,825 d)
CREATE RETENTION POLICY one_week ON domotica DURATION 1825d REPLICATION 1
Create a continuous query if you want to keep aggregated long-term data from short-term details. Such as:
CREATE CONTINUOUS QUERY cq_60min ON domotica BEGIN SELECT mean("value") as value,max("value"),min("value"),max("value")-min("value") as range INTO domotica.one_week.env_daily FROM environment GROUP BY location,type,time(60m) END
I would love such a tool as a newbie to Node-RED (and to Linux and most everything else).
I've been using Node-RED on my Victron Cerbo GX for about four days now, happily coding along in likely the most inefficient way possible and being pleased with my results. Until earlier today, that is, when I couldn't get Dashboard 2.0 or the Node-RED editor to load, and it got me wondering whether the flows were still executing as they should to start (and, more importantly, stop) my generator and such. I also wondered if my Cerbo GX was getting crushed to the point where my entire off-grid solar energy system might be affected (it seems not; I think the Cerbo GX must limit the resources Node-RED is allowed to consume).
I had a single chart tracking two devices, set to remember 7 hours of data. Maybe that's asking too much, I don't know, but I changed it to 7 minutes. It didn't seem to help much.
I then realized I had many nodes that were sending data every 5 seconds instead of on change and I fixed that. After that, performance seems pretty good.
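For anyone with the same habit: the core filter node (called rbe in older versions) passes a message on only when the value changes, so no code is needed. If you'd rather see it spelled out, the equivalent Function node is tiny (a sketch; it only compares simple payloads):

// Only pass the message on when msg.payload differs from the last value seen.
const last = context.get("last");
if (msg.payload === last) {
    return null;                  // unchanged -> drop it
}
context.set("last", msg.payload);
return msg;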
I'd love the tools you suggest for monitoring the resources Node-RED is using, and I hope I can learn enough to figure out the many things I'm doing wrong. I really can't have my Cerbo GX crashing when it's below 0 °F out here off grid, stumbling in the dark. I need my power and my lights!
If anybody is interested, here are my flows after four days of Node Red experience. flows.json (127.7 KB)