I noticed that repeated deployments increase the CPU load. I first saw this on 0.19.5, but it still happens on 0.20.2. Stopping and starting Node-RED brings the load back to a normal level.
I did several (like 20 to 30) deployments between 20:00 and 23:00.
I'm running NR on a Raspberry Pi 3 running Raspbian, Node v10.15.0.
You also need to check some other metrics on your Pi, such as swap use. Because Pis typically use SD cards for storage, paging things between memory and swap has a significant overhead.
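For a quick check from the command line, the standard Raspbian tools are enough (nothing extra to install):

```bash
# show memory and swap usage
free -h

# list active swap devices/files and how much is in use
swapon --show

# watch swap-in/swap-out activity (si/so columns) every 5 seconds
vmstat 5
```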
Without understanding a lot more about what is running on the Pi, how active things are and so on, it is hard to make a judgement.
I quite extensively use the alexa-local and openhab2 functionality.
Are there any good hints on how to analyze the Node.js activity in terms of event loop and synchronous activity, i.e. what it is actually doing? Something like sampling or a stack trace of the running process?
Oh, sorry. The graph is misleading: the CPU load on the machine is comfortably below 20%; the rest of the chart is just filled up with the idle percentage (the space between the orange and the green line).
The effect (user load increases with each deployment - the orange line) is 100% reproducible and is not caused by other workloads on the machine.
As knolleary suggested, it is most likely that one node or other isn't freeing up its resources correctly when we re-deploy (i.e. not shutting down old ports, links, listeners, etc. correctly). However, given the extensive list you have provided, finding the one (or several) is going to be hard. The brute-force approach is to remove them one at a time and re-measure - but that will no doubt break your flow as well.
Would I be right in thinking that if @guardiande did partial deploys rather than full deploys and still saw the issue, that would indicate it is likely to be something to do with the nodes being re-deployed?
I think that could be a clever way: just move a node a bit in the editor, make a partial deploy, and check. Repeat for all nodes one by one; hopefully the offending one will be found.
I generally recommend glances, which is a Python-based tool. Although it takes a fair bit of processor itself, it is great for a quick peek into what's happening.
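If you want to try it, glances is in the Raspbian repositories (or installable via pip); exact package availability may vary by release, so treat this as a sketch:

```bash
# install from the distro repository (alternatively: sudo pip3 install glances)
sudo apt install glances

# run it interactively; press q to quit
glances
```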
If you need longer-term monitoring, you could do worse than use a combination of InfluxDB, Telegraf (from the same people; it can monitor lots of different things and log the output to all sorts of places, including InfluxDB) and Grafana to give you the charts and dashboard - that's what produced the charts I showed.
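As a rough illustration only (the plugin names are real Telegraf plugins, but the URL, database name and process pattern below are assumptions for this sketch), a minimal telegraf.conf to capture system and per-process stats into a local InfluxDB might look like:

```toml
# /etc/telegraf/telegraf.conf (minimal sketch)
[[outputs.influxdb]]
  urls = ["http://127.0.0.1:8086"]
  database = "telegraf"

# basic system metrics
[[inputs.cpu]]
[[inputs.mem]]
[[inputs.swap]]

# per-process CPU/memory for anything matching "node-red"
[[inputs.procstat]]
  pattern = "node-red"
```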
You should check that it is the node-red process rather than something associated with Node-RED. It is not inconceivable that it is InfluxDB, for example, as it could be affected by the re-deploys. Or perhaps one of the added nodes uses a server of some sort that is being re-invoked, so you end up with multiple copies of it.
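A quick way to see which process is actually accumulating the CPU time, using standard tools (adjust the process name to match your install):

```bash
# top CPU consumers, highest first
ps -eo pid,pcpu,pmem,etime,comm --sort=-pcpu | head -n 15

# confirm which PID belongs to Node-RED (the process name may differ on your setup)
pgrep -af node-red
```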
This is a combination of collectd on the Pi for collecting the system statistics, InfluxDB for storing the time-series data, and Grafana for visualizing it.
Meanwhile I did some tests: repeatedly restarting the flows (as is possible with NR 0.20) causes the load to increase, as expected. I then deployed only changed flows, and did that several times for every single flow that I have. This caused no load increase.
I suspect the configuration nodes are causing the increase.
To go further, you can use the --inspect flag and a Chromium-based browser (I'm using Vivaldi), or use the debug features of VS Code - this is documented on my blog and elsewhere.
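For reference, one way to do this is to start Node-RED under the V8 inspector and then attach from chrome://inspect; the node-red path below is an assumption, so adjust it (or your systemd service) to suit your install. An already running Node.js process can also be told to open the inspector by sending it SIGUSR1:

```bash
# start Node-RED with the V8 inspector enabled (listens on port 9229 by default)
node --inspect "$(which node-red)"

# or ask an already-running Node.js process to open the inspector
kill -USR1 "$(pgrep -f node-red | head -n 1)"
```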