Everything looks fine at first sight. I noticed that memory use has dropped by more than 10%!
CPU usage is still about the same.
I also noticed a new result with my command:
npm outdated -g --depth=0
it gives me:
I now get an extra line of information about "corepack". After googling this feature, I still cannot really understand what this package does. Can anyone tell me? See also: https://nodejs.org/api/corepack.html
If I run this command with an Exec node then I get my info, but also the error message "Command failed: npm outdated -g --depth=0". Running it from the command line gives no errors.
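That "Command failed" message is most likely just npm's exit status: npm outdated exits with code 1 when it finds outdated packages, and the Exec node reports any non-zero exit code as a failure, even though the output itself is fine. A minimal workaround sketch, if you only care about the text output:

```shell
# The Exec node flags any non-zero exit status as "Command failed".
# npm outdated exits with status 1 whenever it lists outdated
# packages, so you can get the listing and the error together.
# Appending "|| true" masks the status if only the text matters:
npm outdated -g --depth=0 || true
```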
I also see that NPM 3.0.2 is available; why is this version not installed automatically? Maybe it is not stable enough? The same goes for other packages.
I have always found the installation script very safe and reliable, but clearly with an important project you will have lots of backups just in case...
PS: I'm not sure whether the installation script will also update any contrib nodes that you installed. You can check with Manage Palette in the NR editor.
@jbudd Indeed you are right about this script; I ran it and now have Node-RED version 3.0.2.
But now I met another issue.
When I run command
npm outdated -g --depth=0
I get this result:
As you can see, the info about node-red is now gone. This is probably because node-red isn't outdated anymore. How can I change the command so that I get all the info like before, regardless of whether a package is outdated or not?
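That is expected: npm outdated by design only prints packages for which a newer version exists, so an up-to-date node-red drops out of the list. To see every globally installed package with its current version regardless of update status, npm list does that:

```shell
# Lists all top-level global packages and their installed versions,
# whether or not an update is available.
npm list -g --depth=0
```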
The update of NPM and Node-RED was very successful in regard to memory usage. Previously, memory usage would increase over a period of about 8 hours; after reaching a certain point, about 95 to 98%, some sort of garbage collection was performed and it dropped back to about 65%. That phenomenon is now gone: usage stays stable between 48 and 52%, as you can see in this trend:
The CPU usage is a bit higher than it was before during the summer, though; then it was between 5 and 10%.
Now it is between 8% and 20% and doesn't seem to be increasing anymore. Although this is just bearable for performance, I am still wondering whether it is possible to decrease the CPU usage a bit.
You originally said the issue was CPU usage, now it's memory?
Are you viewing that dashboard from a browser on the Pi desktop, or from another computer on your network?
How many data points are you sending to the chart node?
Depending on how you are obtaining the data, I think that a CPU usage of 100% means 100% of just one core.
However, on my Pis, a 4B and a Zero 2, with Node-RED running I generally see <5% CPU on a single core.
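A side note on reading those figures: top reports per-process CPU as a percentage of a single core, so a process can legitimately show up to 100 times the core count. A quick check of how many cores you have (nproc is part of GNU coreutils, standard on Raspberry Pi OS):

```shell
# Prints the number of available cores. On a Pi 4B this is 4,
# so 100% for node-red in top means one core fully busy,
# i.e. a quarter of total capacity.
nproc
```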
@jbudd The issue was not really the memory usage build-up, because I had that almost from the beginning.
What I didn't have was the CPU usage increasing; that is new and affects the performance.
Notice that your swap space is all used up; that means that at some point you have run out of memory, though at the precise moment you ran top it had not run out. I suspect that the problem is at least partly caused by influxdb. How often are you writing to influx, and how many data points?
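You can check memory and swap pressure at any moment with free (standard on Raspberry Pi OS):

```shell
# -h prints human-readable units. A "Swap" used value close to the
# swap total means the system has run out of RAM at some point
# since boot, even if "Mem" currently shows free space.
free -h
```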
If the node-red CPU is swinging wildly then something you are doing is intermittently using a lot of processor. Is it possible for you to temporarily disable sections of the flows in order for you to work out what is going on?
Are you doing image processing or something similar?
What are the plugin-blur etc. modules that are referenced in the image you posted earlier?
Can you explain this question? I don't understand what you mean by that.
I do rather a lot of writes to influx for power calculations and trends: 4 floating-point values every 2 seconds, plus other specific data every 10 seconds, which is about 25 fields in 4 measurements.
Most other data goes every minute.
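For scale, those rates are modest by InfluxDB standards; a quick back-of-envelope calculation of field writes per day from the figures above:

```shell
# 4 float fields every 2 s and ~25 fields every 10 s,
# with 86400 seconds in a day:
echo "2s series, fields/day:  $(( 4 * 86400 / 2 ))"    # 172800
echo "10s series, fields/day: $(( 25 * 86400 / 10 ))"  # 216000
```

Under 400k field writes per day is well within what InfluxDB handles comfortably on a Pi, so the write rate alone is unlikely to explain heavy CPU load.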
But to say this yet again.
Nothing much has changed over the summer, and then CPU usage was very good.
But in the last week or so the CPU load has been increasing, and I am trying to find out why.
Look at the browser console screenshot you posted earlier, it says you are using modules such as @jimp/plugin-dither. Did you manually install those?
Go into your .node-red directory and run npm list @jimp/plugins
though this may be nothing to do with the problem you are seeing.
I am not sure whether you have said, but if you reboot does it make a difference for a while?
You said the CPU usage is varying dramatically in node-red, is the memory use also varying wildly? Either against node-red or in the MiB Mem line?
There is no way that what you have described should use that much processor, so something odd is going on.
What happens if you stop influx from running? This should do it:
sudo systemctl stop influxd
Use start to start it again, obviously.
OK, now I see it in the Chrome inspect window... hahaha.
I have no idea what that is or how it got there.
Indeed, when I restart Node-RED or reboot, it is OK for a while.
Memory usage is not swinging as far as I can see in top, in fact it is very constant.
I stopped influx for 2 minutes and restarted it, but there was no effect on the swinging CPU load of node-red.
Correction: regarding @jimp, this seems to be part of node-red-contrib-image-tools.
I need this to make a barcode image, but it is only used once at boot.
Do you have just one of the image tools nodes in your flows? Try disabling that node temporarily: double click it, and at the bottom there is a button to disable it.
You say you have 22GB in use; that is a lot of data. What have you got that takes that much space?
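A du survey can show where the space goes. A sketch run against the home directory (run it against / with sudo for the full picture; note that on a default install the influxdb data usually lives under /var/lib/influxdb, which is an assumption about your setup):

```shell
# Lists the first-level directories under $HOME by size, biggest
# first. -x stays on one filesystem; errors from unreadable
# directories are discarded.
du -xh --max-depth=1 "$HOME" 2>/dev/null | sort -rh | head -10
```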