NR running, but all flows disabled, should the CPU load for NR process(es) not be very low?

top - 23:16:15 up 45 min,  3 users,  load average: 6.96, 5.02, 7.27
Tasks:  89 total,   5 running,  84 sleeping,   0 stopped,   0 zombie
%Cpu(s): 75.5 us, 22.3 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  2.2 si,  0.0 st
MiB Mem :    224.0 total,     31.5 free,    112.6 used,     80.0 buff/cache
MiB Swap:    100.0 total,     71.8 free,     28.2 used.     67.8 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
 3056 pi        20   0  153820  40916  25784 R  22.2  17.8   0:24.97 npm
 3084 pi        20   0  153424  41264  26856 R  19.4  18.0   0:23.05 npm
 3079 pi        20   0  153552  41932  27708 R  18.5  18.3   0:22.90 npm
  250 root      20   0   12872    944    748 S  14.5   0.4   5:12.35 pigpiod
 2958 pi        20   0  184904  66400  28128 R  12.7  28.9   1:30.99 node-red
 2970 root      20   0   11208   2932   2532 S   2.8   1.3   0:07.90 top
 3498 root      20   0   11236   3128   2552 R   2.8   1.4   0:00.35 top
    9 root      20   0       0      0      0 S   1.5   0.0   0:40.60 ksoftirqd/0

Can someone provide some insight into why the npm/node-red processes are so busy? When I exported the flows and then deleted everything, leaving just a new empty flow, the CPU load dropped. When I imported the flows file again, with all flows still disabled, the CPU load remained low. So just disabling the flows did not really stop the processes? And yes, I did deploy after disabling the flows, because the saved flow file was exported after that deployment.
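As a sanity check, the disabled state is stored in the flow file itself, so it can be verified on disk. A rough sketch, assuming the default ~/.node-red user directory and the standard flows*.json file name:

# Count how many flow tabs are marked disabled in the deployed flow file
grep -o '"disabled": *true' ~/.node-red/flows*.json | wc -l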

How are you running Node-RED? I can't think why an 'idle' NR instance would have any npm processes running.

Just the typical systemctl stop, restart, etc.
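In that case it may help to confirm what the service actually launches and whether the npm processes are children of Node-RED. A minimal sketch, assuming the default nodered.service from the Pi install script and that pstree (psmisc) is available:

# Show how the service is defined and what it is currently running
systemctl status nodered.service
systemctl cat nodered.service

# Show the process tree under node-red, so any npm children are visible
pstree -ap $(pgrep -o -x node-red)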

At first I thought I had somehow not disabled the flows correctly, but when the flows file loaded again they were all already disabled. Hmm, odd. I will see if I can recreate it. Not a big deal, it just seemed odd.

PiGPIOd is using a lot of CPU time as well, when it really has no work to do, so I have to track that one down too.

pigpio daemon is a known resource hog - always has been
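Most of that load comes from its GPIO sampling loop, and it can usually be reduced by lowering the sample rate with the -s option (at the cost of timing resolution). A sketch, assuming the daemon lives at /usr/bin/pigpiod and is started by its packaged systemd unit:

# Restart the daemon with a 10 µs sample rate instead of the 5 µs default
sudo killall pigpiod
sudo pigpiod -s 10

# Or make it permanent via a systemd override:
sudo systemctl edit pigpiod
#   [Service]
#   ExecStart=
#   ExecStart=/usr/bin/pigpiod -s 10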

Hi @Nodi.Rubrum,
Perhaps you could give my node-red-contrib-v8-cpu-profiler a try. Via the example flow you can start it and, after e.g. 30 seconds, stop it. Then share the .profile file here (you can find it in the path specified in the FileOut node).
I don't think I have tried it yet with sub-processes, so I am not sure whether we can find the root cause with it.
Bart

Can he run it if his flows are disabled?

I would start by working out why npm is running at all. Do you have anything in the system (not necessarily node red) designed to automatically check node versions or something similar?
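The quickest way to answer that is probably to look at the full command line and parent of the busy npm processes before killing anything. A sketch, using the PIDs from the top output above:

# Full command line, parent PID and elapsed time for each busy npm process
ps -o pid,ppid,etime,args -p 3056,3079,3084

# Or list every npm process with its parent in one go
ps -eo pid,ppid,etime,args | grep '[n]pm'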

Morning Simon,
Well, of course he will need to have one flow enabled, containing my example flow. Or he can start the profiler by adapting his Node.js startup command and then connect to his Node.js instance, e.g. with the Chrome developer tools (see here).
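For the second option, something along these lines should work, assuming Node-RED was installed globally with the standard Pi script (only expose the inspector on a trusted network):

# Stop the service and run Node-RED by hand with the V8 inspector enabled
sudo systemctl stop nodered
node --inspect $(which node-red)

# If connecting from another machine, bind the inspector to all interfaces
node --inspect=0.0.0.0:9229 $(which node-red)

# Then open chrome://inspect in Chrome, add <pi-address>:9229 under
# "Discover network targets", and record a CPU profile from the Profiler tab.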

But I think injecting a simple start and stop message via my node isn't going to kill his performance, to be honest...

Anyway it is up to Nodi how he wants to analyze his problem further, since there are many ways to Rome ...
Bart

Why have you only got 200MB RAM total? Even a Zero should have twice that.
Part of the symptom you are seeing is that there is not enough RAM, so it is swapping, which will clog up the whole system. That does not explain why npm is running though.
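An easy way to confirm the swapping is to watch swap activity while the load is high, e.g.:

# Memory and swap snapshot
free -m

# Refresh every 5 seconds; sustained non-zero si/so (swap-in/swap-out) values
# mean the box is thrashing rather than doing useful work
vmstat 5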

Yeah, the npm tasking is the real question here, IMHO. Likely I have a flow triggering it; as I write this, I think it is the version checking, which is supposed to run at NR startup and then stop, so it should spike the load and then back off. But what I saw today ran far longer than the flow design would suggest. Moreover, the odd thing is that as I disabled flows I did not see the processes stop; they continued until I restarted NR, after which the flows came up as disabled. My expectation was that if I disable a flow and deploy, the associated processes should no longer exist or continue. It could be that the deploy was somehow incomplete, and the flows I disabled did not really get disabled.

I just happened to discover this on a Pi Model 1B... which I use for testing flows under limited resources. The actual flows are pretty light: driving a few diodes via PiGPIOd and displaying a few system resource items, like current CPU, memory, and IO stats. I use a Pi 2B for testing as well, but serious flows run only on 3B and, now in a few cases, 4B. I just re-imaged a new SD card and will work through it step by step to see what might be triggering the scenario. I already noticed PiGPIOd was doing something interesting while the Pi should be pretty much idle, and fired off a question to the PiGPIOd author about it as well.

I suspect what is happening is that the version checking is starting, but because you do not have enough memory it takes a very long time (maybe tens of minutes). If you then disable the flow and redeploy, that may not kill the npm processes which are already running. I suggest disabling that feature, rebooting, and making sure npm is not running.
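After the reboot, a quick check along these lines would confirm nothing is quietly respawning npm:

# List any npm processes (prints the fallback message if none are running)
pgrep -a -x npm || echo "no npm running"

# Keep an eye on it for a while
watch -n 30 'pgrep -a -x npm'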

I did not think even a 1B had only 256MB RAM; I am pretty sure it has 512MB, so something odd is going on there.

https://forums.raspberrypi.com/viewtopic.php?p=281257

So 256MB was an option.
I think I would advise against trying to run anything other than an absolute minimum node-red with that amount of memory. Certainly don't try to run npm at the same time as node-red.
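In practice, on a box that small, that just means stopping Node-RED whenever something has to be installed. A sketch, assuming the default service name and ~/.node-red user directory (the package name is only a placeholder):

# Never let npm and Node-RED fight over 256MB: stop, install, restart
sudo systemctl stop nodered
cd ~/.node-red
npm install <some-node>     # placeholder package name
sudo systemctl start nodered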

I run my speak to Alexa flow on mine

I think you are getting a bit near the limit there; you have used up most of your swap!

The very first Model B units had only 256MB of memory, and I just happen to have one of these rarer units; they were produced in February 2012. There were even some Model A units with only 256MB produced in 2013, but those did not have the same connectivity options as the early Model B, so I only purchased the B model at the time.
