Hello guys,
I am working on a large Node-RED project on a Raspberry Pi Zero W.
The project contains over 1900 nodes and uses MQTT to communicate with a C application.
But I see the connection lost almost all of the time.
I also have many 'time interval' nodes, but the actual intervals are irregular.
I tested it on an RPi 4 and it works well there: no connection losses, and I can see regular time intervals in the same Node-RED flow.
I need to run this project on the RPi Zero W.
Here is the CPU usage on the RPi Zero W:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
258 daytech 25 5 343200 241920 26368 R 77.8 54.6 66:08.55 node-red
993 root 20 0 19220 1928 1688 S 17.4 0.4 6:16.56 i2c_publisher
1024 daytech 20 0 10184 3044 2560 R 1.9 0.7 0:35.83 top
1059 root 20 0 0 0 0 I 1.3 0.0 0:01.32 kworker/0:2-events_power_efficient
896 daytech 20 0 12856 4540 3204 S 0.6 1.0 0:05.20 sshd
1010 daytech 20 0 12188 4212 3432 S 0.6 1.0 0:07.36 sshd
7 root 20 0 0 0 0 S 0.3 0.0 0:36.22 ksoftirqd/0
128 root 20 0 0 0 0 S 0.3 0.0 1:35.25 spi0
277 avahi 20 0 5888 2888 2572 S 0.3 0.7 0:07.47 avahi-daemon
300 root 20 0 27640 1344 1216 S 0.3 0.3 0:05.08 rngd
1025 root 20 0 0 0 0 I 0.3 0.0 0:20.38 kworker/0:1-events
1 root 20 0 33564 7600 6228 S 0.0 1.7 0:09.07 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.01 kthreadd
If that is typical output from top then it is unlikely to work, as you are at 78% CPU and 55% memory just for Node-RED. Usually you should be aiming for less than 10%. As we have no idea what your flow is trying to do, all we can suggest is that you do less of it... either less scale (fewer nodes), less speed (fewer messages), or less complexity (again, fewer nodes).
Hello dceejay
Thank you for your reply.
I have noticed that when the number of interval nodes is reduced, the connection is lost a bit less often.
But the time intervals are still irregular.
Is there no solution for this issue?
So it runs fine on a Pi 4 - an ARMv8, 1.5 GHz, 1GB/2GB/4GB machine (you didn't specify) - but not so well on the Pi Zero W - an ARM1176, 1 GHz, 512MB machine. To put it another way, it works on the fast machine, but not on the one that has half (or 1/4 or 1/8) the memory and runs 1/3 slower.
I would suggest that you try optimizing your flow as @dceejay suggested.
Hello Zenofmud
Thank you for your reply.
I have just found something interesting.
Once I removed the "Aedes MQTT broker" node from the Node-RED flow, there are no more connection losses.
But the CPU usage is still 60%.
Is the MQTT communication between Node-RED and the C application the issue?
Hello Zenofmud and dceejay
Thank you for your replies.
I need your help with data exchange between the C application and Node-RED.
In the current solution, I read the data from an I2C slave in the C application and then publish it to Node-RED using MQTT.
But this has some connection issues, as you know.
Before I used MQTT, I used file operations for this, but some data was missed because of read/write synchronization.
The data received from I2C needs to be processed within 30ms.
Besides MQTT or file operations, what method can I use for my application?
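For reference, here is roughly what the C side does today: read from the I2C slave, publish the bytes to Node-RED over MQTT. This is only a simplified sketch assuming libmosquitto; the real i2c_publisher may use a different client library, and the bus, slave address, topic, and poll rate below are placeholders, not values from my actual code.

```c
/* Simplified sketch of the current data path: I2C read -> MQTT publish.
 * Build with: gcc i2c_pub.c -lmosquitto
 * The bus, slave address, topic, and poll rate are placeholders. */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>
#include <mosquitto.h>

int main(void) {
    mosquitto_lib_init();
    struct mosquitto *mq = mosquitto_new("i2c_publisher", true, NULL);
    if (!mq || mosquitto_connect(mq, "localhost", 1883, 60) != MOSQ_ERR_SUCCESS) {
        fprintf(stderr, "MQTT connect failed\n");
        return 1;
    }
    mosquitto_loop_start(mq);                 /* network I/O on a background thread */

    int bus = open("/dev/i2c-1", O_RDWR);     /* placeholder bus */
    ioctl(bus, I2C_SLAVE, 0x48);              /* placeholder slave address */

    uint8_t buf[32];
    for (;;) {
        ssize_t n = read(bus, buf, sizeof buf);   /* fetch latest bytes from the slave */
        if (n > 0)
            mosquitto_publish(mq, NULL, "i2c/data", (int)n, buf, 0, false);
        usleep(10000);                            /* ~10 ms poll, placeholder rate */
    }
}
```

The point is only to show the data path (I2C read in C, MQTT publish into Node-RED), not the exact code.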
What makes you think that the problem is with MQTT?
There is no way on a pi zero running Raspbian that you will be able to guarantee 30ms action. It is not a real time operating system and sometimes it will be busy doing other things for longer than that.
There is no answer to that question because it depends on what else is going on, and what you mean by 'possible'. As I said, Raspbian (based on Debian) is not a Real Time OS, so nothing is guaranteed. If the Pi is not heavily loaded (I suspect yours is heavily loaded) then usually the response would be better than 10ms, I think, but occasionally it may go away for large fractions of a second, or even more. There is no way of knowing whether it is good enough for your requirement other than trying it.
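If you do want numbers rather than guesswork, something along these lines would show the worst-case overrun of a 30ms sleep loop while the Pi is under your normal load. This is just a sketch of the measurement idea, not code from anyone's project here:

```c
/* Sleep 30 ms in a loop and record how far past the deadline the
 * wake-up drifts. Run it while Node-RED and the publisher are busy. */
#include <stdio.h>
#include <time.h>

static long diff_us(struct timespec a, struct timespec b) {
    return (b.tv_sec - a.tv_sec) * 1000000L + (b.tv_nsec - a.tv_nsec) / 1000L;
}

int main(void) {
    const struct timespec interval = { 0, 30 * 1000000L };  /* 30 ms target */
    struct timespec before, after;
    long worst = 0;

    for (int i = 0; i < 1000; i++) {
        clock_gettime(CLOCK_MONOTONIC, &before);
        nanosleep(&interval, NULL);
        clock_gettime(CLOCK_MONOTONIC, &after);

        long overrun = diff_us(before, after) - 30000;  /* us past the deadline */
        if (overrun > worst) worst = overrun;
    }
    printf("worst overrun: %ld us\n", worst);
    return 0;
}
```

If the worst overrun regularly blows past your budget on the loaded Zero, no amount of flow tuning will rescue a 30ms deadline on that hardware.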
Can you explain some more about your use case? For example you mention it runs fine on a pi4, but it has to run on a pi0. Why does it need to run on that pi0 specifically? Since it has far fewer capabilities than the other pi models, what you’re doing here (you mentioned a huge number of nodes in your opening post), might simply be too heavy to run on a pi0. What part of your use case makes it so important it runs on that, and would it be possible to delegate tasks instead?
I use a pi0w myself to occasionally test parts of flows for performance. If they still run moderately smoothly on it, they're good to go. But once you can feel the delays in seconds, not just milliseconds, things have to be changed.