General capacity of an RPi3

What is people’s general experience with running lots of background services on a RPi3? And when and why did you decide to distribute them across several Pis?

I am currently running Node-RED with 6 flows, about 10 third-party node types and a Java command-line app under a daemon node, plus Homebridge with 2-3 plugins, Mosquitto, deCONZ (Zigbee lights), and Home Assistant (via the homeassistant-websocket node, for just one particular HA plugin which reads an API every 5 seconds).

I'm wondering whether this is starting to approach a general limit. It's running at 60°C, and resource usage jumps up and down. Would adding Grafana and InfluxDB on top of this be pushing it?

EDIT: The Node-RED flows contain many node instances (multitudes more than the 10 types of third-party nodes), both native and third-party.

There was a recent discussion that included talk about temperatures. I think that they are more to do with your enclosure and environment.

On my two Pis (I'm slowly migrating from a Pi2 to a Pi3), I run:

  • Node-RED (the Pi2 has hundreds of nodes, not counted recently).
  • Mosquitto
  • InfluxDB (~400MB of databases)
  • Telegraf - captures Pi runtime stats to InfluxDB
  • Grafana - A number of complex dashboards
  • Doubtless other stuff I've forgotten about

Even the Pi2 has no problem at all dealing with all of this. The only issue is the occasional CPU spike, mainly from InfluxDB, which can cause some minor process delays. We are talking fractions of a second though, so nothing drastic. The Pi2 also has an RFXtrx433E and a wireless (433MHz) serial port board attached.

The Pi3 runs a (currently) smaller set of flows but also runs Webmin with auto-package updates.

I find it amazing that such a small, cheap computer can do so much.


Rather than upgrading at this stage, ask first: is your system running optimally?

For example see this thread about throttled CPU due to low supply voltage.

Also look at the demands made on the Pi: can they be optimised?
I also run numerous IoT interactions plus InfluxDB, Grafana, Mosquitto, etc., and one example of such optimisation is this:

I monitor power used, solar power generated and power diverted from a 5-second feed, and was then running a query to crunch that raw data into energy per day (kWh/d). This was done every 5 seconds. That's almost 52,000 individual power/timeframe calculations (3 feeds × 17,280 five-second samples per day), which were then added together, every 5 seconds.

Instead, I moved the power/timeframe calculation into Node-RED and converted each feed in real time to energy per sample (watt-seconds): just one calculation per feed every 5 seconds. That value was then saved to InfluxDB.
To then get kWh/d, all Grafana needs to do is grab the energy datapoints for the previous 24 hours and add them together.

This dropped system demands dramatically, and made a significant difference to the speed at which Grafana presented the dashboards.
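
For illustration only, here is a sketch of that per-sample conversion as a Node-RED function node. It assumes msg.payload carries the instantaneous power in watts every 5 seconds, and the measurement name is made up:

```javascript
// Node-RED function node (sketch): turn a 5-second power sample (watts)
// into energy used during that interval (watt-seconds), ready to store.
// Assumption: msg.payload is the instantaneous power reading in watts.
const SAMPLE_SECONDS = 5;

const powerWatts = Number(msg.payload);
if (Number.isNaN(powerWatts)) {
    return null; // drop malformed readings rather than storing junk
}

// Energy for this interval in watt-seconds (joules).
msg.payload = powerWatts * SAMPLE_SECONDS;
msg.measurement = "energy_ws"; // illustrative name for the InfluxDB measurement

return msg;
```

Summing a day's worth of those watt-second points and dividing by 3,600,000 then gives kWh/d, so the dashboard query becomes a single aggregation rather than tens of thousands of calculations on every refresh.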


All good points, Paul.

One mistake I see people making a lot is overcooking the reporting rate for IoT. People read environment sensors every second when it would make no practical difference to read them every 5-10 minutes: there is nothing you can do about it that quickly, and the rate of change will be slow anyway.
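
As a rough, hypothetical example of cutting the rate down in Node-RED itself, a function node can forward a reading only every 10 minutes, or sooner if it changes meaningfully (the interval and threshold here are invented for illustration):

```javascript
// Node-RED function node (sketch): report-by-exception style throttling.
// Forward a sensor reading only if 10 minutes have passed OR the value
// moved by more than 0.5 units since the last forwarded reading.
const MIN_INTERVAL_MS = 10 * 60 * 1000; // 10 minutes
const CHANGE_THRESHOLD = 0.5;           // illustrative threshold

const now = Date.now();
const value = Number(msg.payload);
const last = context.get("lastSent") || { time: 0, value: null };

const intervalElapsed = (now - last.time) >= MIN_INTERVAL_MS;
const changedEnough = last.value === null ||
    Math.abs(value - last.value) >= CHANGE_THRESHOLD;

if (intervalElapsed || changedEnough) {
    context.set("lastSent", { time: now, value: value });
    return msg; // worth reporting
}
return null;    // otherwise drop it quietly
```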


Thank you both for thoughts and shared experience.

I doubtless have potential for optimisation, especially in Node-RED across all my custom function nodes that collect and calculate different things, and probably some duplicated overhead. That's why I emphasised "in general" :blush: So I shouldn't be too worried about adding Grafana, provided I work on optimisation.


Just a note that it is unlikely that Node-RED is the critical factor in CPU throughput. So before you start optimising, at least study top for a while to determine what is actually using the CPU, and indeed whether you need to do any optimisation at all.


It doesn't sound to me as though it is reaching a limit. It is very common on Linux for CPU to jump around since there are potentially hundreds, probably thousands of services that might kick in at some point in time.

In top or similar, check the "load average", which has three values (the averages over the last 1, 5 and 15 minutes). If they are all consistently below 2-3, or at least rarely go over that, then you certainly don't have an overloaded device.
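
If you want to log those numbers rather than watch top, Node.js (which Node-RED itself runs on) exposes the same load averages. A minimal standalone sketch:

```javascript
// load-check.js (sketch): print the 1, 5 and 15 minute load averages
// and warn when the 1-minute figure exceeds the number of CPU cores.
const os = require("os");

const [one, five, fifteen] = os.loadavg();
const cores = os.cpus().length;

console.log(`load: ${one.toFixed(2)} ${five.toFixed(2)} ${fifteen.toFixed(2)} (cores: ${cores})`);

if (one > cores) {
    console.log("1-minute load exceeds core count - work is queueing up");
}
```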

This is from my Pi2:

[screenshot of load/CPU figures from the Pi2]

glances is a Python-based monitor that gives a much clearer display. It actually uses quite a lot of resources itself, so it is a good test in its own right. With it, I can see my CPU % jumping about from 2% to just over 20%.

The other thing to check is the utilisation of swap. Glances shows that my Pi2 has allocated 2.7% of my 1000MB swap space. But on the Pi3 it is around 73% of 100MB, getting on for three times the Pi2 in absolute terms.

Swap utilisation is particularly critical on the Pis because you are using a REALLY slow file system on the SD card. Swap space is memory that has been temporarily written to disk so that something else can use the RAM; it then has to be read back in when needed. If the service being swapped out is important - Node-RED, for example - then you will get periodic CPU spikes and slow performance.
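
If you want to spot-check swap without installing glances, the figures come straight from /proc/meminfo on Linux. A rough sketch:

```javascript
// swap-check.js (sketch): read swap usage from /proc/meminfo (Linux only).
const fs = require("fs");

const meminfo = fs.readFileSync("/proc/meminfo", "utf8");
const kbFor = (field) => {
    const m = meminfo.match(new RegExp(`^${field}:\\s+(\\d+) kB`, "m"));
    return m ? Number(m[1]) : 0;
};

const totalKb = kbFor("SwapTotal");
const usedKb = totalKb - kbFor("SwapFree");
const usedPct = totalKb ? (usedKb / totalKb) * 100 : 0;

console.log(`swap: ${usedKb} kB of ${totalKb} kB in use (${usedPct.toFixed(1)}%)`);
```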

You can use things like InfluxDB, Telegraf and Grafana to capture and report on utilisation over time.

Here are graphs of Load average and CPU % over time on my Pi3:

And here is memory and swap utilisation:

This gives a MUCH clearer picture of what is going on and it is easy to see that, while I have some swap utilisation for some reason, the overall load is very low. Load on my Pi2 actually isn't a lot higher on average but it tends to peak higher, even reaching 1 occasionally. CPU on the Pi2 regularly goes over 50%.
