This is not strictly Node-Red related but I know that several people here are using InfluxDB on a Raspberry Pi in conjunction with Grafana to store data created/extracted by Node-Red.
I discovered accidentally that InfluxDB was using a very high amount of CPU time, and after a bit of searching I found that this is due to InfluxDB's internal monitoring function. This is used for diagnosing internal errors and can safely be turned off by changing an entry in:
/etc/influxdb/influxdb.conf
Simply set
[monitor]
store-enabled = true
to
[monitor]
store-enabled = false
to disable it.
Restart InfluxDB.
Check the influxd CPU usage with top or htop beforehand to see whether you are affected by this.
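Putting it all together, on a standard Raspberry Pi OS install the sequence is roughly this (the service name may differ depending on how you installed InfluxDB):
htop                                    # note the CPU use of the influxd process
sudo nano /etc/influxdb/influxdb.conf   # set store-enabled = false in the [monitor] section
sudo systemctl restart influxdb
htop                                    # influxd should now drop back to a low CPU percentage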
Thanks. I'm using InfluxDB v1.8.9 to store data which goes to Grafana for display.
About two months ago the CPU load on my Raspberry Pi 4 went up from about 15% to 150%, and the influxd process is responsible for this.
I found the hint above about two weeks ago, and even dropped the _internal database just to be sure. It did not help a lot, if at all. So I'm still stuck with the somewhat heavy load compared to before.
In the meantime I reconfigured many of my sensors to send temperature and humidity values only once per minute instead of once every 10 seconds, with the idea of reducing the number of written data points. But this had no effect on the influxd process load.
So I'm still stuck with a Raspberry Pi 4 which runs much hotter than it needs to, and therefore uses more energy than it should.
A retention policy would make a lot of sense in my case.
Some of the data could be averaged over one hour (temp, hum, solar power, power consumption, ...) without losing the basic information.
But since my informatics know-how is just barely enough to fill the ESP8266 sensors with googled code and wire up Node-RED nodes, configuring InfluxDB retention policies is way over my head. I tried to google it, but there is a whole new terminology to learn just to understand what the DB experts are talking about.
I tried to figure out whether an update to 2.x would solve the problem, but since the problem is not known or documented on the net (apart from the hint above), and since 2.x needs authenticated connections, which use even more CPU resources (I tried that, then disabled it again), I'm leaving it as it is for now. But I'm open to testing other ideas.
You are always using a retention policy; if you haven't configured one then it will be the default, which I think keeps data forever.
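You can check what is in place from the influx command-line shell, and the hourly averaging you mention is normally done in 1.x with a shorter retention policy for the raw data plus a continuous query that writes hourly means into a policy that keeps them longer. A rough, untested sketch, where "yourdb", "temperature", the field "value" and the policy names are placeholders for your own (each statement goes on a single line in the influx shell; the continuous query is wrapped here only for readability):
SHOW RETENTION POLICIES ON "yourdb"
CREATE RETENTION POLICY "raw_30d" ON "yourdb" DURATION 30d REPLICATION 1 DEFAULT
CREATE CONTINUOUS QUERY "cq_temp_1h" ON "yourdb" BEGIN
  SELECT mean("value") AS "value"
  INTO "yourdb"."autogen"."temperature_1h"
  FROM "yourdb"."raw_30d"."temperature"
  GROUP BY time(1h), *
END
Bear in mind that changing the default policy affects where new writes land and which data your Grafana queries see, so read up on it before trying it on live data.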
Have a look at the influx log. Depending on how you have configured it, it might be in /var/log/influxdb or it might be going to syslog. I think syslog may be the default. Run tail -f /var/log/syslog | grep -i influx
to see if there are lots of messages. Again, though, how much detail you get depends on the settings in the influx config file.
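For reference, the settings that control how much InfluxDB 1.x logs live in the same /etc/influxdb/influxdb.conf; the defaults are roughly as below (check your own file, the exact entries can vary between versions), and for diagnosing a problem you want them left enabled:
[logging]
level = "info"
[http]
log-enabled = true
[data]
query-log-enabled = true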
Thanks everybody for caring. This is why I like this place.
I posted a question in the linked forum post about how to downgrade.
I have already implemented all the other hints.
What I need is a step-by-step recipe for dummies on how to downgrade from 1.8.9 to 1.8.6, ideally without losing any data points (my rough guess at the steps follows below):
download what to where
install it how
sudo systemctl restart influxd.service ? (or something like that)
sudo apt-mark something (since I'm updating my Pi regularly).
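My best guess so far, pieced together from searching, is the following. It is completely untested and the armhf file name is an assumption (check the InfluxData downloads page), so please correct me:
wget https://dl.influxdata.com/influxdb/releases/influxdb_1.8.6_armhf.deb
sudo systemctl stop influxdb
sudo dpkg -i influxdb_1.8.6_armhf.deb
sudo systemctl start influxdb
sudo apt-mark hold influxdb   # stop apt upgrading it again
I assume the data under /var/lib/influxdb is left in place by the package, but I would take a backup first anyway.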
Seeing from the other thread that going back to the previous version has not fixed it, you need to find the real cause of the problem. If you are only writing to influx every 10 seconds then it should be using virtually no CPU at all. Since it is still running at over 100%, that means it is using a complete core. The CPU load you had previously (15%) sounds much more like it should be. Please look in the logs.
Those will reverse the changes that I make to my config to reduce the logging to only important stuff. If you have made other changes to suppress error logging and so on, and you didn't note down what you changed, then you will just have to look at every line and make sure anything that mentions logging is enabled.
Alternatively, install InfluxDB on another system to get the default settings and copy the file from there.
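Then a simple diff shows every line that differs from the defaults, e.g.
diff influxdb.conf.default /etc/influxdb/influxdb.conf
(assuming you copied the untouched file into the current directory as influxdb.conf.default).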
Colin is right. Here is a screenshot of our wireless gateway. It currently has 3 wireless temperature sensors, 2 real-time wireless vibration sensors and 5 high-speed wireless vibration sensors connected. The temperature sensors and the real-time vibration sensors are sending data continuously: the 2 real-time vibration sensors at an interval of 1s to 10s (depending on the vibration level) and the 3 temperature sensors at an interval of 10s. You can see that the CPU load is very low.