InfluxDB on a Raspberry Pi

This is not strictly Node-RED related, but I know that several people here are using InfluxDB on a Raspberry Pi in conjunction with Grafana to store data created/extracted by Node-RED.

I discovered accidentally that InfluxDB was using a very high amount of CPU time, and after a bit of searching I found that this is due to InfluxDB's internal monitoring function. This is used for diagnosing internal errors and can safely be turned off by changing an entry in:

/etc/influxdb/influxdb.conf

Simply set
[monitor]
store-enabled = true
to
[monitor]
store-enabled = false
to disable it.

Restart InfluxDB.
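On a standard Raspberry Pi OS package install that is usually (the service name may differ on your system):

sudo systemctl restart influxdb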

Simply check the InfluxDB usage with top or htop beforehand to see if you are affected by this.
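For example (assuming the process shows up as influxd):

top -p "$(pgrep -d, influxd)"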


Thanks. I'm using InfluxDB (v1.8.9) to store data which goes to Grafana for display.
About two months ago the CPU load on my Raspberry Pi 4 went up from about 15% to 150%, and the influxd process is responsible for this.

I found the hint above about two weeks ago, and even dropped the _internal database just to be sure. It did not help a lot, if at all. So I'm still stuck with the somewhat heavy load compared to before.
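In case it helps anyone else, the 1.x command line client can do that with something like:

influx -execute 'DROP DATABASE "_internal"'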

In the meantime I reconfigured many of my sensors to send temperature and humidity values only once per minute instead of once every 10 seconds, the idea being to reduce the number of imported data points. But this had no effect on the influxd process load.

So I'm still stuck with a Raspberry Pi 4 which runs much hotter than it needs to, and therefore uses more energy than it should.

What retention policy are you using for the data?

Have you configured any continuous queries?
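Both can be checked from the influx CLI, for example (replace mydb with your database name):

influx -execute 'SHOW RETENTION POLICIES ON "mydb"'
influx -execute 'SHOW CONTINUOUS QUERIES'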

Thanks for asking.

A retention policy would make a lot of sense in my case.
Some of the data could be averaged over one hour (temp, hum, solar power, power consumption, ...) without losing the basic information.

But since my informatics know-how is just barely enough to fill the ESP8266 sensors with googled code and to wire up Node-RED nodes, configuring InfluxDB retention policies is way over my head. I tried to google it, but there is a whole new terminology to learn just to understand what the DB experts are talking about.

I tried to figure out if an update to 2.x would solve the problem, but since the problem is not known or documented on the net (apart from the hint above), and since 2.x needs authenticated connections, which use even more CPU resources (tried that, then disabled it again), I currently leave it as it is. But I'm open to testing other ideas.

You are always using a retention policy; if you haven't configured one then it will be using the default, which I think keeps data forever.
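For what it's worth, the sort of downsampling you mention looks roughly like this in the influx CLI; the database and measurement names here are made up, so adjust them to your own:

CREATE RETENTION POLICY "one_year" ON "sensors" DURATION 52w REPLICATION 1
CREATE CONTINUOUS QUERY "cq_temp_1h" ON "sensors" BEGIN SELECT mean("value") AS "value" INTO "sensors"."one_year"."temperature_1h" FROM "temperature" GROUP BY time(1h), * END

That writes hourly averages of the temperature measurement into a new measurement kept for a year, while the raw data stays under whatever policy it was originally written to.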

Have a look at the influx log. Depending on how you have configured it, it might be in /var/log/influxdb or it might be going to syslog. I think syslog may be the default. Run
tail -f /var/log/syslog | grep -i influx
to see if there are lots of messages. Again, though, how much detail you get depends on the settings in the influx config file.
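If nothing shows up there and it is a systemd install, the journal is worth a look too (assuming the unit is called influxdb):

journalctl -u influxdb -f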

Did you look at this post?

InfluxDB has introduced a few problems since version 1.8.7. It is recommended to downgrade to v1.8.6 until a stable fix comes out.

We have thousands of times more data than yours coming in every day, on an RPi 3B, and not a single problem so far.

See if v1.8.6 solves your problem.


I think it is unlikely to be that problem, but once you get that version running I am sure it should be fine. It is for me anyway.

Yes, forever - I only know because I was doing some maintenance last week!

OK, that is good to know. We will use the stable release version for the moment.

Thanks everybody for caring. This is why I like this place.
I posted a question in the linked forum post about how to downgrade.
All the other hints I implemented already.
What I need is a step-by-step recipe for dummies on how to downgrade from 1.8.9 to 1.8.6 (ideally without losing any data points).

  • download what to where
  • install it how
  • sudo systemctl restart influxd.service ? (or something like that)
  • sudo apt-mark something (since I'm updating my Pi regularly).

I suggest at least looking at the log first, there may be something obvious going on.

If you do decide to downgrade make an image copy of the SD card first, just in case.
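In case it helps, on a Pi the downgrade itself usually comes down to something like this (untested here, so check the exact package URL and architecture for your system):

# fetch the 1.8.6 package (armhf shown, use arm64 if that is your OS)
wget https://dl.influxdata.com/influxdb/releases/influxdb_1.8.6_armhf.deb
sudo systemctl stop influxdb
# dpkg will warn that it is downgrading, which is what we want here
sudo dpkg -i influxdb_1.8.6_armhf.deb
sudo systemctl start influxdb
# stop your regular apt upgrades from pulling 1.8.9 back in
sudo apt-mark hold influxdb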

Is your database on the SD card or have you got it on a USB hard disc?


Technically the Pi 4 runs from a USB drive.
It's an Argon Case with a Kingston M.2 SSD built into the bottom of the case.

I exported the InfluxDB data and am currently making a copy onto another drive.
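In case anyone wants to do the same, the 1.x portable backup is one way to export everything, roughly:

influxd backup -portable /path/to/backup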
Once this is finished, I will head over to the thread here:

where 'mud walker' gave me step-by-step instructions.
Thanks for the help and for caring.

Seeing from the other thread that going back to the previous version has not fixed it, you need to find the real cause of the problem. If you are only writing to influx every 10 seconds then it should be using virtually no CPU at all. Since it is still running at over 100%, that means it is using a complete core. The CPU load you had previously (15%) sounds much more like it should be. Please look in the logs.

Suggestions to restore logging if it has been suppressed, in /etc/influxdb/influxdb.conf:

In the [http] section:
suppress-write-log = false
log-enabled = true

In the [data] section:
query-log-enabled = true

Those will reverse the changes that I make to the config to reduce the logging to only the important stuff. If you have made other changes to suppress error logging and so on, and didn't note down what you changed, then you will just have to look at every line and make sure anything that mentions logging is enabled.

Alternatively install influx on another system to get the default settings and copy the file from there.
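If I remember right, influxd can also print the configuration it would start with (defaults merged with your config file), which gives you something to compare against:

influxd config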

Colin is right. Here is a screenshot of our wireless gateway. It currently has 3 wireless temperature sensors, 2 real-time wireless vibration sensors and 5 high-speed wireless vibration sensors connected. The 3 temperature sensors send data continuously at an interval of 10s, and the 2 real-time vibration sensors at an interval of 1s to 10s (depending on the vibration level). You can see that the CPU load is very low.

CPU load does indeed go very high when the high-speed wireless vibration sensors start running, but they only run 15-30 minutes a day.

A simple way out is to do a fresh installation of your system. Still, you should look into what caused the high CPU usage.

The OP doesn't want to lose his data.

OP mentioned that he exported the InfluxDB data and cleaned the InfluxDB directory already.

OP can stop InfluxDB, and see if CPU usage drops down quickly.

If the data is to be re-imported then what is the benefit of re-installing? All that will do is to restore the config file to the default.

The OP has used top to see that it is influx that is using the CPU. If he stops influx then that must drop to zero immediately.