Memory leak?... Performance trouble now

@TotallyInformation, thanks for your input. I try to make good use of databases. I have a "real-time" database with a standard retention of 8 hours, and at midnight I store my daily values in a long-term database.
Then there are some other values I want to keep a bit longer than 7 days; I have 2 databases for that purpose.
About the continuous query I don't know; I still have to figure out how that works, among other things.
The memory problems in Node-RED I already had before I started using InfluxDB. I started using InfluxDB because I had several trends running directly that I backed up in files. I thought the trending was taking up a lot of memory, so I started thinking about a database to store the data.

[image]

There is this 'Environment="NODE_OPTIONS=--max_old_space_size=512"' that can be set when using systemd: https://raw.githubusercontent.com/node-red/linux-installers/master/resources/nodered.service
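For anyone wondering how to change that value on an installed system, a sketch of the usual systemd override route (the service name and the 512 figure come from the linked service file; adjust the size to taste):

```shell
# Create a drop-in override rather than editing the packaged unit file directly
sudo systemctl edit nodered.service
# ...then add in the editor that opens:
#   [Service]
#   Environment="NODE_OPTIONS=--max_old_space_size=512"

# Apply the change and restart Node-RED
sudo systemctl daemon-reload
sudo systemctl restart nodered.service
```

The override survives package upgrades, which a direct edit of the unit file would not.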

I would also recommend trying quite an aggressive retention policy as @TotallyInformation described. Doing so allowed me to run InfluxDB for a couple of years on a Pi4B with a 64GB EVO Plus SD card without any problems at all.

Look at top again: whatever node you are using is showing the free mem and total figures, instead of the mem available figure.

Why? It is not short of memory.

@Colin I get this information from nodes provided by

> node-red-contrib-cpu and node-red-contrib-os

I also read about garbage collection and how to monitor it with node-red-contrib-gc, which gave me graphical info about this, see pic. The time frame is 15 min.

[image]

@ozpos, I make use of the same SD card you mention. As I said, I try to keep the load on InfluxDB low.
But am I to understand that all of you see the same kind of behaviour on memory, that it mounts up until almost max and then resets again (for lack of a better word)?

Is it possible that the level of monitoring is contributing to the problem? How many points are you showing?

I have not experienced this.

Showing where?

Dashboard graphs or tables.

It isn't; all you are seeing is Linux making good use of as much memory as it can. It tries to keep the free memory as low as possible. If you were really running out of memory then you would see swap being used.
This may help to explain it: Why does Red Hat Linux report less free memory on the system than is actually available? - Server Fault
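The distinction shows up directly in /proc/meminfo. A minimal sketch (the kB figures below are made-up illustrative values): MemFree counts pages doing nothing at all, while MemAvailable estimates how much the kernel could reclaim (page cache, buffers) for new work without touching swap.

```shell
# Illustrative /proc/meminfo excerpt; on a real box use: grep Mem /proc/meminfo
meminfo='MemTotal:        3884100 kB
MemFree:          112340 kB
MemAvailable:    2791552 kB'

# MemFree is often tiny on a busy system; MemAvailable is the figure that matters.
free_kb=$(printf '%s\n' "$meminfo" | awk '/^MemFree:/ {print $2}')
avail_kb=$(printf '%s\n' "$meminfo" | awk '/^MemAvailable:/ {print $2}')
echo "free=${free_kb} kB, available=${avail_kb} kB"
```

A monitoring node that plots MemFree will always produce an alarming-looking graph on a healthy machine.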

These in particular can be a real drag on performance.

Much more efficient to let InfluxDB do that for you.

Here are some examples:

		# Limit the domotica db to 24 hrs of data by default (0.8*60*60*24 = 69,120; 69,120*11*9 = 6,842,880)
		CREATE RETENTION POLICY twenty_four_hours ON domotica DURATION 24h REPLICATION 1 DEFAULT
		# Create a 2nd retention policy for the env_daily table for 5 yrs (24*365*5 = 43,800 h; 5y = 5*365 = 1,825 d)
		CREATE RETENTION POLICY one_week ON domotica DURATION 1825d REPLICATION 1
		
		# Aggregate detail data to 60-minute data
		CREATE CONTINUOUS QUERY cq_60min_environment ON test BEGIN
			SELECT mean(value) AS value, max(value), min(value), max(value) - min(value) AS range
			INTO test.five_years.environment_hourly
			FROM test.one_week.environment
			GROUP BY location, type, time(1h)
		END
		
		CREATE CONTINUOUS QUERY cq_60min ON domotica BEGIN
			SELECT mean("value") AS value, max("value"), min("value"), max("value") - min("value") AS range
			INTO domotica.one_week.env_daily
			FROM environment
			GROUP BY location, type, time(60m)
		END
		
		SELECT mean("value") AS value, max("value"), min("value"), max("value") - min("value") AS range FROM environment WHERE time >= 0 GROUP BY location, type, time(60m)
		# SELECT mean("value") AS value, max("value"), min("value"), max("value") - min("value") AS range FROM environment WHERE time >= now() - 1d GROUP BY location, type, time(60m)
		# SELECT * FROM environment WHERE location = 'HOME/IN/00/HAL' AND type = 'DewPoint' AND time >= '2016-04-28 15:00:00' AND time <= '2016-04-28 16:00:00'
		DELETE FROM environment WHERE time < '2016-04-26'
		SELECT count("value") FROM domotica."default".environment
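If you try statements like these, it is worth checking what actually got created afterwards. Assuming the InfluxQL shell of InfluxDB 1.x, something like:

```
SHOW RETENTION POLICIES ON domotica
SHOW CONTINUOUS QUERIES
SHOW MEASUREMENTS ON domotica
```

The retention policy listing also confirms which policy is the DEFAULT, which determines where writes land if no policy is named.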

I think there is a post in the forum somewhere from me where I give a much better and more up-to-date example.


Here we go, this is better:

Need more detailed information on influxdb - General - Node-RED Forum (nodered.org)


To significantly reduce the CPU (spike) load caused by InfluxDB, you can disable the internal statistics database that InfluxDB uses.

Edit /etc/influxdb/influxdb.conf and find the section [monitor].
Uncomment the line store-enabled and give it the value false.

[monitor]
  # Whether to record statistics internally.
    store-enabled = false

  # The destination database for recorded statistics
  # store-database = "_internal"

  # The interval at which to record statistics
  # store-interval = "10s"

Restart InfluxDB with sudo service influxdb restart

InfluxDB will now stop recording statistics in its internal tables; use at your own risk.
Note from the InfluxDB website:

Set to false to disable recording statistics internally. If set to false it will make it substantially more difficult to diagnose issues with your installation.


@Colin, yes, indeed, I can see it's more complex than I thought, although I don't understand all of it.
I think I'll leave this subject to rest, because maybe I shouldn't worry about it...
The panic struck when at the same time the performance went down. :grinning: But that was explainable by a mistake I made.

Anyway, thank you all for your input, @ozpos, @TotallyInformation, @jbudd, etc.
I think you guys are doing very well with your support to us!!! Keep it up!!!

@TotallyInformation, thanks for the examples. I will study this at a later time; at this moment it looks quite complex to me, as I am only a novice at databases and Node-RED.
Like I said at the beginning of this post, I don't have issues with performance (yet) or InfluxDB. It works just fine for me right now, and I am reaching the end of the functionality of my application. The only worry I have at this moment is the use of memory and the strange behaviour I notice: the sawtooth graph and build-up of memory use, which I do not really understand. I just wonder if there is any way to avoid this or make it more stable. Am I doing something wrong to get this behaviour? I get this info from the nodes I use, like I mentioned before. But I found out this info is derived from the command

> free -m
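That would explain the graph. A sketch of what those nodes are likely parsing (the numbers below are made up for illustration): free -m reports both a "free" and an "available" column, and plotting the former produces exactly the sawtooth described, because Linux deliberately keeps "free" low by using spare RAM for cache.

```shell
# Made-up `free -m` output for illustration; columns match the real command's layout
sample='              total        used        free      shared  buff/cache   available
Mem:           3792        1013         109          31        2669        2521
Swap:            99           0          99'

# Column 4 of the Mem: line is "free", column 7 is "available"
free_mb=$(printf '%s\n' "$sample" | awk '/^Mem:/ {print $4}')
avail_mb=$(printf '%s\n' "$sample" | awk '/^Mem:/ {print $7}')
echo "free=${free_mb} MB, available=${avail_mb} MB"
```

A node graphing the 109 MB figure looks alarming; the 2521 MB figure shows the system is nowhere near short of memory.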

How many times do I have to say that there is nothing unusual about this and it has nothing to do with Node-RED? Look at the amount of memory Node-RED is using and you will see that.
Any well-used Linux system will show that behaviour.

@Colin. Well, that was the assurance I was looking for!

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.