I don't think it's useful. I have to record more logs (200 per hour) with 10-second polling, then compute the average every 15 min; later that data will be sent to a SQL DB.
You asked how to use Node-RED's memory and I answered that. It is more than capable of handling large amounts of rapidly changing data; the main limits are your device's memory and available compute power.
Had you asked for ideas on using something other than Postgres, nearly everyone on the forum would immediately have said: InfluxDB. It is made for exactly that kind of processing of time-based data and can easily aggregate over time.
With InfluxDB, you simply send all the data into a table and let the engine do the aggregation.
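For example, a function node feeding node-red-contrib-influxdb's "influxdb out" node could be as small as this (a minimal sketch; the field and tag names are placeholders, and it assumes that node's convention that an array payload holds fields first, then tags):

// Shape the reading for the influxdb out node: first object = fields,
// second object = tags (both names here are placeholders)
msg.payload = [
    { value: msg.payload },
    { sensor: msg.topic || 'abc' }
]
return msg

The 15-minute averaging then happens inside InfluxDB rather than in your flow, e.g. with a Flux aggregateWindow(every: 15m, fn: mean).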
Sorry for the lack of clarity. I will still use Postgres because I need its table_15min. Do you think the InfluxDB+Postgres solution is more performant?
I'm thinking of installing Postgres on another machine, creating the two tables in InfluxDB, and then copying table_15min_influxdb to table_15min_postgres. Do you think that's a winning choice? I'm still considering other solutions.
So if you have to use Postgres, do the aggregation in Node-RED using context vars and then use a prepared statement to write the 15-minute data; that will be the most efficient.
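A minimal sketch of the write side, assuming node-red-contrib-postgresql with a parameterised query configured in the node, e.g. INSERT INTO table_15min (ts, avg_value) VALUES ($1, $2) (the column and context-variable names here are placeholders):

// Compute the 15-minute average from the values collected in context
const values = global.get('abc') || []
const avg = values.reduce((sum, v) => sum + v, 0) / (values.length || 1)

// The node substitutes these into $1 and $2 in the prepared statement
msg.params = [new Date().toISOString(), avg]

// Reset the buffer ready for the next 15-minute window
global.set('abc', [])
return msg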
TotallyInformation, meanwhile thank you for your support. Maybe I don't understand the utility of context variables. As far as I know, flow or global variables can be retrieved from other, unconnected nodes, but I don't understand how to dynamically load N variables and then compute an average. Suppose I need to load 50 values into a global.abc variable; I don't want to create global.abc1, global.abc2, ... global.abc50.
You can save them in an array: all 50 in one array variable. Each time a new one comes in, get the current array, add the new value to it, and save it again.
Yes, that is correct. The confusingly named "context" variables (node-scoped context) are only accessible to the node that created them. I've made several attempts in the past to get node context vars added to the list of options in the same way that global and flow are, because I think it would be sensible to offer them as an option in many places. Unfortunately, I've not managed to convince the core devs of that utility, so where I need the option I've had to craft my own interface.
As Colin points out, you don't need to do that. That would create multiple variables. You only need 1.
In a function node, it would look like this:
// Get the global var or create an empty array if it doesn't yet exist
const abc = global.get('abc') || []

// Push the new value onto the end of the array
abc.push(msg.payload)

// Restrict the max entries - removes the oldest entry
if (abc.length > 50) abc.shift()

// You can recalculate your average or some other aggregation here,
// e.g. const average = abc.reduce((sum, v) => sum + v, 0) / abc.length
// Try doing an internet search for "mdn reduce".

// Save the updated array
global.set('abc', abc)

return msg
You could go much further by making your variable an object where abc.data becomes the array and abc.average contains the average of all the entries. Or whatever you need to do.
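A minimal sketch of that shape (the data and average property names are just this thread's examples, nothing Node-RED requires):

// Keep the raw readings and the derived aggregate together in one object
const abc = global.get('abc') || { data: [], average: 0 }

abc.data.push(msg.payload)
if (abc.data.length > 50) abc.data.shift()

// Recalculate the derived value whenever the data changes
abc.average = abc.data.reduce((sum, v) => sum + v, 0) / abc.data.length

global.set('abc', abc)
return msg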
In what way does it not work? If you want to see abc you can look in the context tab in the right-hand pane, or you could add the line msg.payload = abc before the line return msg.
Yes, certainly. If you have more than one storage type, you adjust the get and set calls, adding the store name ("memoryOnly" or "file" in your case) as an extra argument. That argument is a string, so you can replace the fixed string with a variable.
const store = 'file'
const abc = global.get('abc', store)
// ...and when saving, use the same store
global.set('abc', abc, store)
Also make sure that all uses of json_spooler refer to the correct store. If you use different stores, you will end up with different flow variables that share the same name but live in different stores.
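To illustrate the trap (a sketch reusing the store names from above):

// Same name, different stores: these are two independent variables
const fromFile = flow.get('json_spooler', 'file')
const fromMemory = flow.get('json_spooler', 'memoryOnly')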