Local context storage not intended for large data arrays?

My server is a new fanless build. It has frozen repeatedly over the past two months, and the case was extremely hot every time it happened. It seems this very issue must have been hogging my CPU and overheating the system beyond what the heatpipes could handle.

Node-RED will only hog one core, so that on its own should not overheat it. Also, a modern processor will cut its frequency down if it gets too hot, so again it should not overheat. If it is stopping, then something else is going on.

In order to find a better solution to your overall problem, the first question is whether you actually need to store all that data in the first place. What is in the data and why do you need to keep it?

It's historic market data. I run simulations through the array, which gives me roughly a month of data to play with.
I once had it set up with a SQL database, but that was such an incredible pain in Node-RED. Everything was so much easier once I used global context storage instead.

Have you been watching the issue I linked to? Context is not going to work with such a huge amount of data. Not at the moment at least.

Just read through it now. Thanks for raising the issue by the way.
Didn't understand quite all of it :smiley:, but I am glad to at least know that, the way I have things set up, it will never survive a reboot.
I have learned a lot while using the disk stores, so I feel optimistic I can amend my flows with "flushing" to SQL for backup, while retaining memory context store workflows.
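Roughly what I have in mind for the flush (just a sketch, not tested; the store name "memory", the table name market_backup and the msg.query/msg.params convention of my Postgres node are assumptions on my part):

// Function node: pull the whole array out of the in-memory context store and
// hand it to the SQL node as a single JSON document. Names here are placeholders.
const data = global.get("marketData", "memory") || [];   // "memory" = context store name from settings.js

msg.query  = "INSERT INTO market_backup (saved_at, payload) VALUES (NOW(), $1);";
msg.params = [JSON.stringify(data)];                      // serialise the array in one go
return msg;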

It is possible that such a large in-memory data set is causing paging - that would happen on different threads/CPUs. It is also possible that the large-scale changes are triggering sudden, large garbage-collection activity.

You would need to run some system monitoring to see what is happening.
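For a quick look at the paging question, a small standalone Node.js script watching the Node-RED process would do (Linux only, and just a sketch - you have to pass the node-red PID yourself):

// mem-watch.js - poll /proc/<pid>/status for resident and swapped memory.
// Usage: node mem-watch.js <node-red-pid>
const fs = require("fs");
const pid = process.argv[2];

setInterval(() => {
    const status = fs.readFileSync(`/proc/${pid}/status`, "utf8");
    const rss  = /^VmRSS:\s+(\d+) kB/m.exec(status);
    const swap = /^VmSwap:\s+(\d+) kB/m.exec(status);
    console.log(new Date().toISOString(),
        "RSS", rss ? rss[1] + " kB" : "?",
        "Swap", swap ? swap[1] + " kB" : "?");
}, 5000);

If VmSwap climbs while the array is being rewritten, paging is the problem; running vmstat 5 in another terminal will show the same picture system-wide.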

Unfortunately, it has to be said that Python is much better at handling this than Node.js is. In Python with suitable libraries, this would be trivial.

Now, 1/2 GB is a heavy chunk to shuffle. I haven't tried it, but what if you save it via MQTT with the retain flag set? Would that kill the broker?

That would not be a good idea - it really is not what a broker is optimised for.

I have a similar use case. Every 5 minutes I get 4 different status messages from each remote device that I am keeping track of. Through an update query I save all 4 to the same row using the "ON DUPLICATE KEY UPDATE" SQL syntax. Because I have a date/time field, it is also very easy to delete the oldest rows when I add new ones (see the sketch below). The SQL ends up being fairly easy. I can share the details through DM if you wish. I think this would eliminate any issues you may be having from trying to keep such large arrays in memory.
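The pruning side is just a dated DELETE in another function node feeding the same database node; the table name and retention period below are only illustrative:

// Drop anything older than roughly a month so the table never grows unbounded
// (MySQL/MariaDB syntax; the table name is a placeholder).
msg.topic = "DELETE FROM tblDeviceStatus WHERE stickDateTime < NOW() - INTERVAL 31 DAY;";
return msg;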

That would be very useful, thank you. Yes please. I already have Postgres running for something entirely different. I played around yesterday with storing JSONs and had some initial success. Your example will be super helpful.

The code below is a little cryptic because I do some JSON extracts into variables before the SQL. We have 4 different update statuses coming from our remote devices every 5 minutes. The first 3 characters of the JSON string are the update type, so I extract that into the variable strJsonTopicType, and it matches the field names in the table. The table key is the device serial number "SN" plus the device datetime, so when an update arrives and no matching key is found, a new row is created; if the key is found, it does an update instead. Eventually I get all 4 updates into a single row this way. I hope this all makes sense.

msg.topic = "INSERT INTO tblRawJson_v2 (" + strJsonTopicType + ",stickSN,stickDateTime) VALUES ('";
msg.topic = msg.topic + (msg.payload) + "','" + jsonTmp["DeviceSN"] + "','" + jsonTmp["DeviceDT"];
msg.topic = msg.topic + "') ON DUPLICATE KEY UPDATE " + strJsonTopicType + " = '" + (msg.payload) + "';";
