Graph data from/to database: making it non-volatile


This tutorial shows how to use an SQLite DB with Node-RED.

This flow example shows how to stitch and sew nodes together in order to get non-volatile graphs:

By default, Node-RED keeps the data in the chart node until restart, or within a set time window (by points or by days). So the data is volatile and disappears on Node-RED restart.

Home Assistant, for instance, by default creates a DB table for every added “sensor” and keeps the data automatically. The data is also presented automatically as non-volatile, and is even scrollable. I’m not saying Home Assistant is better (I abandoned it at one point :slight_smile: ), but automated non-volatile graphing is a nice feature.

I wonder if there is an elegant way to keep the data from the chart node and load it back on Node-RED restart, rather than stitching, knitting and weaving nodes all around it?

Thank you.



You would have to write the data to a database or file and restore it when NR starts up…just like HomeAssistant does :stuck_out_tongue:



Try the node-red-contrib-persist nodes. They seem to work reliably and are a neat solution.




This is being worked on, check out the roadmap. At the moment, most of us either use MQTT with the retain flag (if there isn’t too much data) or a database.

If you check out the WIKI for node-red-contrib-uibuilder, you will see a function node that does simple caching of data, and that could be further adapted to include saving to a database. I have, on my backlog, an idea for another node to work in conjunction with uibuilder that will move that example into a proper node with full cache control, and probably the ability to save to disk or DB. But unfortunately, I have no time right now to write it.
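For illustration, that kind of cache could be sketched roughly like this (hypothetical names and topic, not the actual WIKI code; `MAX_CACHE` and the `cacheReplay` topic are assumptions for the sketch):

```javascript
// Minimal sketch of a message cache: store incoming payloads, replay them
// on a control message (e.g. when a client reconnects).
const MAX_CACHE = 50; // assumption: cap the cache to bound memory use

// Stand-in for Node-RED's flow/node context store.
const context = { cache: [] };

function handleMsg(msg) {
  // A control message asks for a replay; anything else is cached and
  // passed straight through.
  if (msg.topic === 'cacheReplay') {
    return context.cache.map(p => ({ payload: p }));
  }
  context.cache.push(msg.payload);
  if (context.cache.length > MAX_CACHE) context.cache.shift(); // drop oldest
  return [msg]; // forward the live message unchanged
}
```

In a real function node the cache would live in `context`/`flow` storage rather than a module-level variable, and the replayed messages would go out via `node.send()`.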



From reading the docs, the persist nodes save and restore a particular message. It’s still not automatic data management. Thank you for the idea.

Reading… Regarding the term “database”: I used this word just as the closest feasible implementation, not as a requirement. It could be just a file, or even marks on stone :slight_smile:

That’s definitely not expected. We are exchanging ideas here, not working for each other :slight_smile:
I should be able to code whatever is needed myself. It’s just a pity to reinvent the wheel. It’s better to reuse someone else’s wheel (and possibly improve it) => more progress for everyone.



Hmm, a robot arm with chisel? Could be onto something there!

Ah, sorry, that was just me expressing my own wishful thinking. I really wish I did have the time.

Anyway, my recommendation would be to use one of the existing file nodes for now with the expectation that Node-RED itself will offer persistence for global/flow/context variables in the - hopefully not too far distant - future on the roadmap to v1.

In my own HA flows, I’m using a combination of MQTT persistent messages and InfluxDB along with a few global variables that are either defined in settings.js or in a flow that is triggered on startup on the first tab.



You can use the output from the chart node to save its state into a file if you like, and then restore it whenever you like

see -



+1 for the option that dceejay suggested; for a simple graph it’s the easiest and most practical.




I’d already suggested the node-red-contrib-persist nodes at the beginning of this thread, which do exactly that (plus more), but the idea was dismissed by @igrowing for some reason…



Thank you for the example. It works “out of the box”. I’ll try to fit it to multiline charting.

Hi Paul, I apologize for being ambiguous. I missed the point that @dceejay demonstrated: the conversion from/to JSON. This is why pure persist didn’t work for me.
Persist has the advantage of automatically saving each message and restoring it.
However, the save is done into the same file on every message, which could wear out the SD card quickly. Therefore, I think a user-driven or time-interval-driven save will do both: save the data and save the SD card from wear. The cost, then, is a couple more nodes: an interval and an inject.

Thank you, guys!



As an alternative in the meantime, put in a USB memory stick if you have a free port, then configure the file node to use a file on the stick instead of the SD card (on an RPi running Stretch I find the USB stick mounted by default in /media/pi)



@dceejay 2 questions regarding your flow example.

  1. In your chart, “Use deprecated data format” is ticked. Is this crucial for the transform to JSON?
  2. Thinking about reducing SD card wear: I could aggregate messages. The “batch” node doesn’t seem the most appropriate (no concatenation is required). The chart itself is able to gather the data: this is defined in the chart node’s “X-axis” field. Therefore, all that’s required is a filter which passes only one message per time interval to the file-saving node (say, once an hour or once a day). The “schedule filter” node is not good either: it changes the topic of the messages. What is the appropriate node to pass one message per interval? (I could write a function for that; however, reusing written code is better than reinventing the wheel.)
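For illustration, the kind of filter I mean could be sketched in a function node like this (purely hypothetical code, not an existing node; the one-hour interval and the `now` parameter are assumptions for the sketch):

```javascript
// Sketch: pass at most one message per interval, silently discarding the
// rest, without touching msg.topic.
const INTERVAL_MS = 60 * 60 * 1000; // one hour

// Stand-in for the function node's context store.
const context = { lastSent: -Infinity };

function rateFilter(msg, now) {
  // 'now' is a parameter here for testability; a real function node
  // would call Date.now() instead.
  if (now - context.lastSent >= INTERVAL_MS) {
    context.lastSent = now;
    return msg;  // forward this one to the file-saving node
  }
  return null;   // discard everything in between
}
```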

Thank you.



Absolutely not. That was just an artefact of when I created the example. It should not need to be ticked



You could use the Join node (with some additional nodes). The Join node has the useful feature "create a merged Object" that can be used in this case.

The data from the Chart node is an Array, so first you need to create an Object. If you use a function node, this code will do:

msg.payload = {"mychart":msg.payload};
return msg;

Every time your live data in the graph gets updated, the new, complete Array is sent to your function node and then (as an Object) on to the Join node. A timer could be used to trigger a msg.complete sent to the Join node at regular intervals. Your chart data is then sent from the Join node to the File node and saved.

If you restart (or make a full deploy), the saved data is read from the file, converted back to an Array using a function node with this code:

msg.payload = msg.payload.mychart;
return msg;

and sent to the chart node and the chart is restored to its latest saved state
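The whole round trip can be sketched in plain JavaScript like this (the `mychart` key comes from the snippets above; the chart payload shape is an assumption based on the dashboard chart node's data format):

```javascript
// Round-trip sketch of the save/restore cycle: chart Array -> wrapped
// Object -> JSON string (what the File node stores) -> back to an Array.

// Example chart payload (shape assumed, for illustration only).
const chartData = [{ series: ['A'], data: [[{ x: 1, y: 2 }]], labels: [''] }];

// Save side: wrap the Array in an Object so the Join node can merge it by key.
const wrapped = { mychart: chartData };
const fileContents = JSON.stringify(wrapped);

// Restore side: parse the saved file and unwrap back to the Array
// that the chart node expects.
const restored = JSON.parse(fileContents).mychart;
```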



Good idea, thank you Walter.
Meanwhile I took the “delay” node, set it to limit the rate of messages to 1/hour, and discard all the messages in between.
I’m testing this setup now; I’ll check out your approach too after that.



Yes, a simple configuration indeed. The only disadvantage I can see in that solution, if I’m correct, is that when you finally save, the readings from the last hour are not included (they will be in the next save), so you will always lag one hour behind



I'm not using any kind of conversion from the graph to the file and it works anyway; no data is lost at all. As soon as you inject new data into the graph, it is directly recorded as a new object in the file.

[{"id":"4f72b3ec.99e90c","type":"inject","z":"38ad6844.92e088","name":"","topic":"","payload":"","payloadType":"date","repeat":"30","crontab":"","once":true,"x":170,"y":200,"wires":[["e163a8b1.3a85f8","8cb44d76.7e6dc","9df7fd8b.4bae3"]]},{"id":"e163a8b1.3a85f8","type":"exec","z":"38ad6844.92e088","command":"free | grep Mem | awk '{print 100*($4)/$2}'","addpay":false,"append":"","useSpawn":"","timer":"","oldrc":false,"name":"FREE MEMORY","x":390,"y":660,"wires":[["9b2c84c9.754928","72d6a339.bf38ac"],[],[]]},{"id":"72d6a339.bf38ac","type":"delay","z":"38ad6844.92e088","name":"","pauseType":"rate","timeout":"5","timeoutUnits":"seconds","rate":"1","nbRateUnits":"1","rateUnits":"minute","randomFirst":"1","randomLast":"5","randomUnits":"seconds","drop":true,"x":610,"y":720,"wires":[["d7c8843f.8671a8"]]},{"id":"d7c8843f.8671a8","type":"smooth","z":"38ad6844.92e088","name":"","property":"payload","action":"max","count":"60","round":"1","mult":"single","x":760,"y":720,"wires":[["25cbbbc.1178c44"]]},{"id":"25cbbbc.1178c44","type":"delay","z":"38ad6844.92e088","name":"","pauseType":"rate","timeout":"5","timeoutUnits":"seconds","rate":"1","nbRateUnits":"1","rateUnits":"hour","randomFirst":"1","randomLast":"5","randomUnits":"seconds","drop":true,"x":910,"y":720,"wires":[["dd4d5366.cc1ef"]]},{"id":"38ae260f.c2f4ba","type":"inject","z":"38ad6844.92e088","name":"","topic":"","payload":"","payloadType":"str","repeat":"","crontab":"","once":true,"x":530,"y":800,"wires":[["2b088023.1a67f"]]},{"id":"2b088023.1a67f","type":"file in","z":"38ad6844.92e088","name":"ram_free","filename":"/home/pi/.node-red/datalog/ram_free","format":"utf8","chunk":false,"sendError":false,"x":680,"y":800,"wires":[["c811a38.3e56f6"]]},{"id":"c811a38.3e56f6","type":"json","z":"38ad6844.92e088","name":"","pretty":false,"x":910,"y":800,"wires":[["dd4d5366.cc1ef"]]},{"id":"dd4d5366.cc1ef","type":"ui_chart","z":"38ad6844.92e088","name":"Ram free tracking 7 days","group":"6977e773.ba2438","order":14,"width":0,"height":0,"label":"Ram free tracking 7 days","chartType":"line","legend":"false","xformat":"D/M","interpolate":"linear","nodata":"","dot":false,"ymin":"","ymax":"","removeOlder":1,"removeOlderPoints":"","removeOlderUnit":"604800","cutout":0,"useOneColor":false,"colors":["#1f77b4","#aec7e8","#ff7f0e","#2ca02c","#98df8a","#d62728","#ff9896","#9467bd","#c5b0d5"],"useOldStyle":false,"x":1110,"y":720,"wires":[["8078de54.ee1c5"],[]]},{"id":"8078de54.ee1c5","type":"file","z":"38ad6844.92e088","name":"ram_free","filename":"/home/pi/.node-red/datalog/ram_free","appendNewline":true,"createDir":true,"overwriteFile":"true","x":1320,"y":720,"wires":[]},{"id":"6977e773.ba2438","type":"ui_group","z":"","name":"system","tab":"5f054619.865f78","order":1,"disp":true,"width":"7","collapse":false},{"id":"5f054619.865f78","type":"ui_tab","z":"","name":"SYSTEM","icon":"fa-cog","order":10}]



Old data will be dropped if it is outside the time window you have set (time or points). Otherwise it should just carry on



That’s a cool, unexpected insight! Thank you!
Is there a way to “scroll” the chart to see data older than the time window?



If you want it on demand, or to store a lot of data, consider using some kind of database; there are several well integrated into Node-RED.