Context transfer

Hello,
so I have this task - after about 8 years of running Node-RED for aggregating data, we finally decided to migrate from v1.0.6 to the latest version. Of course, the system has to run 24/7 without any longer shutdowns, and over the years a huge collection of different services, protocols, etc. has accumulated. So I have installed a new server to which I am slowly transferring every functionality. But now I have found that some of the function nodes use their context storage as a cumulative history (for example, there are functions that calculate total produced energy from power).
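To illustrate, those functions follow roughly this pattern (a simplified, made-up example - the real ones are more involved, and the key names are just for illustration):

```javascript
// Simplified version of the cumulative functions: integrate power (W)
// into total energy (Wh) using the node's own context storage.
const totalWh = context.get('totalWh') || 0;
const lastTs  = context.get('lastTs')  || Date.now();
const now     = Date.now();

// elapsed time in hours since the previous reading
const hours = (now - lastTs) / 3600000;

const newTotal = totalWh + msg.payload * hours;  // msg.payload = power in W

context.set('totalWh', newTotal);
context.set('lastTs', now);

msg.payload = newTotal;
return msg;
```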

So my question: is there any easy and straightforward way to transfer this context data between servers while it is constantly being updated?

I found the context files in the .node-red folder, so I can copy them. But there will be a certain time window when incoming data will be lost. I was thinking about some context transfer via TCP comms right in Node-RED, but that would only work for flow and global context, not node context, unless I rewrite all the functions to keep their data in flow context instead.
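What I had in mind for the TCP idea is something like this, temporarily pasted into each function node on the old server to dump its own node context (just an untested sketch - triggered once by an inject node and wired to a tcp out node):

```javascript
// One-shot export of this node's own context (default store).
// context.keys() lists the keys this function node has stored.
const dump = {};
for (const key of context.keys()) {
    dump[key] = context.get(key);
}
msg.payload = dump;  // wire to a tcp out node, towards the new server
return msg;

// ...and the mirror image on the new server, fed from a tcp in node:
// for (const [key, value] of Object.entries(msg.payload)) {
//     context.set(key, value);
// }
```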

Probably it will be okay if a few minutes of data are lost, but it would be better to minimise this window as much as possible.

Looking forward to any suggestions

Hmm, tricky.

If you can stand a short downtime, one possibility would be to set up the new server and have a ready-to-go script to copy the context files, then switch a proxy server over - again, scripting would be the quickest approach (there's a sketch of the copy script after the steps below):

  1. Close the old Node-RED
  2. Run the copy script
  3. Start the new Node-RED
  4. Switch the proxy from the old to the new Node-RED (you might need to reload the proxy config or restart it).
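The copy script (step 2) could be as simple as this - a sketch in Node.js since you'll have it around anyway. The paths are assumptions, and it presumes the old server's userDir is reachable from the new host (mounted, or rsync'd across beforehand):

```javascript
// copy-context.js - copy the persisted context store from old to new.
// Requires Node.js 16.7+ for fs.cpSync with { recursive: true }.
const fs = require('fs');

const src = '/mnt/old-server/.node-red/context';  // assumption: old userDir mounted here
const dst = '/home/nodered/.node-red/context';    // assumption: new userDir

fs.cpSync(src, dst, { recursive: true, force: true });
console.log('context files copied');
```

Note the ordering matters: the default filesystem context store is cached in memory at startup, which is why the new instance should only be started after the copy.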

That should generally only take a few seconds to fully complete. I doubt, though, that even that would absolutely guarantee zero data loss.

There might be other things that could be done depending on how your incoming data feeds are connected. An obvious improvement would be to ensure everything is coming in via MQTT - with a persistent session and QoS 1 or 2, the broker queues messages while the subscriber is offline, so you could potentially cut out the proxy switch and would really only need steps 1 to 3. The sketch below shows the idea.
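For reference, this is how that buffering works at the client level - a hedged sketch using the mqtt.js client (which, as far as I know, is what Node-RED's MQTT nodes use under the hood); in Node-RED itself you would just set the equivalent options on the broker config node and the mqtt-in node. The broker URL, client id and topic are made up:

```javascript
// Sketch: a persistent MQTT session so the broker queues messages
// published while this client is offline (e.g. during the switchover).
const mqtt = require('mqtt');

const client = mqtt.connect('mqtt://broker.example.local', {
    clientId: 'nodered-main',  // must be identical on the old and new instance
    clean: false               // persistent session: broker keeps QoS>0 messages
});

client.on('connect', () => {
    // QoS 1 so queued messages are delivered at least once after reconnect
    client.subscribe('site/+/power', { qos: 1 });
});

client.on('message', (topic, payload) => {
    console.log(topic, payload.toString());  // buffered and live readings alike
});
```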

One other alternative might be to switch to a timeseries DB such as InfluxDB. That way, while you might have to live with some missing stats for a short while, once you have finalised your re-engineered flows so that new data goes straight into the DB, you could then backfill the old data into it using a separate flow.
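As a rough illustration of that backfill flow (assuming the node-red-contrib-influxdb nodes - the field and tag names here are made up), a function node could reshape each recovered reading into the fields-plus-tags payload the influxdb out node accepts, with an explicit time so the points land at their original timestamps rather than "now":

```javascript
// Reshape one historical reading for the "influxdb out" node:
// payload[0] = fields (including an explicit timestamp), payload[1] = tags.
return {
    payload: [
        {
            energy_wh: msg.payload.totalWh,   // recovered cumulative value
            time: new Date(msg.payload.ts)    // original timestamp of the reading
        },
        {
            source: 'backfill'                // tag to separate old from live data
        }
    ]
};
```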