Backing up and restoring Global (and Flow) variables

Hi,

I started tonight with the idea of backing up my global variables to a file, something I can then restore after a critical failure of my system. I take regular backups of my flows, but not of the context variables they use.

This was unfortunately driven by a critical failure of my system that occurred last Sunday!

I appreciate that most people don't need to back up global variables, but my solution allows users to change global and flow variables via UIs, and I need a way to retain those changes. Put another way, it would be quite unpleasant, time-consuming and impractical to have to reinstate the user changes by hand after a restore.

I am happy to tackle this any which way (i.e. maybe there is a better way to do it), but I started tonight with the following code to back up all my globals to a file.

[
    {
        "id": "0e24839b54f8e811",
        "type": "function",
        "z": "530240e994fd158a",
        "name": "function 18",
        "func": "// Function to retrieve all global variables and store them in an array\nfunction getAllGlobalVariables() {\n    let globalVariables = global.keys(); \n    let globalValues = []; \n\n    // Loop through each global variable name\n    globalVariables.forEach(function(variableName) {\n        let variableValue = global.get(variableName); \n        globalValues.push({ name: variableName, value: variableValue }); \n    });\n\n    return globalValues; \n}\n\nlet allGlobalVars = getAllGlobalVariables();\nmsg.payload = allGlobalVars\nmsg.fileName = \"/home/user/backups/global.json\"\nreturn msg",
        "outputs": 1,
        "noerr": 0,
        "initialize": "",
        "finalize": "",
        "libs": [],
        "x": 1790,
        "y": 1000,
        "wires": [
            [
                "b9ef31998bfd7f28"
            ]
        ]
    },
    {
        "id": "b01a9377ca50517c",
        "type": "inject",
        "z": "530240e994fd158a",
        "name": "",
        "props": [
            {
                "p": "payload"
            },
            {
                "p": "topic",
                "vt": "str"
            }
        ],
        "repeat": "",
        "crontab": "",
        "once": false,
        "onceDelay": 0.1,
        "topic": "",
        "payload": "",
        "payloadType": "date",
        "x": 1580,
        "y": 1000,
        "wires": [
            [
                "0e24839b54f8e811"
            ]
        ]
    },
    {
        "id": "b9ef31998bfd7f28",
        "type": "file",
        "z": "530240e994fd158a",
        "name": "",
        "filename": "fileName",
        "filenameType": "msg",
        "appendNewline": true,
        "createDir": false,
        "overwriteFile": "true",
        "x": 1980,
        "y": 1000,
        "wires": [
            []
        ]
    }
]
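An optional tweak I am considering (not in the flow above, just a thought): stringify the array with indentation before it reaches the file node, so the backup file is human-readable and diff-friendly. The last lines of the function would become:

let allGlobalVars = getAllGlobalVariables();
// Pretty-print so the backup file is easy to eyeball and diff
msg.payload = JSON.stringify(allGlobalVars, null, 4);
msg.fileName = "/home/user/backups/global.json";
return msg;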

The next step is to be able to restore from the file, which would likely cover two scenarios:

  1. reinstate the entire system from a cloned disk image - so most, but not necessarily all, of the variables would already exist

  2. reinstate just the Node-RED flows to an existing running system - tbh, not sure how this would come about other than Node-RED somehow becoming corrupt, which in 3.5 years of very heavy usage it has never done.

So, the primary objective is probably 1. above.

Before I go and spend more hours writing the code to restore the variables from the file I created (which, btw, feels like a daunting task given all the different structures of global variables I currently have defined), I was wondering if someone else has tackled this problem already.
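To frame the question, my first thought for the restore side is something like the function below, fed by a "read file" node set to output a single utf8 string. This is a sketch only - untested, and it assumes the default context store and the { name, value } format written above:

// Sketch: restore globals from the backup file created above.
// Assumes msg.payload is the JSON string written by the backup flow,
// i.e. an array of { name, value } objects.
let entries;
try {
    entries = JSON.parse(msg.payload);
} catch (e) {
    node.error("Backup file is not valid JSON: " + e.message, msg);
    return null;
}

entries.forEach(function (entry) {
    // global.set() stores objects/arrays as-is, so nested structures
    // should come back exactly as they were saved
    global.set(entry.name, entry.value);
});

msg.payload = "Restored " + entries.length + " global variables";
return msg;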

The next step in my endeavour would be to iterate over the flow variables and do the same - back up to file and restore on failure.
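The flow-scope version should be almost identical; a sketch (with the caveat that it has to run in a function node inside each flow, since flow context is scoped per flow - the file path here is just a placeholder):

// Sketch: same approach as the global backup, but for flow context.
let flowValues = [];
flow.keys().forEach(function (variableName) {
    flowValues.push({ name: variableName, value: flow.get(variableName) });
});
msg.payload = flowValues;
msg.fileName = "/home/user/backups/flow.json"; // placeholder path, one file per flow
return msg;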

This is why I stated the following above: "I am happy to tackle this any which way, i.e. maybe there is a better way to do it."

PS: I wonder if it's easier to just copy the entire Node-RED context folder at the time of backing up the flows and retain them both as a coupled backup? I guess that would work for a disaster-recovery situation, i.e. import the flows into a new instance of Node-RED (do any manual config) + copy the backup of the context folder to the same instance... voilà?

On that train of thought, if I was daring enough / knew that the backed-up context had the same data structures as the current Node-RED flows, could I just (stop the Node-RED service) clear out the context folder and copy the backup across... would it "just" work?

My node-red setup deals with transient states regardless, so it's not an issue that the restored data would not reflect transient values... the solution initialises all transient variables on start-up, or at least that's how it's intended to work :slight_smile:

EDIT: Upon writing this post, I have convinced myself that backing up the context folder is a good thing, so I have gone and done that "just in case"... looking forward to hearing whether this was a good idea or not, and why.

If you are using persistent context then yes, backup the context folder.

For me this is part of my routine backup, which consists of the whole .node-red folder apart from the node_modules folder, so the flows are also backed up.


Thanks @Colin!

Fortunately I plan to use the above code to build a web page to view some of my variables in real time. It helps when developing to be able to watch variables change as they happen.

Before I venture that way, do you know if there is a tool out there to do this already?

It is called the Node-RED Dashboard.


I have a taste for building Web UIs using VueJS, but thanks, I will consider it.

@flowfuse/node-red-dashboard is based on Vue (and vuetify). The ui-template node permits full Vue components to be written in your flows.

Alternatively, if you are proficient in web development, ui-builder gives you much more freedom.


Thanks Steve, Aleks is already a long-time UIBUILDER user. :slight_smile:

I've really always wanted to come up with a plugin context handler that would issue an EVENT when things were changed. It shouldn't actually be that hard, since Node-RED already allows for pluggable context handlers, so all that should be needed would be to add a hook into the set command.

For a lot of Node-RED users, such an event hook would often be enough to remove the need for MQTT, which is what most of us currently use (sending data both to MQTT and to context variables).

Last time I tried though, my JavaScript fu wasn't quite up to the task and since then I've not really had the time.
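The shape I have in mind is roughly this - a sketch only, assuming the standard pluggable context-store interface (open/close/get/set/keys/delete/clean), wrapping whichever base store is already in use so that every set() also emits an event:

const { EventEmitter } = require("events");

// Sketch: wrap an existing context store (memory, localfilesystem, ...)
// so listeners are notified whenever a value is set.
function eventedStore(baseStore) {
    const events = new EventEmitter();
    const wrapped = Object.create(baseStore);
    wrapped.set = function (scope, key, value, callback) {
        // Notify listeners before delegating to the real store
        events.emit("contextSet", { scope: scope, key: key, value: value });
        return baseStore.set(scope, key, value, callback);
    };
    wrapped.events = events; // subscribe with wrapped.events.on("contextSet", fn)
    return wrapped;
}

module.exports = eventedStore;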


Thanks all.

I'll probably build it in VueJS direct as it's what I know and can do quickly.

FYI, I went rogue recently and set up nginx to host a web page, plus connected the page to my MQTT server, so the whole thing talks directly to Node-RED using the built-in MQTT nodes.

No good reason for doing it other than to prove it can be done.

My main home automation setup is still 100% on uibuilder!


In the end I created a DOS batch script.

Took a while to get it to pick up hidden files and folders and output in a way that I like, i.e. I can see the files being copied and then the DOS window automatically disappears.

I modified the text in this section to my specific config:

REM *************** MAKE CHANGES HERE ***************
REM Set source and destination paths
set source=[drive letter]:\[folder name]
set destination=[drive letter]:\[folder name]\%timestamp%__[file name]

REM Directly specify folders to exclude
set "exclude_dirs=%source%\node_modules %source%\uibuilder"
REM *************** MAKE CHANGES HERE ***************

The entire script is here:

@echo off
setlocal enabledelayedexpansion

REM Get current date and time
for /f "tokens=2 delims==" %%I in ('wmic os get localdatetime /value') do (
    set datetime=%%I
)

REM Format date and time
set year=%datetime:~0,4%
set month=%datetime:~4,2%
set day=%datetime:~6,2%
set hour=%datetime:~8,2%
set minute=%datetime:~10,2%
set second=%datetime:~12,2%
set timestamp=%year%-%month%-%day%__%hour%-%minute%-%second%

REM *************** MAKE CHANGES HERE ***************
REM Set source and destination paths
set source=[drive letter]:\[folder name]
set destination=[drive letter]:\[folder name]\%timestamp%__[file name]

REM Directly specify folders to exclude
set "exclude_dirs=%source%\node_modules %source%\uibuilder"
REM *************** MAKE CHANGES HERE ***************

REM Create the destination directory if it does not exist
if not exist "%destination%" mkdir "%destination%"

REM Copy all files and folders, including hidden ones, excluding specified folders from the root
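REM Switches used:
REM   /E    copy subdirectories, including empty ones
REM   /XD   exclude the directories listed in exclude_dirs
REM   /XJ   skip junction points (avoids looping on links)
REM   /MT:8 copy using 8 threads
REM   /A-:H remove the Hidden attribute from copied files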
robocopy "%source%" "%destination%" /E /XD %exclude_dirs% /XJ /MT:8 /A-:H

REM Display completion message
echo Copy operation completed.

REM pause
REM exit /b 0

I then did the following:

  • created a .bat file in Windows
  • pasted the script above into it
  • updated the source and destination folders + file name I wanted
  • updated any folders to exclude from the backup. Currently I have listed "node_modules" and "uibuilder", but I can add as many as I want, and remove some if I want to.

Important Note:

  1. The excluded folders are matched only in the root of the source folder being copied. It took quite some time to get this part to work. Subfolders deeper in the tree with the same names WILL be copied across. Different people may need different behaviour, in which case the code would have to be modified and re-tested.

There may be some edge case that means not all files are copied. I have now tested this with 10 batch files, backing up data for:

  • node-red
  • web apps
  • mobile apps
  • zigbee2mqtt

And each time I test, the backup has the same number of files and the same total size as the source.

I share the above on the basis that you are grown-ups and will complete the same tests yourself, as well as re-testing periodically to make sure it works as expected.


The next part requires that I have a network share to my source devices for the scripts to work. I have tested the following on:

  • Pi 32 bit legacy
  • Pi 64 bit current 2024 build, as of writing
  • Debian 11

I connected directly to the devices with PuTTY and ran:

sudo apt update
sudo apt install samba
sudo cp /etc/samba/smb.conf /etc/samba/smb.conf.bak
sudo nano /etc/samba/smb.conf

Scrolled down to the bottom of the file and added the following lines to create the share:

[share_name]
path = [folder_path_to_share]
read only = yes
public = no

Important note:

  1. I have modified the above to restrict the share to read-only. In my actual setup it is more permissive, but I prefer not to share that setup; the above adds a layer of protection against modifying the source from Windows.

The next steps were to:

  • replace share_name with an actual name, e.g. my_pi
  • replace folder_path_to_share with the actual path, e.g. /home/pi_user
  • set a password for the user I was logged in as, e.g. pi_user:

sudo smbpasswd -a pi_user

Restarted the Samba service:

sudo systemctl restart smbd

Now in Windows I mapped a network drive using the following path format:

\\host_name\share_name

Used the username I was previously logged in as and the password I set above, e.g. username pi_user for the share my_pi + [password from above].

I assigned a drive letter and used this as the source in the DOS batch script above.

Important notes:

  1. Don't overshare folders from Pi or Linux setups. I have set up the above Samba shares to expose only the exact parent folder that contains each of the 10x backups I am running.

  2. If you do find an edge case in which the number of source files does not equal the number of destination files, please share it and the fix you had to implement. I hope there are no more edge cases, especially given the testing, but your system setup may differ.

Good luck and be careful. Use the above at your own risk. I will put this into context...

Whilst endeavouring to back up my system I decided to use Clonezilla, which is a great tool BTW, and I read someone complaining that they had used Clonezilla to back up their existing drive to a brand new clean drive. They ended up with two blank drives. Things can go horribly wrong when you back up, so check thrice, back up once and then check the backup worked!
