I nicked a flow from somewhere; every 3 hours it zips my Node-RED flows and saves them under a unique filename into my Dropbox.
Interesting facts:
- My Dropbox now holds over 7300 zip files, dating back to Jan 2019
- Backup file size started at 93KB and is now 493KB; total size of all backups is 1.8GB
- The flow has never once become disconnected or required re-authentication - it really has been "set and forget"
- I've never needed it, but it's nice knowing I can restore to pretty much any point in time
BUT - I feel it can be improved on.
I'd love to have some kind of "autosave":
- Prevent the flow from triggering when no changes have been made (saving Dropbox space, and making each backup more meaningful as a kind of version control)
- Somehow trigger more often when more changes are made within a period of time
Here are the possible ways I can see of achieving this:
1. Write a second flow that runs a diff on the zip files, essentially removing duplicates. That would save disk space, but it wouldn't increase the backup frequency during a flurry of changes, which would be a nice feature.
2. Increase the inject node trigger to every 1 minute, then monitor the flows.json file for changes and only run the backup if something changed. I could decrease the checking frequency if this still generates too many files.
Any other bright ideas? I think (2) is the best, but what is the best way to "monitor" the flows.json file for changes? Do I need to load the file into Node-RED and then run the comparison in a function node or something like that?
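Something like this is what I have in mind - a rough sketch of the gate as a function node, assuming fs and crypto are exposed via functionGlobalContext in settings.js, and guessing at the flows.json path (adjust for your userDir):

```javascript
// Pass the message through only if flows.json changed since the last check.
// Assumes settings.js contains something like:
//   functionGlobalContext: { fs: require('fs'), crypto: require('crypto') }
const fs = global.get('fs');
const crypto = global.get('crypto');

// Hypothetical path - point this at the flows file in your userDir
const FLOWS_PATH = '/home/pi/.node-red/flows.json';

const data = fs.readFileSync(FLOWS_PATH);
const hash = crypto.createHash('sha256').update(data).digest('hex');

if (hash === context.get('lastHash')) {
    return null;             // nothing changed - drop the message, no backup
}
context.set('lastHash', hash);
return msg;                  // change detected - trigger the backup branch
```

Wired between a 1-minute inject and the existing zip/upload nodes, backups would only fire when something actually changed, and a flurry of edits would naturally produce more frequent backups - both wishes in one. The core watch node could even replace the polling, since it emits a message whenever the watched file changes on disk.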
Probably not what you want to hear, but there are tools that will do efficient backups for you. I use scripts with rsync and hard links to get space-efficient backups. My local backups are copied to a NAS which, in turn, backs up to the cloud.
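The core of it is rsync's --link-dest option, which hard-links unchanged files to the previous snapshot so each run only costs the space of what changed. Here's the shape of the pattern, sketched as a small Node script rather than my actual shell scripts, with made-up paths:

```javascript
// Snapshot-style backup: unchanged files become hard links into the
// previous snapshot via rsync --link-dest, so snapshots stay small.
const { execFileSync } = require('child_process');
const path = require('path');

const SRC = '/home/pi/.node-red/';        // assumed source dir
const ROOT = '/mnt/nas/backups/node-red'; // assumed backup root

const today = new Date().toISOString().slice(0, 10); // e.g. "2021-05-04"
const dest = path.join(ROOT, today);
const latest = path.join(ROOT, 'latest');

// On the first run 'latest' won't exist yet; rsync warns and does a full copy.
execFileSync('rsync', ['-a', '--delete', `--link-dest=${latest}`, SRC, dest]);

// Re-point the 'latest' symlink at the snapshot just made.
execFileSync('ln', ['-sfn', dest, latest]);
```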
I also run virtual server backups (Veeam and Hyper-V); this is good for taking a snapshot of the entire Node-RED server, but not so great for things like version control or quickly dipping into backups to see what changed.
No doubt there's also a better means of version control, e.g. using GitHub, but I like to keep things very simple and sometimes find dedicated version control applications a bit too complicated.
I never do those for personal servers since I find they are hardly ever useful. A restore typically only needs a small percentage of the backup, and it is almost never useful to restore a complete server, since I will certainly want to build a replacement to the latest spec rather than restore a spec that is years old.
So I back up key settings for the OS, and then back up Node-RED as a complete installation (I install NR locally rather than globally) with the userDir as a sub-folder, so everything is in one place. That is very helpful for major upgrades, since I can restore to the previous day's backup if an upgrade goes wrong. As well as a week of daily backups, I keep a month of weekly and a year of monthly backups.
Generally agree, it's a bit clunky. I chose that option because I like to run stuff on very powerful but old Dell servers (e.g. the R720 and similar); these things are dirt cheap but, of course, come without warranty. I literally have a spare swap-out server if one goes wrong, and I do like to know there's basically only about 20 minutes of work to get up and running again. Also, my router is a virtual machine (running Untangle), hence the need to be able to get about 5 Win+nix servers up and running very quickly when I need to. Yes - I definitely prefer to rebuild to the latest versions, but the backups are for when I don't have time to do that and need stuff working again really quickly...
I do like to hear how different people approach this.
A backup of the ".node-red" dir every day, zipped; anything older than 7 days is removed.
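In case it's useful, the daily prune could look something like this, sketched in Node - the directory and zip naming are assumptions:

```javascript
// Delete zipped backups older than 7 days from the backup directory.
const fs = require('fs');
const path = require('path');

const BACKUP_DIR = '/mnt/backups/node-red'; // assumed location
const MAX_AGE_MS = 7 * 24 * 60 * 60 * 1000; // 7 days

for (const name of fs.readdirSync(BACKUP_DIR)) {
    if (!name.endsWith('.zip')) continue;   // only touch the backup zips
    const file = path.join(BACKUP_DIR, name);
    if (Date.now() - fs.statSync(file).mtimeMs > MAX_AGE_MS) {
        fs.unlinkSync(file);                // older than a week - remove it
    }
}
```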
Do you need a backup every 3 hours? Do you make so many changes to your flows that you would no longer be able to recover after 3 hours?
If you want every change to be stored, you are also storing potential problems.
Perhaps give some thought to what you actually want to achieve. If it is just to make sure you have two copies of everything you are doing, rsync is probably the easiest and most efficient way.
In my original post - node-RED Backup Flow - the flow I posted creates a backup each night, and because of the format of the backup name, Dropbox keeps the previous 30 days' backups available to restore; it weeds out anything older than 30 days.
I use Paul's backup flow to get a backup from within NR each night on each of my servers.
I also run this Linux utility from the command line every hour against the NR directory; it picks up any changed files and copies them across to my Google Drive folder. It's just a shell wrapper around Linux rsync that handles naming, deleting, etc.
As well as VM snapshots for total machine recovery and/or rollback.