Hi Jbudd,
This is how I would change your flow for my situation. I hope I didn't upset you by trying ChatGPT, as I am just trying to learn how it works.
Thanks
Hi Jbudd,
I changed the flow to this and it sends me an email telling me how many files are in the backup, but the backup does not appear in the backup directory.
Thanks
Looking at your screen capture, I think you have misunderstood the {{{ ... }}} bits of my template.
The template in the flow I posted contains a Bash script, but it also uses Node-red's "Mustache" syntax to insert msg.nodereddirectory and msg.backupfilename into the script.
# To override defaults pass these in as msg properties
NODEREDDIR={{{nodereddirectory}}}
BACKUPTO={{{backupfilename}}}
If msg.nodereddirectory contains .node-red then the first line of the script becomes NODEREDDIR=.node-red. If it does not exist, or is blank, the line becomes NODEREDDIR=, also valid Bash syntax (setting the variable $NODEREDDIR to an empty string). That's it for the Node-red jiggery-pokery. Everything else in the template is pure Bash syntax.
# Default location
if [ -z $NODEREDDIR ]
then
NODEREDDIR=.node-red
fi
Taking a look at my inject node:
It sets msg.nodereddirectory to .node-red, which happens also to be the default in the script. It sets msg.backupfilename with the JSONata expression "node-red-" & $moment().format("YYYY-MM-DD") & ".tar.gz" to obtain the current date and wrap it between node-red- and .tar.gz. You could connect a debug node (set to show the full message) to the outputs of the inject and template nodes to see more clearly what's going on.
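To make the rendering concrete, here is a sketch of what that part of the template looks like after Mustache has done its substitution, with the value assumed to come from the inject node:

```shell
# Sketch: the template AFTER Mustache substitution, assuming the inject
# node set msg.nodereddirectory to ".node-red".
NODEREDDIR=.node-red    # rendered from: NODEREDDIR={{{nodereddirectory}}}

# If the msg property were missing or blank, the line would render as
# NODEREDDIR= (an empty string) and this default would kick in:
if [ -z "$NODEREDDIR" ]
then
    NODEREDDIR=.node-red
fi
echo "$NODEREDDIR"
```

Either way, $NODEREDDIR ends up as .node-red, which is why the flow works with an empty inject message too.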
I'm happy to help you further to adapt my flow to your needs but I need to see any changes you have made as code I can import, not as a screen capture.
Hi Jbudd,
This is how I changed the template:
# To override defaults pass these in as msg properties NB Bash does not see these mustaches
NODEREDDIR={{{/home/pi/.node-red}}}
BACKUPTO={{{/dev/sda1/node-red-backups}}}
# Default location
if [ -z $NODEREDDIR ]
then
NODEREDDIR=.node-red
fi
# Default backup file
if [ -z $BACKUPTO ]
then
BACKUPTO=nodered.tar.gz
fi
# Don't overwrite backupfile
#if [ -s "$BACKUPTO" ]
#then
# echo "Error: $BACKUPTO already exists" >&2
# exit 1
#fi
ARCHIVER="tar -czf $BACKUPTO --numeric-owner --exclude=node_modules*"
$($ARCHIVER $NODEREDDIR) # Do the backup
COUNTFILES="$(tar -tvf $BACKUPTO | wc -l)"
FILESIZE="$(du -h $BACKUPTO | sed -e 's/\s.*//')"
printf "%s files %s in %s" $COUNTFILES $FILESIZE $BACKUPTO
Thanks
I refer you to this line of my last post
This is making an assumption about Node-RED's current working directory. I would not recommend that.
To find out what the current working directory (cwd) is for your instance of node-red, run an exec node with one of the following:
Linux : Use the pwd command in the terminal to display the full path of the current working directory. This command shows the absolute path relative to the root directory.
Windows (CMD) : Use the cd command without any arguments to print the full path of the current directory. Alternatively, use the echo %CD% command to display the current directory path stored in the CD environment variable.
Windows (PowerShell) : Use the Get-Location command to retrieve the current directory path. You can also use (Get-Location).Path for the same result.
Well that's true.
It's almost certainly possible to run Node-red on Linux in such a way that the Node-red directory is not called .node-red, nor indeed need it be in the current directory.
The script could check the location of the flows file and strip off the actual filename
NODEREDDIR=$(journalctl --unit=nodered --grep="Flows file" --merge --lines=1 --output=cat | sed 's/.: //; s/[^/]$//')
This does make the assumption that Node-red is being run by a systemd service called nodered.service and that the directory containing the flows file is also the directory where Node-red is installed.
It might result in the tar archive having absolute pathnames too, which is generally a bad thing.
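As a sketch of that "strip off the actual filename" step, dirname can do the stripping; the log line below is an assumed example of Node-red's "Flows file" startup message, standing in for the real journalctl output:

```shell
# Simulated journalctl output (assumed format of Node-red's startup
# message); in the real script this would come from the journalctl
# command shown above.
LOGLINE="21 Jan 12:00:00 - [info] Flows file : /home/pi/.node-red/flows.json"

# Strip everything up to the last ": " to leave the path, then let
# dirname remove the filename, leaving the Node-red directory.
FLOWSFILE=$(echo "$LOGLINE" | sed 's/.*: //')
NODEREDDIR=$(dirname "$FLOWSFILE")
echo "$NODEREDDIR"    # /home/pi/.node-red
```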
And bearing in mind that in 90% of installations on Linux, Node-red is installed in the directory ./.node-red relative to the working directory, I think mine is a reasonable default.
I did allow for different setups though by passing msg.nodereddir from the inject node.
Hi Jbudd,
I think what you are saying is that your flow is for a default location for Node-red, which I am sure mine is. If I can get it working it would be great. I am going to delete the first one and import your flow again, as I messed it up, and hopefully go from there.
Thanks
The complexity of my latest post was really addressed to @TotallyInformation who is not a beginner.
You can safely ignore that post (or digest it, entirely up to you)
Hi Jbudd,
I have re-imported your flow and am trying to work out how it works. If I click on it I get a debug message telling me how many files it has backed up. It creates the archive file in /Home/Pi, but I have no idea why it is backed up to that location as I can't see Home/Pi in any of the nodes. All I can work out is that the inject node says which directory Node Red uses and what the backup file will be called.
Thanks
Sorry to be pedantic but Linux cares about case. It's almost certainly /home, though your username might be pi or Pi
Node-red generally runs in the user's home directory. For me that's /home/pi.
The script, by default, creates the backup file as node-red.tar.gz in that directory.
# Default backup file
if [ -z $BACKUPTO ]
then
BACKUPTO=nodered.tar.gz
fi
If the inject node includes msg.backupfilename it will override this default. The example flow I posted specifies node-red-2026-01-21.tar.gz
You can backup somewhere else by injecting msg.backupfilename with an absolute path, eg
"/mount/usb/backups/node-red-" & $moment().format("YYYY-MM-DD") & ".tar.gz"
(NB I have no idea if /mount/usb/backups is the correct directory of your USB mount. Also I would make the script verify that the USB stick is mounted on the directory prior to writing to it)
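A minimal sketch of that "verify the USB stick is mounted" check, assuming mountpoint(1) from util-linux is available and that /mount/usb/backups is the (hypothetical) mount target:

```shell
# Abort before writing if the target is not actually a mounted
# filesystem (otherwise the backup would silently land on the SD card).
# /mount/usb/backups is a hypothetical path; substitute your own.
require_mount() {
    if ! mountpoint -q "$1"
    then
        echo "Error: $1 is not a mountpoint" >&2
        return 1
    fi
}

# "/" is always a mountpoint, so this demonstrates the success path;
# in the script you would call: require_mount /mount/usb/backups || exit 1
require_mount / && echo "OK to write backup"
```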
None of mine are.
This might be a fine default for many, but I wouldn't want people to get confused if it isn't.
For this and various other reasons, I take a more node.js-standard installation route: installing Node-RED to a clean folder and making the userDir the data folder within that parent. That way, my backup *.sh files sit in the parent folder and all relative folders are always known. I have daily, weekly and monthly backup scripts.
Here is an example of a daily backup script:
#! /usr/bin/env bash
# Redirect stdout to syslog -
# show with: `sudo journalctl -t nrmain-backup`
# or `sudo cat /var/log/syslog | grep nrmain-backup`
exec 1> >(logger -t nrmain-backup -p local0.info)
# redirect stderr to syslog
exec 2> >(logger -t nrmain-backup -p local0.err)
# --- SET THESE TO THE CORRECT LOCATIONS --- #
NR_SERVICE_NAME=nrmain
NR_SOURCE_PATH=/home/home/nrmain
NR_DEST_PATH=/home/home/nrmain-backup
# ------------------------------------------ #
STARTDATE=$(date +'%Y-%m-%d %T')
echo " "
echo "Starting daily backup of $NR_SOURCE_PATH/ to $NR_DEST_PATH/ ..."
echo "Rotating snapshots ..."
# Delete oldest daily backup
if [ -d $NR_DEST_PATH/daily.7 ] ; then
echo " Deleting oldest daily backup $NR_DEST_PATH/daily.7"
# The slow but understandable way:
#rm -rf $NR_DEST_PATH/daily.7
# The faster way (needs an empty folder)
rsync -rd --delete $NR_DEST_PATH/empty/ $NR_DEST_PATH/daily.7/
fi
# Shift all other daily backups ahead one day
for OLD in 6 5 4 3 2 1 ; do
if [ -d $NR_DEST_PATH/daily.$OLD ] ; then
NEW=$(($OLD+1))
echo " Moving $NR_DEST_PATH/daily.$OLD to $NR_DEST_PATH/daily.$NEW"
# Backup last date
# ISSUE: touch does not support options on synology (busybox) system
touch $NR_DEST_PATH/.dtimestamp -r $NR_DEST_PATH/daily.$OLD
mv $NR_DEST_PATH/daily.$OLD $NR_DEST_PATH/daily.$NEW
# Restore timestamp
touch $NR_DEST_PATH/daily.$NEW -r $NR_DEST_PATH/.dtimestamp
fi
done
# Copy hardlinked snapshot of level 0 to level 1 (before updating 0 via rsync)
if [ -d $NR_DEST_PATH/daily.0 ] ; then
echo " Copying hardlinks from $NR_DEST_PATH/daily.0 to $NR_DEST_PATH/daily.1"
cp -al $NR_DEST_PATH/daily.0 $NR_DEST_PATH/daily.1
fi
echo "Finished rotating snapshots ..."
if ! [ -d $NR_DEST_PATH/daily.0 ] ; then
mkdir -p $NR_DEST_PATH/daily.0
fi
# Set today's date on the current backup folder
touch $NR_DEST_PATH/daily.0
ENDDATE=$(date --iso-8601=s)
# Back up
echo "Performing rsync backup ..."
rsync --archive --hard-links --delete --delete-excluded \
--exclude 'node_modules' --exclude 'data/node_modules' --exclude 'data/externalModules/node_modules' \
$NR_SOURCE_PATH/ $NR_DEST_PATH/daily.0
# Validate return code
# 0 = no error,
# 24 is fine, happens when files are being touched during sync (logs etc)
# all other codes are fatal -- see man (1) rsync
# You can output the result to MQTT or to a Node-RED http-in endpoint
if ! [ $? = 24 -o $? = 0 ] ; then
echo "Fatal: Node-RED daily backup finished with errors!"
#curl --insecure -I 'https://localhost:1880/nrnotify?type=backup&schedule=daily&result=fail'
mosquitto_pub -r -t services/$NR_SERVICE_NAME/backup/daily/fail -m $ENDDATE
else
echo "Finished Node-RED daily backup, no errors."
#curl --insecure -I 'https://localhost:1880/nrnotify?type=backup&schedule=daily&result=success'
mosquitto_pub -r -t services/$NR_SERVICE_NAME/backup/daily/success -m $ENDDATE
fi
# Sync disks to make sure data is written to disk
sync
#EOF
You will note that it even logs output to syslog. It uses rsync to do the copies efficiently and rotates the backups correctly: 7 dailies, 4 weeklies and 12 monthlies, with unchanged files/folders hard-linked to reduce storage space.
And finally, it publishes to MQTT on finishing.
Oh, and the daily/weekly/monthly backup scripts are run by CRON for robustness. Since we are backing up Node-RED, having the backup IN Node-RED seems perhaps problematic.
Here is my backup script
git add . && git commit -m "Backup $(date +'%Y-%m-%d')" && git push origin main
Hi guys,
Thank you for all the suggestions. The last thing I want to do is cause members to compete and fall out. I am 65 years old and still trying to learn Linux. I should mention that I work for a registered children's charity retreat centre, and I use the Raspberry Pi to constantly monitor hot and cold water storage. I thought it would be a good idea to back the files up, as if I had to start again it would be a big job and inconvenient. I am sure I will work it out in the end.
Thanks
I have been using Unix for 50 years and I'm still trying to learn it
One Linux lesson is there are many ways to achieve a given result.
I vaguely remember a competition to find all the different command line ways to print the contents of a file on the screen. I think there were about 30.
Nobody's falling out here, we have just picked different ways of doing the backup and are each explaining our approach.
Actually it's good to see how others do things because you can get too focussed on your own approach.
Also a useful reminder to follow your own advice and backup your devices safely (a power cut killed two of my Raspberry Pi SD cards yesterday)
This is why, after exhaustively rebuilding systems each time the RPiOS version changed, I developed my approach to defining Bash scripts within Node-red. No need to rebuild /local/bin as well as Node-red. I do still have some external dependencies unfortunately.
I realised that I hadn't actually explained how I really do backup/restore using my methods.
I can go back 7 individual days, 4 weeks or 12 months. Restoring from backup would simply be an RSYNC command and then npm install. I backup to the same server but the backups are further backed up to a NAS using the NAS's tools. The NAS is also backed up to pre-encrypted cloud storage.
I've never had to restore from backup though, not in over a decade of use and several major reworks of my "live" Node-RED home automation instance. What I have done - several times - is copy a live instance forward to a new device.