I know I have been down this road before, but slightly differently this time.
I have a script to backup node-red stuff locally on the same machine.
As things progress, I now need to access a remote machine (probably a NUC, but who knows at this stage) and copy files from this machine to it.
It will be a USB drive and may not always be there.
I have the share set up on the remote machine. showmount -e <ip address> shows me the share.
So now I am stuck on how to access that share.
Do I mount the share, or can I copy the files to the remote machine another way?
If I have to mount it, that isn't too much of a problem; it just makes things slightly more complicated, but not impossible.
So for now I am wanting to understand the options of how to copy the files (directories?) between machines.
What I do is more of the advanced route: I have my network shares set up in systemd unit files for mounting, including setting the permissions right. If needed I can control those through a systemctl command, but most I've set up to automount once a network connection is present. These run on a Pi that is connected over WiFi, so setting up mount points for them in /etc/fstab isn't the right path, as that runs before a network connection is present. So far it's working well, and I have a feeling I should be able to get it working through a flow too, though I'm not there yet (for lack of trying, mostly).
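As a sketch of what such a systemd mount unit can look like (the share address, mount point, and NFS options here are made up to match this thread; adjust to your own setup):

```ini
# /etc/systemd/system/mnt-backups.mount
# Note: systemd requires the unit file name to match the mount point,
# so /mnt/backups must be described by a unit named mnt-backups.mount.

[Unit]
Description=NFS share for Node-RED backups (example)
After=network-online.target
Wants=network-online.target

[Mount]
What=192.168.0.93:/home/pi/Backups
Where=/mnt/backups
Type=nfs
Options=ro,soft

[Install]
WantedBy=multi-user.target
```

A matching `mnt-backups.automount` unit (with `Where=/mnt/backups` in its `[Automount]` section) is what gives the "mount on first access" behaviour the poster describes; you then enable the automount unit rather than the mount unit.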
For remote backups I generally do it the other way round... and have the central "server" just go out to each client and copy the files back to itself at regular intervals using a simple loop and rsync - if one of the clients isn't there for some reason then it just moves on to the next.
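That pull-style loop could look something like this (hosts, user, and paths are hypothetical; the point is that an offline client is skipped rather than stalling the whole run):

```shell
#!/bin/sh
# Central "server" pulls backups from each client in turn.
CLIENTS="192.168.0.93 192.168.0.100"
DEST="$HOME/TEMP/NR"

for host in $CLIENTS; do
    # Skip any client that is offline instead of letting rsync hang on it.
    if ping -c 1 -W 2 "$host" >/dev/null 2>&1; then
        mkdir -p "$DEST/$host"
        rsync -avz "pi@$host:/home/pi/.node-red/" "$DEST/$host/" \
            || echo "rsync from $host failed"
    else
        echo "skipping $host (offline)"
    fi
done
```

Run it from cron at whatever interval suits; each client's backup lands in its own subdirectory under `$DEST`.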
rsync can be used for local backup creation too, but the name is short for "remote sync". Basic usage:
rsync [options] source destination
There is always security in between; rsync has an environment variable for supplying a password (RSYNC_PASSWORD), though note that only applies when connecting to an rsync daemon, not when running rsync over ssh.
To make life easier, use ssh key pairs between your devices so that you don't need to enter a password. (ie. generate key on the source device, copy the key to the remote device). Then you can use rsync over ssh.
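The key-pair setup boils down to two commands. A sketch (the key path here is just for the demo so it doesn't clobber an existing key; normally you'd accept the default ~/.ssh/id_ed25519, and the host is the one from this thread):

```shell
# One-time setup on the source device: generate a key pair.
rm -f /tmp/demo_ed25519 /tmp/demo_ed25519.pub
ssh-keygen -t ed25519 -N "" -f /tmp/demo_ed25519

# Then install the public key on the remote device (run interactively once,
# entering the password for the last time):
#   ssh-copy-id -i /tmp/demo_ed25519.pub pi@192.168.0.93
# After that, rsync over ssh runs without a password prompt:
#   rsync -avz -e "ssh -i /tmp/demo_ed25519" pi@192.168.0.93:/home/pi/TEMP/ /home/me/TEMP/NR/
```

With the default key location you can drop the `-i`/`-e` options entirely and just run `rsync -avz pi@192.168.0.93:... dest/`.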
no - you can just scp (or rsync) in and copy the files from the remote directory; nothing needs to be mounted. As long as the remote user can a) log in to the client computer and b) has permission to read the files in that directory, it should be fine. In my case I log in as the "pi" user and use the required password (or set up certificates - which to me is easier - but not for this thread).
I am adopting your idea that the server gets the files from the remote machines.
I can SSH to the remote machine(s) OK, but I need to enter the master password to log in.
In the remote machine field.....
This is the path: 192.168.0.93:/home/pi/Backups/NR/2019-09-15.16.59.29/
But do I need it to be pi@192.168.0.93:/home/pi/Backups/NR/2019-09-15.16.59.29/?
Yeah, I'll keep trying, but just to disclose everything I am doing:
me@me-desktop:~/TEMP/NR$ rsync -avz pi@192.168.0.93:/home/pi/TEMP /home/me/TEMP/NR/
receiving incremental file list
sent 25 bytes received 115 bytes 93.33 bytes/sec
total size is 661 speedup is 4.72
me@me-desktop:~/TEMP/NR$
So, I don't get why it doesn't work.
Ok - the difference: the first command I posted doesn't have a / at the end of the remote path; the second (working) one does.
So:
me@me-desktop:~/TEMP/NR$ rsync -avz pi@192.168.0.93:/home/pi/TEMP/ /home/me/TEMP/NR/
receiving incremental file list
./
FF_Update2.txt
hosts
sent 65 bytes received 550 bytes 246.00 bytes/sec
total size is 661 speedup is 1.07
me@me-desktop:~/TEMP/NR$
Putting aside the node now - though yes I brought it up:
What is with the ssh in the command?
If I do it that way, it fails. I posted what I used, and that works. That I could work out the real, working command myself worries me - I'm not supposed to be that smart.
And yet this is what was given to me as working: rsync -avz ssh pi@192.168.0.100:/home/pi/.node-red mylocalBackupDir1/
With no leading / on the local path.