I did a little searching and didn't see anything, so I'll post a q.
Is it possible to upload a new flow and deploy it via the command line?
I've got a node-red instance running flows on a remote Pi and often the only connection is via ssh.
I've duplicated that environment on my desktop, and I usually design and debug new flows there, then migrate them over when that remote Pi is close and on-network.
I'm wondering if there's a command-line way to "import" a new flow (it's JSON I assume) to the remote instance? Via ssh/scp?
Could I create a new node-red browser tab via the command line, then deploy the new flow via ssh and restart the whole kaboodle?
Great question - metered connection on the remote Pi. Very slow bitrates, expensive. Running a remote browser renders things unusable. I’ve tried. FWIW, I’ve tried SSH tunnel and RPi’s remote connect service. I’m stuck w/ text and an SSH terminal.
If you're restricted to the command line, it's going to be painful but doable.
One way (for updating the flows, at least):
Stop Node-RED
Transfer the latest flows file via SCP (to replace the current one) - pay attention to the current file name.
Start Node-RED
Provided no new modules are required - but installing those is also quite simple anyway.
(And back up the current file first!)
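In shell terms, that dance might look something like the sketch below. Hedged accordingly: the actual stop/start commands depend on how Node-RED was installed (the official Pi install script provides `node-red-stop`/`node-red-start`; a systemd install would use `systemctl`), and the scp line is a comment because the host names are placeholders.

```shell
# Sketch: swap in a new flows file, keeping a backup of the old one.
# Service control and transfer steps are comments -- adapt to your setup.
deploy_flows() {
  nr_dir="$1"; new_file="$2"
  # node-red-stop                  # official Pi install helper
  # sudo systemctl stop nodered    # ...or if run as a systemd service
  if [ -f "$nr_dir/flows.json" ]; then
    cp "$nr_dir/flows.json" "$nr_dir/flows.json.bak"   # keep a backup
  fi
  mv "$new_file" "$nr_dir/flows.json"
  # node-red-start
}

# Real use (after: scp desktop:.node-red/flows.json /tmp/flows.json):
#   deploy_flows ~/.node-red /tmp/flows.json
# Demo on a throwaway directory so nothing real is touched:
demo=$(mktemp -d)
echo '[{"id":"old"}]' > "$demo/flows.json"
echo '[{"id":"new"}]' > "$demo/incoming.json"
deploy_flows "$demo" "$demo/incoming.json"
```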
It's not the greatest answer - but given the limitations..
Others may have magic I am not aware of, of course.
But.... Wait for it....
Have you not considered FlowFuse? It seems like a good fit for your use case, and you can run an instance of FlowFuse on your development machine (I think).
(No, I am not affiliated with FlowFuse) - but it's central management for Node-RED on remote devices.
I would do as Marcus has said in the first part of that last post. Except that I would do this:
Stop node-red
rename the current flow file
upload the new flow file
restart node-red
If you need to minimise down-time, you can do just steps 2 and 3 (rename and upload) and simply restart node-red. That is safe as long as you don't have any other automation (or person) that might try to change the flow at the same time.
If I understand correctly, all flows are stored in ~/.node-red/flows.json. They’re not broken out? If so, looking at flows.json, it’s not immediately intuitive how to add a new flow. The ids look problematic to keep straight.
It seems the approach would be:
keep desktop NR in synch w/ remote Pi
export all flows from desktop to flows.json
scp flows.json to remote Pi
stop NR, move flows.json to ~/.node-red/flows.json
restart
Hmmm.
Doable I guess. I was kind’a hoping flows.json would be broken up a bit, rather than be the total collection of all flows.
But... treat the flows file as one big collection of tabs and the nodes on them.
What you could do.
Take a copy (from pi)
Import to your dev machine
Change/Deploy on dev machine (so it gets written to file)
Stop Dev (for safety)
Stop Pi (for safety)
Upload flows from dev to pi (scp)
Start pi
It should work - as mentioned by Julian, stopping is optional, but that's just me being cautious.
The IDs remain intact (as long as it's a complete replacement of your flows file).
The flows file also contains the flows for each tab - as you guessed, they are not split across files.
Also be aware of your credentials file - as that may need to travel along with the flows file.
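For reference, the credentials file is named after the flow file with a `_cred` suffix, and it's encrypted with the `credentialSecret` from settings.js - so that secret has to match on both machines. A small sketch of the naming rule, with the transfer itself left as a comment since the host name is a placeholder:

```shell
# Node-RED derives the credentials file name from the flow file name:
flow_file="flows.json"                      # whatever yours is called
cred_file="${flow_file%.json}_cred.json"    # naming rule: flows_cred.json
echo "$cred_file"

# When moving flows between machines, copy both files together, e.g.:
# scp ~/.node-red/flows.json ~/.node-red/flows_cred.json pi@remotepi:.node-red/
```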
EDIT
You could also look at the Projects Feature - which seems overlooked here.
In essence - your flows are published to a (private) GitHub repo and pulled down on the device - might be worth looking at.
(I use Projects - so I have a backup and can restore at a minute's notice.)
No need to export. Just copy ~/.node-red/flows.json from the desktop to the same location on the remote Pi and restart node-red. If you are worried about network usage then use rsync and it will only copy changes across.
Do change your settings.js file though so that the main flow file is stored in its "pretty" form. That is much easier for things like rsync or git to parse for differences.
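That's the `flowFilePretty` option in settings.js (a standard Node-RED setting):

```javascript
// ~/.node-red/settings.js -- excerpt
module.exports = {
    // ...other settings...

    // Write the flow file pretty-printed rather than on a single line,
    // so tools like git and rsync see small, localised differences.
    flowFilePretty: true,
};
```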
Of course, there is another way to do this, but it is more complex. A running instance of Node-RED has an admin API that can be used to import things. I've never used it myself so I don't know the details but it would be another approach.
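For the record, and hedged accordingly since it's untested here: the documented route is `POST /flows` on the editor port (1880 by default), which replaces the active flow configuration. The sketch below just assembles the request rather than firing it; if `adminAuth` is enabled you'd also need a bearer token first, and over a metered link you'd reach the port through the existing ssh session.

```shell
# Assemble (but don't execute) an Admin API deploy request.
# The optional Node-RED-Deployment-Type header mirrors the editor's
# deploy modes (full / nodes / flows / reload).
NR_URL="http://localhost:1880"   # e.g. via: ssh -L 1880:localhost:1880 pi@remotepi
cmd="curl -X POST $NR_URL/flows \
  -H 'Content-Type: application/json' \
  -H 'Node-RED-Deployment-Type: full' \
  --data @flows.json"
echo "$cmd"
```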
That is true for git, but I don't think it makes a difference for rsync. If I remember correctly, it works with binary as well as text files, using some sort of 'rolling checksum' to recognise changed sections of the file and send them across. I tried to understand exactly how it works once, but decided it is magic.
OK, I've never tested it to be honest so not sure. There are other advantages to storing the file formatted I think, maybe for those rare occasions you need to hack into it manually. Anyway, I've long kept mine that way by default.