Tips For Managing Several Node-REDs?

I have many RPis all running the same Node-RED flow with the same UI.
If a revision or update is needed, it must be implemented on every RPi I have. While that is not too bad for two or three, I am on track to have 12 all running the same flow.

Are there any tips/tricks or tribal knowledge that could help me speed up the flow updates?

Serving all twelve from one instance comes to mind, but I do not believe that would be possible in my situation. I am not even sure how it would perform serving 12 separate UIs.

Is there a way I could keep a master image of the flow on one RPi and, when changes are made, push the image to the other RPis?
Assuming the flows are stored in a file, could that file be pushed out to overwrite the corresponding files on the other RPis?

I think the easiest way would be the following:

  1. Declare one RPi and its NR installation as "master". All changes will be implemented on this NR installation.
  2. Implement a simple flow that monitors changes to the flows.json file on this RPi and then sends the flows.json to the other 11 RPis using the HTTP request node and the NR Admin HTTP API, particularly the POST /flows endpoint (a sketch of such a request follows below). Depending on your exact needs, the POST /flow endpoint might be more suitable, but it would require you to parse the flows.json file in order to extract only the flow you want to update.

One downside of this approach is that each change to flows.json would get deployed to the other RPis instantly, which might not be suitable for testing. However, you could simply use an inject node rather than a watch node in order to trigger the deployment to the other RPis manually.
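
For illustration, here is roughly what that push would look like as a shell script instead of an HTTP request node. The hostnames and file path are placeholders, and it assumes the target editors have no adminAuth configured (with adminAuth you would first have to request a token from the POST /auth/token endpoint):

```bash
#!/usr/bin/env bash
# Sketch: push the master's flows.json to each remote Node-RED Admin API.
# Hostnames and the flows path are placeholders.
FLOWS=/home/pi/.node-red/flows.json

for HOST in rpi-02 rpi-03 rpi-04; do
    curl -sf -X POST "http://${HOST}:1880/flows" \
        -H "Content-Type: application/json" \
        -H "Node-RED-Deployment-Type: full" \
        --data-binary @"${FLOWS}" \
        && echo "deployed to ${HOST}" \
        || echo "FAILED: ${HOST}" >&2
done
```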

It might also be possible to combine this approach with the projects feature of NR in order to get version control of your flows. Sorry, you will have to Google this, because as a new user I am only allowed to add two links to a post :unamused:

Oh, and one addition to my previous post: if you want to go for a more complex yet very flexible solution, you can take a look at balenaOS (formerly resinOS). It lets you very easily deploy Docker containers on devices like RPis and also manage those devices in a web console. Using this approach would mean you need to run NR in Docker and build a new image for every stable version of your flows.
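
To give a flavour of that workflow, and assuming you have a Dockerfile that installs NR and copies your flows into the image (the fleet name below is a placeholder):

```bash
# Sketch of the balena workflow; "myFleet" is a placeholder fleet name.
# Run from a directory containing a Dockerfile that bakes in your flows.
balena login
balena push myFleet    # builds the image and rolls it out to every device in the fleet
```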


I would start by using the Projects feature so that the flow is stored in a single git repository. Then on your development system you can push your new version to the repo, and have a script that uses SSH to run remote commands to fetch the latest version on the remote systems. You can either run that manually after the push or even configure it as a git hook that runs automatically on a push, possibly asking for confirmation before actually doing it.
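
A rough sketch of what that script could look like, assuming the flow lives in a Projects repository under ~/.node-red/projects (the hostnames, user, and project name are placeholders):

```bash
#!/usr/bin/env bash
# Run on the development machine after pushing to the repo.
# Hostnames, user, and project path are placeholders.
PROJECT_DIR=/home/pi/.node-red/projects/myflow

for HOST in rpi-01 rpi-02 rpi-03; do
    # node-red-restart comes with the Pi install script; otherwise use
    # "sudo systemctl restart nodered" or however your service is managed.
    ssh "pi@${HOST}" "cd ${PROJECT_DIR} && git pull && node-red-restart"
done
```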

I would love to have the version-control features that NR Projects provides with GitHub, but I cannot use it from behind our corporate firewall. In other words, cloud/Internet access is not an option because of security.

balenaOS looks promising, but I am unfamiliar with Docker and containers. It seems to provide OS-level (container) virtualization rather than machine virtualization as VMware or Hyper-V would.

Would I be able to push an updated NR "image/container" from a master image, preferably manually?
What is managed from the web console? Would I be able to manage all of my RPis from the master web console, or would I need to log in to each individual RPi's web console?

The Projects feature is not tied to GitHub. It can use any git server, whether inside or outside your firewall.

I have a local GitLab instance running on my local network that I routinely use for testing - nothing leaves the subnet.


Ah, I see. I was not aware that there was any service/server other than GitHub.
From a quick search, GitHub is just a web service built around the Git software created by Linus Torvalds.

I will take a look at using GitLab, thank you Nick.
This goes back to what Colin mentioned: using the Projects feature and pushing updates to a repo on my locally hosted Git server, or rather on my master RPi, which would also host the Git server.

I believe I have a reasonably clear idea of how to push the updates to my Git server and write scripts to issue remote commands over SSH. That all seems straightforward.

However, what is still foggy for me is exactly how the updates get deployed on the other RPis in my network.
If an RPi running a Node-RED flow in production is told to update its flow from a Git server, does the flow update just as if the Deploy button were clicked?
Does Node-RED need to be restarted?

You don't need to use anything as complex as GitLab; you just need to install git, and then you immediately get both client and server capability on that machine. Look for a tutorial on getting started with git, it is very easy.
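
For example, a bare repository reached over SSH is all a "Git server" really needs to be (the paths and hostname are placeholders):

```bash
# On the master Pi: create a bare repository to act as the "server".
git init --bare /home/pi/repos/flows.git

# On the development machine: add it as a remote over SSH and push to it.
git remote add pi pi@rpi-master:/home/pi/repos/flows.git
git push pi master
```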

Yes, you will need to restart Node-RED after updating the flow; I should have said that in my previous post. You could do that as part of the script that fetches the latest version.

Ah, I see now. It turns out I already have Git.
I will research how to interface with it and try to get NR Projects working with it.

I suppose I could also manage the Git repo on the master RPi with some GUI software on my Windows PC?

I appreciate all your help everyone. Thank you!

You could just do this using SSH and push the changed files out using rsync. If it were me, I would do the following:

  1. Designate a master Pi; build, test, and deploy flows on there.
  2. When the flows pass muster and are ready to go, copy the whole file (renaming as appropriate for each remote machine) into a subdirectory on the master.
  3. Each remote node runs an rsync cron job every minute that monitors that node's directory on the master RPi; if there is an update, the file is copied over and the script then restarts Node-RED (a sketch of such a script follows below).
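
Something along these lines on each remote node, assuming passwordless SSH keys are in place (all paths and hostnames are placeholders):

```bash
#!/usr/bin/env bash
# Pull this node's flow file from the master and restart Node-RED if it changed.
# Crontab entry: * * * * * /home/pi/bin/sync-flows.sh
SRC="pi@rpi-master:/home/pi/flow-staging/$(hostname)/"
DEST="/home/pi/.node-red/"

# --itemize-changes prints one line per transferred file, so an empty
# result means nothing changed and no restart is needed.
CHANGED=$(rsync -az --itemize-changes "${SRC}" "${DEST}")
if [ -n "${CHANGED}" ]; then
    node-red-restart    # or: sudo systemctl restart nodered
fi
```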

Craig


Yes, you could, but you would lose all the advantages of git: tracking of versions, the ability to revert to previous versions if necessary, and so on.

Colin,

Yep - although I would be archiving on each remote Pi with the rsync script to provide a local rollback mechanism once it was all up and running.

I have not looked into the Git stuff with Projects yet.

Craig

After some research with git, it seems really complex. I have gone through the git intro and its docs and followed a few tutorials on a dev RPi. I still can't seem to wrap my head around how NR would communicate with my local git server.

Watching the video that Nick released about NR Projects made it seem very simple to use with GitLab.

While I like the idea of versioning and repositories, I am not going to force it upon myself just to save some time, only to end up running something I don't fully understand.

Yep, start off with rsync and SSH and go from there. It's probably all of half an hour's work to set up if you know anything about Linux.

Craig


Hi Seth350,
I was facing the same issue. None of the solutions I tried were very handy, which is why I started learning Docker.
And believe me, in combination with Azure IoT Edge or balena (formerly resin.io), it is the handiest and most stable solution.
With a single git push master, your whole fleet of devices is updated, on top of the other device-management and metrics functions that make my life easier.


Thank you sir, I appreciate the info. I will give Docker a look-see and do some reading on it.

Suggestion to the Node-RED team? "Deploy To Fleet" button! lol..just kidding..kinda... :wink:

Hmm, if you think Git is complex, wait till you look at Docker!

There are lots of ways to synchronise files between devices, especially if they are on the same network.

As always, a caveat: if your Node-RED instances have some significant value, you should think about the potential vulnerabilities you may introduce by allowing files to be synchronised between them. But let's not sidetrack this thread with that.

Also, when deciding the best approach, you may need to think about how quickly you need your instances to sync up. Does it matter if different instances are running different versions for a while? If it does, then you need a flow (or separate script) to stop all of the flows until the update is complete.

  1. Git - as already mentioned. Not as hard as it first seems, and you don't need to understand all of it. But you do need to turn on the projects feature, and you will likely need some scripting if you want to automate keeping everything in step.
  2. SSH - or, more likely, rsync, which can use SSH as its connection. Always test rsync settings on something innocuous first, though, as it has the potential to delete everything. Once set up, it will be rock solid, probably for years and years.
  3. Docker - the advantage is that this lets you sync both the data and the whole instance if you need to. But it has very significant resource overheads and a rather steep learning curve.
  4. Shared filing system - an old-school approach, but still valid. You set up a "shared drive" using NFS or SAMBA and mount it into the same place on each device (see the mount sketch below). The main advantage is that no sync is required, since all instances use the same files. You will still need a way to restart all of your flows, of course.
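
As a sketch of option 4, each client Pi could mount a share from the master over its Node-RED user directory (the share name, credentials file, and paths are all assumptions, and sharing the whole userDir is a simplification):

```bash
# Hypothetical one-off mount of the master's Samba share over the userDir;
# an equivalent /etc/fstab entry would make it permanent.
sudo mount -t cifs //rpi-master/node-red /home/pi/.node-red \
     -o credentials=/home/pi/.smbcredentials,uid=pi,gid=pi
```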

Git just didn't blow my skirt up for what I was looking for. I was hoping for a simple approach so that if a colleague comes behind me to update something, he/she doesn't need to go watch a half dozen YouTube videos and read the Git spec before reading my "How To Update Our EnterPi's" document.

I didn't want to be killing a mosquito with a sledgehammer either. I already have Samba set up, and that is how I transfer dev files to the Pis.

From what you are saying, a dir could be mounted via Samba and a script written to copy files from the master Pi to all of the other Pis (something like the sketch below). That seems like a more suitable tool for me.
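
For instance, assuming each Pi's share is already mounted on the master (the mount points are placeholders, and which files actually belong in the list is exactly the question below):

```bash
#!/usr/bin/env bash
# Sketch: copy the flow files from the master to each Pi over existing
# Samba mounts; the Pis will still need Node-RED restarted afterwards.
FILES="flows.json"    # placeholder; the exact list is still to be determined

for MOUNT in /mnt/rpi-01 /mnt/rpi-02; do
    for F in ${FILES}; do
        cp "/home/pi/.node-red/${F}" "${MOUNT}/"
    done
done
```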

Next question: what files exactly need to be copied/shared?