Multiple Node-RED instances with shared files

This might be a strange request. I have built my first cluster. The cluster has shared storage. I started two instances of node-red and I had expected that deploying a change via one instance would show up in the other instance with a browser page refresh. This is not the case.

When I restart the instances the changes then show up in both. Is there a config in settings.js or elsewhere where I can synchronize the two instances via the shared flow file?

Because you deploy to only one instance, the other instance will not see the changes until it is restarted.

If I'm not mistaken, there is nothing in settings.js that will do this for you.

You can use a process manager that restarts the instance as soon as the flow file is modified (like pm2 with its watch option), or I can recommend FlowFuse, which offers this capability and more.
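
For illustration, here is a minimal sketch of the watch-and-restart idea using inotifywait rather than pm2 (the flow file path and the service name are assumptions, adjust for your setup):

#!/bin/sh
# restart node-red whenever the shared flow file is rewritten
# /shared/data/flows.json and the node-red service name are placeholders
inotifywait -m -e close_write /shared/data/flows.json |
while read -r path events file; do
    systemctl restart node-red
done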


I will dig further, but I did not readily see anything in the FlowFuse docs that led me to believe I can make this work.

I do like your idea of restarting the instances upon file change. I will look into that.

FlowFuse has several ways of achieving this. Here are 2:

  1. You can create a Node-RED instance that is tightly bound to devices. Any time you make a change to the instance, you can create a snapshot that gets auto-deployed to your "devices" (in your case, your 2 node-reds).
  2. Pipelines: you can create a pipeline for deploying the latest snapshot (or generating a new one) from one instance to another; it will auto-deploy.

The way FF achieves this is with something we have created called the "Device Agent". It runs your Node-REDs and communicates with the FF platform, and it can be instructed to reload/restart Node-RED when new snapshots or settings are applied.


This looks perfect.

Sort of funny: in the last half hour I created a single node-red-dev instance in my cluster alongside my multi-instance node-reds. I then created a job that shuts down the multiple node-red instances, copies the flow file from node-red-dev, and restarts the node-red instances on demand.

I will definitely look at Snapshots and their process though.
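
In rough shell terms, the job amounts to something like this (the paths and the stop/start commands are placeholders for whatever your scheduler provides):

#!/bin/sh
# stop the worker instances, copy the dev flow file over, start them again
DEV_FLOWS=/shared/node-red-dev/flows.json        # placeholder path
WORKER_DATA=/shared/node-red-workers/data        # placeholder path

stop-node-red-workers          # placeholder: e.g. scale the job down to 0
cp "$DEV_FLOWS" "$WORKER_DATA/flows.json"
start-node-red-workers         # placeholder: e.g. scale the job back up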

So my little experiment is working, with one little issue: after I copy the flow file and restart the node-red instances, a "Review Changes" dialog is triggered. Is there a way to mitigate this?

No. The flows have changed on the server.

Just refresh the browser.

You can dismiss it by reloading your browser.

Or you can unsubscribe from the event (I don't know if it's really appropriate):

RED.comms.unsubscribe("notification/runtime-deploy")

(this call must be made each time the editor is opened)

EDIT: this will not work, because unsubscribe has to be given the original callback reference.

I believe you won't be able to deploy any changes, because the flows revision will differ.

I don't believe this is a wise suggestion.

Refreshing the browser, and even closing the tab and reopening it, doesn't help: the merge-request popup still comes up.

Are you sure you are not continually writing to the flows file?

Yes, it is in a different space. The flow file does not change until I do the file copy at the operating system level.

I will do some more explicit testing, but that is what I am experiencing.

Check the timestamps to make sure.

Are you using the latest version of Node-RED?

Sorry for the late response. I have been away for a couple of days.

Timestamps are different until copied, as expected.

It all seems to work as expected now. I may have had something incorrect in the Nomad definition.

But in practice it works very nicely. I have 3 instances of node-red running in a cluster with a single vIP address.

I too have been running a node-red cluster, using Docker Swarm services, for a few months and have solved a few issues. The baseline knowledge needed in Docker, WireGuard, shared filesystems (I'm using GlusterFS), and nginx has kept me from even attempting a how-to write-up.

However, you sound like you have the basics down, so I will share some of the solutions I came up with.

Issues:

  1. Storage for any additions: I ran into issues when npm packages or user contrib nodes were stored on the shared filesystem.
  2. Binding storage of the /data folder for Docker, but not all of its subfolders.
  3. Deploying a new flow file to all Docker services after I make changes to it.

Solution:
Every time I need an npm package on all the instances, I spin my Docker service down to 1 replica and add the package inside that container. Then I commit a new image from it and spin the service back up.
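
Compressed, that cycle is roughly this (container ID, image names, and tags are placeholders; the full notes below have the details):

$ docker service update --replicas 1 noderedworkers
$ docker exec -it <container-id> /bin/bash        # add the packages inside
$ docker container commit -m "add nodes" <container-id> nodered/customimage:1.0.2
$ docker tag nodered/customimage:1.0.2 127.0.0.1:xxxx/nodered/customimage:1.0.2
$ docker push 127.0.0.1:xxxx/nodered/customimage:1.0.2
$ docker service update --image 127.0.0.1:xxxx/nodered/customimage:1.0.2 noderedworkers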

A cool Docker trick that not a lot of people ever need to know: to keep subfolders inside /data from being bind-mounted, declare them as anonymous volumes:
--mount type=volume,destination=/data/.npm
--mount type=volume,destination=/data/lib
--mount type=volume,destination=/data/node_modules
This keeps them inside the container and stops them being shared over the bind mount!

Also, by using docker service update I can refresh all my instances to use the edited flow file, or to pick up newly added user contrib nodes / npm packages.

The best part of using docker service update is that only one of my node-red instances in the swarm is down at a time: by passing --update-delay 10s when creating the service, Docker waits 10s between updating each instance to the new flow file.
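
For example, after editing the flow file, a rolling refresh of every replica is one command (the same one that closes out my notes below); with --update-delay 10s set, the replicas restart one at a time:

$ docker service update --force noderedworkers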

Here are my notes. NOTICE to all: I am unwilling to hold your hand on this. These are my scatter-brained notes, kept in a folder so that 3 years later I can read them and know where/what I did. They are far from a how-to and are meant for myself. If you don't understand them, then you need further remedial education... rather, start reading. If you have easy or general questions I'd be happy to help. If you don't understand this and don't have the desire to learn on your own, I recommend paying for flowfuse.com.

####################################
### NODE RED workers ###############
####################################
# THIS IS ALL A PAIN BECAUSE OF REPLICATION AND npm adding nodes!

# create directory to share between workers and manager for this worker
$ mkdir -p $HOME/gluster-mount/docker/cluster/node-red-share/workers

# create directory for instance
$ mkdir -p $HOME/gluster-mount/docker/cluster/node-red-workers/data
note: to remove dir $ sudo rm -r $HOME/gluster-mount/docker/cluster/node-red-workers

# start docker instance
docker service create \
--replicas 1 \
--replicas-max-per-node 1 \
--update-delay 10s \
--update-failure-action pause \
--update-parallelism 1 \
--rollback-delay 10s \
--rollback-failure-action pause \
--rollback-parallelism 1 \
--log-opt max-size=5m \
--log-opt max-file=3 \
--network internalnetwork \
--name "noderedworkers" \
--publish published=xxxx,target=xxxx,protocol=tcp \
--mount type=bind,source="$HOME/gluster-mount/docker/cluster/node-red-workers/data",destination=/data \
nodered/node-red:3.1.3-18

# check that above worked by inspecting logs
$ docker service ls
$ docker service logs -f noderedworkers

# shutdown noderedworkers (from the docker manager leader)
$ docker service update --replicas 0 noderedworkers

# edit config file
$ nano $HOME/gluster-mount/docker/cluster/node-red-workers/data/settings.js

# now that we have a running version of node-red with the settings we like, let's save some files
Create a flows.json file by adding a node and deploying it.

# copy files to backup/server folder:
settings.js

# remove the service
$ docker service rm noderedworkers

# remove the dir
$ sudo rm -r $HOME/gluster-mount/docker/cluster/node-red-workers

# start docker instance ... make sure to set --constraint to the docker leader!!!!

docker service create \
--replicas 1 \
--replicas-max-per-node 1 \
--update-delay 10s \
--update-failure-action pause \
--update-parallelism 1 \
--rollback-delay 10s \
--rollback-failure-action pause \
--rollback-parallelism 1 \
--log-opt max-size=5m \
--log-opt max-file=3 \
--network internalnetwork \
--name "noderedworkers" \
--publish published=xxxx,target=xxxx,protocol=tcp \
--mount type=bind,source="$HOME/gluster-mount/docker/cluster/node-red-workers/data",destination=/data \
--mount type=volume,destination=/data/.npm \
--mount type=volume,destination=/data/lib \
--mount type=volume,destination=/data/node_modules \
--mount type=bind,source="$HOME/gluster-mount/docker/cluster/node-red-share",destination=/data/glusterfs \
--constraint node.hostname==xxxxxxxxxxxxxx.com \
nodered/node-red:3.1.3-18

# check logs and test that it's working by logging in
$ docker service logs -f noderedworkers


# add nodes needed
# nodes to add
node-red-contrib-bcrypt
node-red-contrib-crypto-js-dynamic
node-red-contrib-multiple-queue
node-red-contrib-queue-gate
node-red-contrib-string
node-red-contrib-web-worldmap
node-red-node-email


# check that service container is running on leader!!!!
$ docker ps

# Installing npm stuff for use by function node
$ docker exec -it xxxxxxxxxxxxxx /bin/bash
$ cd /data
$ ls
$ npm install --save https://cdn.sheetjs.com/xlsx-0.20.1/xlsx-0.20.1.tgz

# edit settings.js to use module in function node
nano settings.js
---------------------------------------------------

functionGlobalContext: {
    XLSX: require('xlsx'),
},
---------------------------------------------------

#example usage inside function node

var XLSX = global.get('XLSX');
msg.payload = XLSX.version;
return msg;




# check images
$ docker images --no-trunc

MAKE DANG SURE NO ONE HAS GOTTEN INTO YOUR REGISTRY AND CHANGED THESE FILES!!!!!!!
Better yet, never pull from the image: edit the current runner, push it as a replacement, then pull from that.

127.0.0.1:xxxx/nodered/customimage   1.0.2     sha256:00fcccccccccccc62cexxxxxxxd46c1f06xxxxxxxxxxxxxxxxxxx42vvvv   3 days ago    634MB


# remember to remove older images (make sure to keep the last one or two for a rollback!!!)
$ docker image rm <image>
# remove <none> images
$ docker rmi $(docker images -f "dangling=true" -q)

# commit changes (adding nodes) to a new docker image using the container ID
$ docker container commit -a "xxxxxx" -m "Changed default nodered" 940445496ae2 nodered/customimage:1.0.1


#remove current noderedworkers service
$ docker service rm noderedworkers

# create a registry for local images
docker service create \
--replicas 1 \
--replicas-max-per-node 1 \
--update-delay 10s \
--update-failure-action pause \
--update-parallelism 1 \
--rollback-delay 10s \
--rollback-failure-action pause \
--rollback-parallelism 1 \
--log-opt max-size=5m \
--log-opt max-file=3 \
--name "registry" \
--publish xxxx:5000 \
registry:2

#################
# IMPORTANT     #
#################
Always disable the registry when you are not spinning services up or down.
docker service update --replicas 0 registry

# You can now tag and push your image :
$ docker tag nodered-noderedworkers 127.0.0.1:xxxx/nodered/customimage:1.0.1
$ docker push 127.0.0.1:xxxx/nodered/customimage:1.0.1

NOTE: to create services from that image :
$ docker service create --name myservice localhost:xxxx/myimage:version

# start up a docker service using the new image
docker service create \
--replicas 1 \
--replicas-max-per-node 1 \
--update-delay 10s \
--update-failure-action pause \
--update-parallelism 1 \
--rollback-delay 10s \
--rollback-failure-action pause \
--rollback-parallelism 1 \
--log-opt max-size=5m \
--log-opt max-file=3 \
--network internalnetwork \
--hostname="noderedworkers-{{.Task.Slot}}" \
--name "noderedworkers" \
--publish published=xxxx,target=xxxx,protocol=tcp \
--mount type=bind,source="$HOME/gluster-mount/docker/cluster/node-red-workers/data",destination=/data \
--mount type=volume,destination=/data/.npm \
--mount type=volume,destination=/data/lib \
--mount type=volume,destination=/data/node_modules \
--mount type=bind,source="$HOME/gluster-mount/docker/cluster/node-red-share",destination=/data/glusterfs \
--env NODE_OPTIONS="--max_old_space_size=2048" \
127.0.0.1:xxxx/nodered/customimage:1.0.1

# check logs and test that it's working by logging in
$ docker service logs -f noderedworkers

# start replicas
$ docker service update --replicas 3 noderedworkers

# check logs again to see that all of them spin up, and test that it's working by logging in
$ docker service logs -f noderedworkers

# check that services are running
$ docker service ps noderedworkers

### Note: when making changes to the flow in node-red, make sure to update all docker services
$ docker service update --force noderedworkers


I was looking real hard at using Swarm. And then I was told by another developer that:

When Mirantis bought Docker, they removed all the developers from that work. There is only one actual dev contributing to Swarm any more. It is effectively a dead/abandoned product

Another possibility is to use the admin API's /flows endpoint to "push" new flow.json files to the remote containers from a central node-red server (or even ansible scripting via curl). This would keep the containers from restarting (I'm assuming node-red can reload its flows without killing the docker process).
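
A hedged sketch of that push with curl (host, port, and token are placeholders, and it assumes adminAuth is enabled on the targets):

$ curl -X POST http://node-red-1:1880/flows \
    -H "Authorization: Bearer $TOKEN" \
    -H "Content-Type: application/json" \
    -H "Node-RED-Deployment-Type: full" \
    --data @flows.json

If I read the admin API docs right, with a shared flow file you could instead send Node-RED-Deployment-Type: reload, which tells each runtime to re-read the flows from storage rather than take them from the request body.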

Of course, this requires that each docker instance is read-only, since any changes are saved inside the container. Not sure if that fits with what you are trying to do...

Or what about having a CMD in the Dockerfile that pulls the latest version of flows.json from a central git repo before node-red starts running? Also, I've never tried using the "Projects" feature from a docker container, but it seems like it would have the advantage of getting the latest flows by simply deleting/recreating the containers.
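
A rough sketch of that entrypoint idea (the repo URL is a placeholder, and it assumes git is available in the image and /data was cloned from the repo at build time):

#!/bin/sh
# refresh /data from the central flows repo before handing off to node-red
cd /data && git pull
# start node-red with /data as the user dir, as the stock image does
exec node-red --userDir /data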

Sure, they killed the teams managing the service product, but docker-ce swarm is still a supported feature. Most people follow the trends, and many have moved away from Docker for cluster use.
If you are going to be managing a large cluster (think banks), Kubernetes is the way to go. If you're one guy trying to do something fun and want to keep it easy, Docker Swarm it up!
