Multiple Node-RED instances with shared files

I too have been running a Node-RED cluster with Docker Swarm for a few months and have solved a few issues. The amount of basic knowledge needed in Docker, WireGuard, a shared filesystem (I'm using GlusterFS), and nginx has kept me from even attempting a how-to write-up.

However, you sound like you have the basics down, so I will share some of the solutions I came up with.

Issues:
Storage for any additions: I ran into problems when npm packages or user-contributed nodes were stored on the shared file system.

Bind-mounting the /data folder into the containers, but keeping some of its sub-folders out of the bind.

Deploying a new flow file to all of the service replicas after I make changes to it.

Solution:
Every time I need an npm package on all the instances, I scale the service down to one replica, add the package to that container, commit a new image from it, and then scale back up.
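
A rough sketch of that cycle, using the service/image names from the notes further down (the container ID, package name and version tag here are placeholders):

$ docker service update --replicas 1 noderedworkers                                    # scale down to a single replica
$ docker exec -it -w /data <container-id> npm install <package>                        # install into that container's /data
$ docker container commit <container-id> 127.0.0.1:xxxx/nodered/customimage:1.0.2      # bake the change into a new image
$ docker push 127.0.0.1:xxxx/nodered/customimage:1.0.2
$ docker service update --image 127.0.0.1:xxxx/nodered/customimage:1.0.2 --replicas 3 noderedworkers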

A cool Docker trick that not a lot of people ever need to know: to keep sub-folders inside /data from being taken over by the bind mount, I declare them as anonymous volumes:
--mount type=volume,destination=/data/.npm
--mount type=volume,destination=/data/lib
--mount type=volume,destination=/data/node_modules
This keeps them per-container and stops them from coming from the shared bind mount!

Also, by using $ docker service update I can refresh all my instances to pick up the edited flow, or any new Node-RED contrib nodes / npm packages I've added.

The best part of using docker service update is that only one of my Node-RED instances in the swarm is down at any one time. By using --update-delay 10s when creating the service, it will wait 10 s between updating each instance to the new flow file.
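
For example, to force a rolling restart so every replica picks up the edited flow file (this is the same command that appears at the end of my notes):

$ docker service update --force noderedworkers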

Here are my notes. NOTICE to all: I am unwilling to hold your hand on this. These are my scatter-brained notes that I keep in a folder so three years later I can read them and know where/what I did. They are far from a how-to and are meant for myself. If you don't understand them, then you need further remedial education... rather... start reading. If you have easy or general questions I'd be happy to help. If you don't understand this and don't have the desire to learn on your own, I recommend paying for flowfuse.com.

####################################
### NODE RED workers ###############
####################################
# THIS IS ALL A PAIN BECAUSE OF REPLICATION AND npm adding nodes!

# create directory to share between workers and manager for this worker
$ mkdir -p $HOME/gluster-mount/docker/cluster/node-red-share/workers

# create directory for instance
$ mkdir -p $HOME/gluster-mount/docker/cluster/node-red-workers/data
note: to remove the dir: $ sudo rm -r $HOME/gluster-mount/docker/cluster/node-red-workers

# start docker instance
docker service create \
--replicas 1 \
--replicas-max-per-node 1 \
--update-delay 10s \
--update-failure-action pause \
--update-parallelism 1 \
--rollback-delay 10s \
--rollback-failure-action pause \
--rollback-parallelism 1 \
--log-opt max-size=5m \
--log-opt max-file=3 \
--network internalnetwork \
--name "noderedworkers" \
--publish published=xxxx,target=xxxx,protocol=tcp \
--mount type=bind,source="$HOME/gluster-mount/docker/cluster/node-red-workers/data",destination=/data \
nodered/node-red:3.1.3-18

# check that above worked by inspecting logs
$ docker service ls
$ docker service logs -f noderedworkers

# shut down noderedworkers (from the docker manager/leader)
$ docker service update --replicas 0 noderedworkers

# edit config file
$ nano $HOME/gluster-mount/docker/cluster/node-red-workers/data/settings.js
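
For reference (this is my assumption, not something spelled out in the original notes), the settings that matter most when several replicas share one /data are things like:

    credentialSecret: "a-shared-secret",   // all replicas must use the same secret to decrypt flows_cred.json
    flowFile: "flows.json",                // keep one well-known flow file name on the shared storage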

# now that we have a running version of node-red with settings we like, let's save some files
Create a flows.json file by adding a node and deploying it.

# copy files to backup/server folder:
settings.js

# remove the service
$ docker service rm noderedworkers

# remove the dir
$ sudo rm -r $HOME/gluster-mount/docker/cluster/node-red-workers

# start docker instance ... make sure to set --constraint to the docker leader!!!!

docker service create \
--replicas 1 \
--replicas-max-per-node 1 \
--update-delay 10s \
--update-failure-action pause \
--update-parallelism 1 \
--rollback-delay 10s \
--rollback-failure-action pause \
--rollback-parallelism 1 \
--log-opt max-size=5m \
--log-opt max-file=3 \
--network internalnetwork \
--name "noderedworkers" \
--publish published=xxxx,target=xxxx,protocol=tcp \
--mount type=bind,source="$HOME/gluster-mount/docker/cluster/node-red-workers/data",destination=/data \
--mount type=volume,destination=/data/.npm \
--mount type=volume,destination=/data/lib \
--mount type=volume,destination=/data/node_modules \
--mount type=bind,source="$HOME/gluster-mount/docker/cluster/node-red-share",destination=/data/glusterfs \
--constraint node.hostname==xxxxxxxxxxxxxx.com \
nodered/node-red:3.1.3-18

# check logs and test that it's working by logging in
$ docker service logs -f noderedworkers


# add the contrib nodes needed (one way to install them is sketched after this list)
# nodes to add:
node-red-contrib-bcrypt
node-red-contrib-crypto-js-dynamic
node-red-contrib-multiple-queue
node-red-contrib-queue-gate
node-red-contrib-string
node-red-contrib-web-worldmap
node-red-node-email
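
One way to install these (a sketch; the container ID is a placeholder, and the palette manager in the editor works just as well) is to npm-install them into /data from inside the running container, the same way the xlsx module is added below:

$ docker exec -it <container-id> /bin/bash
$ cd /data
$ npm install node-red-contrib-queue-gate node-red-contrib-string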


# check that service container is running on leader!!!!
$ docker ps

# Installing npm stuff for use by function node
$ docker exec -it xxxxxxxxxxxxxx /bin/bash
$ cd /data
$ ls
$ npm install --save https://cdn.sheetjs.com/xlsx-0.20.1/xlsx-0.20.1.tgz

# edit settings.js to use module in function node
nano settings.js
---------------------------------------------------

functionGlobalContext: {
    XLSX: require('xlsx'),
},
---------------------------------------------------

# example usage inside a function node

var XLSX = global.get('XLSX');
msg.payload = XLSX.version;
return msg;
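
# Side note (my addition, not part of the original notes): Node-RED 1.3+ can also load npm
# modules straight from a function node's "Setup" tab if settings.js enables:
functionExternalModules: true,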




# check images
$ docker images
$ docker images --no-trunc

MAKE DANG SURE NO ONE HAS GOTTEN INTO YOUR REGISTRY AND CHANGED THESE FILES!!!!!!!
Better yet, never pull from an outside image ... edit the current runner, then push it as a replacement, then pull from that.

127.0.0.1:xxxx/nodered/customimage   1.0.2     sha256:00fcccccccccccc62cexxxxxxxd46c1f06xxxxxxxxxxxxxxxxxxx42vvvv   3 days ago    634MB
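
One way to double-check (my own addition): compare the image ID digest reported locally against what you expect:

$ docker image inspect --format '{{.Id}}' 127.0.0.1:xxxx/nodered/customimage:1.0.2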


# remember to remove older images ( make sure to keep the last one or two for a rollback!!!)
$ docker image rm <image-id>
# remove <none> images
$ docker rmi $(docker images -f "dangling=true" -q)

# commit changes (adding nodes) to a new docker image using the container ID
$ docker container commit -a "xxxxxx" -m "Changed default nodered" 940445496ae2 nodered/customimage:1.0.1
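
# to find the container ID to commit from (helper command, not in the original notes):
$ docker ps --filter name=noderedworkers --format '{{.ID}}  {{.Image}}'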


#remove current noderedworkers service
$ docker service rm noderedworkers

# create a registry for local images
docker service create \
--replicas 1 \
--replicas-max-per-node 1 \
--update-delay 10s \
--update-failure-action pause \
--update-parallelism 1 \
--rollback-delay 10s \
--rollback-failure-action pause \
--rollback-parallelism 1 \
--log-opt max-size=5m \
--log-opt max-file=3 \
--name "registry" \
--publish xxxx:5000 \
registry:2
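
# quick sanity check that the registry answers (my addition; this is the standard registry v2 API):
$ curl http://127.0.0.1:xxxx/v2/_catalog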

#################
# IMPORTANT     #
#################
Always disable the registry when you are not actively spinning services up/down.
docker service update --replicas 0 registry
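
# bring it back up when you need to push/pull again:
$ docker service update --replicas 1 registry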

# You can now tag and push your image :
$ docker tag nodered-noderedworkers 127.0.0.1:xxxx/nodered/customimage:1.0.1
$ docker push 127.0.0.1:xxxx/nodered/customimage:1.0.1

NOTE: to create services from that image :
$ docker service create --name myservice localhost:xxxx/myimage:version

# start up a docker service using the new image
docker service create \
--replicas 1 \
--replicas-max-per-node 1 \
--update-delay 10s \
--update-failure-action pause \
--update-parallelism 1 \
--rollback-delay 10s \
--rollback-failure-action pause \
--rollback-parallelism 1 \
--log-opt max-size=5m \
--log-opt max-file=3 \
--network internalnetwork \
--hostname="noderedworkers-{{.Task.Slot}}" \
--name "noderedworkers" \
--publish published=xxxx,target=xxxx,protocol=tcp \
--mount type=bind,source="$HOME/gluster-mount/docker/cluster/node-red-workers/data",destination=/data \
--mount type=volume,destination=/data/.npm \
--mount type=volume,destination=/data/lib \
--mount type=volume,destination=/data/node_modules \
--mount type=bind,source="$HOME/gluster-mount/docker/cluster/node-red-share",destination=/data/glusterfs \
--env NODE_OPTIONS="--max_old_space_size=2048" \
127.0.0.1:xxxx/nodered/customimage:1.0.1

# check logs and test that it's working by logging in
$ docker service logs -f noderedworkers

# start replicas
$ docker service update --replicas 3 noderedworkers

# check logs again to see that all of them spin up, and test that it's working by logging in
$ docker service logs -f noderedworkers

# check that services are running
$ docker service ps noderedworkers

### Note: when making changes to the flow in Node-RED, make sure to update all the docker service replicas
$ docker service update --force noderedworkers
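
When the change is a new image version rather than a flow edit, the same rolling behaviour applies to an image update (a sketch; the version tag here is a placeholder):
$ docker service update --image 127.0.0.1:xxxx/nodered/customimage:1.0.2 noderedworkers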
