Hi, I have been working on a project for several months now, and I wanted to deploy Node-RED on Docker Swarm; I had already done it with Docker Compose.
I realised that in Docker Swarm I needed to share the data across all my Node-RED instances, so I decided to use GlusterFS to share the volumes. Now I am encountering several problems. Sometimes I try to log in to Node-RED but it redirects me back to the login page, as if the other instances don't know that I have already entered my password. Also, sometimes my Node-RED just does not start, and the logs don't tell me anything, even in trace mode.

So I think it is just impossible to share the volumes across several nodes for Node-RED. I will try to use local volumes on all the nodes, even if it means I have to apply all my production changes on every machine. But I am not sure if that would work. I am just hoping for the best.
If someone knows something about this, please let me know.
PS: I am using Traefik as a reverse proxy.
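For context, this is roughly the stack I am deploying (a sketch only; the hostname, paths and replica count are placeholders, not my real values):

```yaml
# Hypothetical Swarm stack: every replica shares the same /data over GlusterFS
version: "3.8"
services:
  nodered:
    image: nodered/node-red
    volumes:
      # /mnt/gluster is where the GlusterFS volume is mounted on every Swarm node
      - /mnt/gluster/nodered-data:/data
    deploy:
      replicas: 3
      labels:
        # in Swarm mode Traefik reads labels from deploy.labels
        - "traefik.enable=true"
        - "traefik.http.routers.nodered.rule=Host(`nodered.example.com`)"
        - "traefik.http.services.nodered.loadbalancer.server.port=1880"
```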
I don't know your exact use case, so maybe what I am doing does not apply...
I am also running Node-RED in Docker, and I also need redundancy, so initially I thought about going with Docker Swarm.
In my case, my Node-RED flows (inside Docker) are configured using ENV variables. By changing the variables, I can adapt, for example, which MQTT broker each pair of instances (at the moment I have a bit more than 50 pairs) is connected to (a sketch follows below).
For each remote MQTT broker, I have two Docker containers running in parallel, almost like Swarm would do.
But since I couldn't achieve what I wanted with Swarm, I developed my own active/backup logic instead. The containers store their status in a Redis cluster, and each container queries Redis to check whether it is active or backup.
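As an illustration only (the service and variable names here are made up, not the ones I actually use), one pair looks roughly like this. The actual election logic lives in the flows (linked below); a common way to implement it is a status key with a TTL that the active instance keeps refreshing:

```yaml
# Hypothetical sketch: both instances of a pair follow the same remote broker
# and coordinate through the same Redis; whichever instance holds the status
# key acts as "active", the other stays "backup" until the key expires.
services:
  redis:
    image: redis:7

  nodered-a:
    image: nodered/node-red
    environment:
      MQTT_BROKER_URL: "mqtts://broker-one.example.org:8883"  # which remote broker this pair follows
      REDIS_URL: "redis://redis:6379"                         # where the status keys live
      PAIR_KEY: "broker-one"                                  # key this pair coordinates on

  nodered-b:
    image: nodered/node-red
    environment:
      MQTT_BROKER_URL: "mqtts://broker-one.example.org:8883"
      REDIS_URL: "redis://redis:6379"
      PAIR_KEY: "broker-one"
```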
And I am also using Traefik, with a health check, to route traffic to the "active" instance of the container.
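The labels look roughly like this (Traefik v2 syntax; the /status path is a placeholder for whatever endpoint the flow exposes):

```yaml
# Hypothetical labels: Traefik only keeps an instance in rotation while its
# health check passes, so only the "active" instance receives traffic.
services:
  nodered-a:
    image: nodered/node-red
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.nodered.rule=Host(`nodered.example.org`)"
      - "traefik.http.services.nodered.loadbalancer.server.port=1880"
      - "traefik.http.services.nodered.loadbalancer.healthcheck.path=/status"  # flow answers 200 only when active
      - "traefik.http.services.nodered.loadbalancer.healthcheck.interval=5s"
      - "traefik.http.services.nodered.loadbalancer.healthcheck.timeout=2s"
```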
So, in short, I avoided Swarm. No GlusterFS either. The Redis cluster is used to store the shared information I need between the various containers.
More information (including the flows) can be found here: GitHub - golfvert/WIS2-GlobalBroker-Redundancy
I'm interested in the topic of Docker and env files. May I ask how many entries you have in a typical env file? I was thinking about doing something similar, but having too many entries put me off.
At the moment, I have 14 ENV variables. They are technically split into 3 different env files that I push using Ansible to the target hosts where Docker is running. Those env files are then "loaded" using Docker Compose.
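Something like this (the file names are placeholders, not my real ones):

```yaml
# Hypothetical compose excerpt: the env files pushed by Ansible are
# loaded into the container at start-up via env_file.
services:
  nodered:
    image: nodered/node-red
    env_file:
      - global.env   # settings common to every host
      - site.env     # per-site settings
      - secrets.env  # access tokens and other credentials
```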
Alright, so 5 total in the file, but with 3 variations. That's a nice number. I will perhaps end up with 20+ (a lot of devices with access tokens), so I'm considering just hardcoding instead.
Not exactly. All of them are in files; for ease of management they are logically grouped.
I have tried to apply, as much as possible, https://12factor.net/
So, no hardcoding.
That's a nice list, something I'm used to, at least in parts. It fits very well with web server projects combined with a DB etc. But I can't see how that's remotely possible with what I ended up working with: Node-RED connected to loads of devices over weird protocols, "impossible" to simulate. And we use ThingsBoard to host our services. It's only one expensive instance, and an additional dev instance would also cost a ridiculous amount of money. Sorry for derailing further, but I just have to ask: how do you deal with a situation like that?
I can't even work on Node-RED locally, because a) the connections interfere with production and b) all the inputs fail.
I don't know...
My understanding of the 12-factor app is that it is a set of good practices. If you can't follow them, for good reasons, as seems to be your case, then upgrades/changes/... are likely to be riskier.
Not having a dev instance is scary. Let's get back to the OP's request!
Well, I ended up using local volumes on each node and using GlusterFS only for the config files.
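Roughly like this on each node (a sketch; the paths and file names are placeholders):

```yaml
# Hypothetical per-node compose: /data lives in a local named volume,
# only the shared config is read from the GlusterFS mount.
services:
  nodered:
    image: nodered/node-red
    volumes:
      - nodered-data:/data                                      # local to this node
      - /mnt/gluster/nodered/settings.js:/data/settings.js:ro   # shared config, read-only

volumes:
  nodered-data: {}
```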