I need to run multiple Node-RED instances and should be able to load-balance the flows between these nodes. Is there any built-in support for this?
That is for balancing nodes within a flow. What I would like is to distribute the flows across various running Node-RED instances.
You could use that to trigger actions in flows on devices in your cluster. For example, send the actions to MQTT topics that the individual devices pick up, or to the same topic on MQTT servers on each device.
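For illustration, something along these lines (a rough sketch only; the broker hostname and topic layout are made up, and inside Node-RED itself you would simply use an mqtt-out node) could push an action onto a per-device topic that each device's flow subscribes to:

import json
import paho.mqtt.publish as publish

# Hypothetical layout: each device listens on actions/<device-id>
publish.single(
    topic="actions/device-02",
    payload=json.dumps({"command": "restart-sensor"}),
    qos=1,
    hostname="broker.example.com",
)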
If the N node-red instances are each running on its own server (e.g. s1, s2, and s3) using the standard port # (i.e. 1880), then the load balancing can be achieved by pointing the same DNS hostname at all three servers. Individual requests will randomly end up at one of the three endpoints if DNS is configured properly.
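For example, the zone for that shared hostname would just carry one A record per server (plain round-robin DNS; the name and IPs below are placeholders):

node-red.example.com.  300  IN  A  192.0.2.11
node-red.example.com.  300  IN  A  192.0.2.12
node-red.example.com.  300  IN  A  192.0.2.13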
If the N node-red instances are all running on the same server, each instance will need to be configured with its own port # (e.g. 18801, 18802, and 18803) -- then you can set up a simple nginx reverse proxy listening on the original port #1880, forwarding traffic to one of the three instances, using a configuration similar to this:
server {
    listen 1880;
    server_name nodered.example.com;

    location / {
        proxy_pass http://nr-cluster;
        proxy_http_version 1.1;
        # pass WebSocket upgrades through (the Node-RED editor uses websockets)
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

upstream nr-cluster {
    server nr1.example.com:18801;
    server nr2.example.com:18802;
    server nr3.example.com:18803 down;  # 'down' keeps this one out of the rotation
}
... unless I misunderstood your question, and you really want to pass msg
objects from one instance's flow to the next -- in which case MQTT pub/sub will probably do what you need.
Just tossing this out there. Won't load balancing be a problem if events coming in are dependent on previous data that had entered the flow? If NR1 got a piece of data and stored it in memory to add to the next piece of data, but the next piece of data went to NR2, things might be messed up.
Yup... which is why load-balanced flows can never rely on "side-effects" like using context...
each incoming msg would have to hold all interim data on the msg object itself, or read/write some external database if communication between instances is necessary. Good catch, Paul.
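As a rough sketch of the external-store idea (Python and Redis here purely to illustrate the pattern, with made-up hostnames and keys; in Node-RED you would do the equivalent in a function node or via a persistent context store):

import redis

# Any instance can read/write the shared store, so it no longer matters
# which Node-RED instance receives the next message.
store = redis.Redis(host="redis.example.com", port=6379, decode_responses=True)

def handle_reading(device_id, value):
    # Atomically fetch the previous reading (possibly written by another
    # instance) and replace it with the new one.
    previous = store.getset("last-reading:" + device_id, value)
    if previous is None:
        return None  # first reading for this device
    return float(previous) + float(value)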
Need clarity on the following questions:
- Load balancing is achieved by using an MQTT server which needs to be up and running at a cloud location. Does each node need to subscribe to the shared topic in order to get the workflow?
- If one of the vanilla Node-RED instances goes down, the flow gets directed to the Node-RED instances which are online. Once the offline node comes back online, there is no automatic synchronization of workflows with the orchestrator. Does the complete flow need to be redeployed from the DNR editor in order to achieve synchronization?
Which suggestion are you responding to?
The MQTT server would not need to be in the cloud, but it would need to be accessible to all servers. However, mosquitto (for example) has the concept of bridging, where multiple brokers can be synchronised, so you could have a broker on each server if you wanted.
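As a rough sketch (the central hostname and topic pattern are made up and would need adjusting), the bridge section of mosquitto.conf on each local broker looks something like:

connection bridge-to-central
address central-broker.example.com:1883
topic flows/# both 0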
Define what you mean by "synchronization of workflows".
What is DNR?
Hi Colin,
DNR refers to Distributed Node-RED. Please find the link below.
I don't believe you had mentioned that you were using that node. I haven't used it, so I will have to leave it to others to comment. I think the fact that you are running under that strategy may very well affect the best way to do the load balancing.