What spec server would I need for my business?

I am planning to use Node-RED Docker containers for production.
What spec of server should I expect to need to run 100 Node-RED containers?

Hmm, that is a very open-ended question I'm afraid.

The answer would depend on a lot of factors: what else do you need running? What level of resilience do you need? What kind of networking will be used? How much data will the instances be handling? How complex will the flows be?

If it were me, I'd probably start with a test system to build and test some example flows. Then I'd build an evaluation server to test a number of instances running under a synthetic load. From that I would architect a solution around the discovered resource usage and the level of resilience required.


And as an extension of this from Julian:

If you are planning on running that number of containers, then you should be doing it as a cluster, probably using Kubernetes or some other manager/orchestrator, so you would further need to expand on the answers above to take that into consideration.

It could be something as simple as 4 or 5 Raspberry Pi 4 Compute Modules, for instance. Or, if you have large data streams, storage and the like, it may be better done through a VM cluster to enable snapshotting and instant recovery/fallback.

What is the uptime requirement?
Will this be running mission-critical code/processes that need to be recovered within a defined timeframe?

Craig

Firstly, you need to consider the workload of your Node-RED flows: are they dealing with large messages, is there a lot of external I/O, and do you have any nodes doing CPU-intensive work?
A basic flow that's just handling something like regular sensor data or simple messages should run fine with 256MB of RAM (you might get away with less, but I would usually stick to 256).
Then you will want to use something like k8s; 100 instances is quite a lot to put on one machine. If you use memory as your main sizing factor, you would be looking at about 30 instances on an 8GB machine, so 3 or 4 servers would be needed. This also gives you a level of spare capacity, with some room to spin up new containers to replace dead ones.
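To make that concrete, here is a rough back-of-envelope sizing sketch. The 256MB per-instance budget and the 1GB reserved for the OS and container runtime are assumptions to adjust once you have load-tested your own flows:

```javascript
// Rough capacity estimate: how many 256 MB Node-RED containers fit on an 8 GB host.
// Both the per-instance figure and the host overhead are assumptions to tune
// after measuring your real workload.
const totalInstances = 100;
const memPerInstanceMB = 256;   // assumed per-container memory budget
const hostMemMB = 8 * 1024;     // 8 GB host
const hostOverheadMB = 1024;    // assumed OS + container runtime overhead

const instancesPerHost = Math.floor((hostMemMB - hostOverheadMB) / memPerInstanceMB);
const hostsNeeded = Math.ceil(totalInstances / instancesPerHost);

console.log(`${instancesPerHost} instances per host, ${hostsNeeded} hosts for ${totalInstances} instances`);
// -> 28 instances per host, 4 hosts for 100 instances
```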
You also need to consider the storage for your flows and config. By default Node-RED uses the filesystem, but in a container that filesystem is ephemeral. You can mount it externally from the container so that it persists after a restart, or there are plugins to use a different storage platform such as a database. With 100 containers across multiple machines, I would strongly suggest you use a database for storage.
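As a minimal sketch of the database option: Node-RED's settings file can point the runtime at a pluggable storage module instead of the local filesystem. The required package name below is a hypothetical placeholder; substitute whichever storage plugin you actually choose.

```javascript
// settings.js (sketch, not a complete file) -- persist flows and credentials
// outside the container so they survive restarts and rescheduling.
module.exports = {
    uiPort: process.env.PORT || 1880,

    // Node-RED's Storage API lets you replace the default filesystem storage.
    // "node-red-storage-plugin" is a hypothetical placeholder package name.
    storageModule: require("node-red-storage-plugin"),

    // Set the credential encryption key explicitly so every container in the
    // fleet can decrypt the same stored credentials.
    credentialSecret: process.env.NODE_RED_CREDENTIAL_SECRET
};
```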
Finally, there's the management of the individual containers to consider. Do they all have the same username and password? Do you need to change the passwords? This is easy enough to do manually for a small number, but with 100 to manage you will want some sort of centralised management; again, there are plugins out there for auth.
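For the per-instance admin login, Node-RED's adminAuth setting takes a bcrypt-hashed password (you can generate one with the `node-red admin hash-pw` command). A minimal sketch, assuming you inject the username and hash into each container via environment variables or templated config:

```javascript
// settings.js (sketch) -- secure the editor/admin API for each instance.
// The hash below is a placeholder; generate a real one with `node-red admin hash-pw`
// and inject it per container rather than editing 100 files by hand.
module.exports = {
    adminAuth: {
        type: "credentials",
        users: [{
            username: process.env.NR_ADMIN_USER || "admin",
            password: process.env.NR_ADMIN_HASH || "$2b$08$replace-with-a-bcrypt-hash",
            permissions: "*"
        }]
    }
};
```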

There are also a number of companies offering hosted Node-RED instances or platforms to manage some of this. flowforge.com is one of these; we are opening up our hosted platform shortly, and we also have an open-source Community Edition that you can self-manage.
Full disclosure: I work for FlowForge.


OK, my bad. Each instance would have around 50 nodes, including sensors and switches... mainly switches though. I would be using AWS MQTT and the Node-RED dashboard. I am planning for each home to have a separate container.
Also, I am planning to run it on AWS EC2 instances for higher availability.
