High throughput examples (for arguing for the platform)

Just remember though, as bakman2 says, it is as much about processing as throughput. Since Node.js is largely single-threaded, you need to be mindful of that, though you can use worker threads if needed, and careful use of async processing helps keep things running smoothly. You also have to bear in mind the overheads of the OS. As mentioned, Node-RED and Node.js are not real-time systems, so anything requiring accurate (ms or below) timing should normally be done elsewhere.
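As a hedged illustration of the worker-thread escape hatch (not something from this thread - the job and numbers are made up), here is a minimal sketch of offloading CPU-heavy work off the main event loop using Node.js's built-in `worker_threads` module. In Node-RED you would more likely wrap this in a Function node or a custom node:

```javascript
// Sketch: offload a CPU-heavy calculation to a worker thread so the main
// event loop (and Node-RED's message handling) stays responsive.
// Assumes Node.js 12+ with the built-in worker_threads module.
const { Worker } = require('worker_threads');

function runHeavyJob(input) {
    return new Promise((resolve, reject) => {
        // Inline worker code via eval:true; a real setup would use a separate file.
        const worker = new Worker(`
            const { parentPort, workerData } = require('worker_threads');
            let sum = 0;
            for (let i = 0; i < workerData; i++) sum += Math.sqrt(i); // stand-in for real work
            parentPort.postMessage(sum);
        `, { eval: true, workerData: input });
        worker.once('message', resolve);
        worker.once('error', reject);
    });
}

// The event loop keeps servicing timers, I/O and flows while the worker runs.
runHeavyJob(1e8).then(result => console.log('worker result:', result));
```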

Node.js uses a garbage collection process that can put a real dent in a critical process, so getting a performant Node.js-based system is about ensuring that the whole infrastructure is in balance. That is very rarely an issue for home automation unless you overload the device you are running on, but it can be harder for SME-level infrastructure (not so much at enterprise level, since you would most likely have access to larger virtual servers and could more easily tweak them).

You might also want to think ahead about scalability. Will your system need to scale up? If so, you need to make some plans early on so that your architecture will support it. Scaling up the system Node-RED runs on is easy enough to get your head around but if you want to scale "out" (e.g. clustering), that is a harder design. One thing you can do is to think about where your flows might logically be split so that they could be pushed onto separate instances of Node-RED if needed. That might save you having to invest in clustering designs.
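A common pattern for that kind of split is to decouple the flow sections with a broker so that each part can later move to its own Node-RED instance. In Node-RED itself you would simply use MQTT-out/MQTT-in nodes; as a rough sketch of the same idea in plain Node.js (broker URL and topic names are placeholders, not from this thread):

```javascript
// Sketch: decoupling two logical flow sections via MQTT so they can later
// run as separate Node-RED instances. Requires `npm install mqtt`.
const mqtt = require('mqtt');
const client = mqtt.connect('mqtt://localhost:1883');

// "Instance A": publish readings instead of wiring nodes directly together.
setInterval(() => {
    client.publish('site/sensors/temp', JSON.stringify({ value: 21.5, ts: Date.now() }));
}, 5000);

// "Instance B": subscribe and process; this half can move to another
// Node-RED instance without touching the publisher at all.
client.subscribe('site/sensors/temp');
client.on('message', (topic, payload) => {
    const msg = JSON.parse(payload.toString());
    console.log(topic, msg.value);
});
```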


@TotallyInformation : Fully understand and totally agree. We are actually implementing all of the above for testing. First of all, we plan on primarily running in Azure, so we hope scaling of the system/service should be fairly easy (we do it all the time with our other web apps). We are testing different scaling/load balancing/clustering scenarios right now, including PM2. I'm not doing that part myself, so unfortunately I cannot provide info on findings :slight_smile: We also plan on setting up different Node-RED instances for specific tasks that have specific demands, i.e. "close-to-real-time" is kept well isolated from "common" flows, and "heavy/long-running" flows also have their own instance... What that mix will look like will surely vary quite a lot. Your input is very valuable as it provides context and confirmation for testing and for arguing for the platform. It is greatly appreciated, thank you.


We have a system running on a customer site reading 131 air quality sensors with 15 data channels each. That is close to 2,000 data points. We trigger a poll every 10 minutes, starting with obtaining a token from an API, then reading the 131 sensors. There is a 3-second delay between each sensor read to avoid overwhelming the API and NR. The data from the sensors is then evaluated in NR for quality and sent to our proprietary middleware through another API, where it is exposed as BACnet objects for a Building Management System.
Some lessons learned: we had to introduce the delay between the sensor reads, and I had to increase the heap size for Node.js to be able to cope with the amount of data.
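For anyone wanting to copy the pacing trick, here is a minimal sketch of that kind of staggered poll (the sensor list and `readSensor()` are stand-ins for the real API calls, not from the post above). The heap increase is typically done by starting Node-RED with something like `node-red --max-old-space-size=512` - check the Node-RED docs for your install:

```javascript
// Sketch: poll a list of sensors sequentially with a fixed gap between
// reads so neither the remote API nor Node-RED is flooded.
// `sensors` and `readSensor()` are placeholders for the real setup.
const sensors = Array.from({ length: 131 }, (_, i) => `sensor-${i + 1}`);
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function readSensor(id) {
    // Placeholder for the real HTTP/API read returning 15 channels.
    return { id, channels: new Array(15).fill(0) };
}

async function pollAll() {
    for (const id of sensors) {
        const data = await readSensor(id);
        // ...evaluate quality and forward to middleware here...
        await delay(3000); // 3 s gap between sensors, as described above
    }
}

// Trigger every 10 minutes (an inject node would do this in Node-RED);
// 131 sensors x 3 s is about 6.5 minutes, so each pass fits in the window.
setInterval(pollAll, 10 * 60 * 1000);
```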


Hi, I am using NR in an industrial environment.
In some use cases I have to read >50 datapoints every 50 milliseconds, manipulate the data and send it to another application. All in all, that is nearly 2,000 points per second - so about 7.2 million actions per hour. And that's not all, because there is a lot of normal stuff running in the background - dashboard, notifications,... And if you want, you can do much more.
My flow files are about 7 MB in size. I'm running NR on a two-core i3 and use at most about 15% of the CPU. So NR can handle really big things. :slight_smile:
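One way to sanity-check that kind of headroom (a hedged sketch, not something from the post above) is to watch the Node.js event-loop delay; if it stays low while the 50 ms cycle runs, the instance is coping:

```javascript
// Sketch: monitor event-loop delay to check that a busy Node-RED instance
// still has headroom. Uses the built-in perf_hooks module (Node 12+).
const { monitorEventLoopDelay } = require('perf_hooks');

const histogram = monitorEventLoopDelay({ resolution: 20 });
histogram.enable();

setInterval(() => {
    // Histogram values are reported in nanoseconds; convert to milliseconds.
    console.log(
        `event loop delay mean=${(histogram.mean / 1e6).toFixed(2)}ms`,
        `max=${(histogram.max / 1e6).toFixed(2)}ms`
    );
    histogram.reset();
}, 10000);
```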


Nice example, thanks for sharing that.

I have a rather complicated example that I have actually benchmarked to the limits, testing out what can run on different hardware. Since for all intents and purposes Node-RED is single-threaded, I tested to the point where a single CPU core was maxed out; Node-RED was usually running and responding fine, but I backed off a bit, assuming that running at max was not a good idea in production. Obviously, using containerization can more fully utilize server hardware.

The application is an Oil and Gas well simulator: it simulates the physical well, pressure, equipment, and typical automation hardware (RTU). The purpose is to aid in the development of more advanced control and optimization logic (Python, ML, AI, etc.). The simulator (tick) typically advances state every second, but instead of running in real time, you can speed it up to get a day's worth of data in a few minutes, or alternatively you can run as many simulators as the CPU can handle.
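A hedged sketch of that idea of decoupling simulation time from wall-clock time (the names and speed factor here are illustrative, not taken from the actual simulator):

```javascript
// Sketch: a simulator "tick" that normally advances state once per second
// but can be accelerated to compress a day of data into minutes.
// `advanceState` and `speedFactor` are illustrative placeholders.
let simTime = 0;          // simulated seconds elapsed
const speedFactor = 600;  // 600x => a day of sim time in 2.4 minutes

function advanceState(t) {
    // Placeholder for the well physics: pressure, equipment, RTU state...
    return { t, pressure: 100 + 10 * Math.sin(t / 3600) };
}

setInterval(() => {
    // Each wall-clock second advances `speedFactor` simulated seconds.
    for (let i = 0; i < speedFactor; i++) {
        simTime += 1;
        const state = advanceState(simTime);
        // ...publish `state` via MQTT every 5 simulated seconds, etc...
    }
}, 1000);
```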

Here is a screenshot of the well's current data UI

The configuration screen where you can view and adjust how each simulator operates

The main logic doesn't look all that complex, but it typically runs once per second for each simulator.

Alright, now to the benchmark:
Each simulator instance has:

  • 113 Global Variables
  • 30 Functions
  • 747 lines of JavaScript
  • 100 tags published via MQTT or some other protocol every 5 seconds

How many simulators can comfortably run on a single core of different CPUs:

  • Raspberry Pi 3 (Broadcom BCM2837) - 70 simulators
  • Raspberry Pi 4 (Broadcom BCM2711 Cortex-A72) - 200 Simulators
  • Intel i7-8650U (laptop) - 750 Simulators
  • Intel XEON E5-2650 (workstation) - 1000 Simulators
    NOTE: This particular XEON is 10 years old. The i7 is 5 years old. I am sure the benchmark would be several times better on the latest hardware.

Some wrap-up totals; let's just pick the Xeon processor for more wow factor...

  • 20,000 MQTT messages published/second (1,000 simulators × 100 tags / 5 s)
  • 747,000 lines of JavaScript crunched/second (1,000 × 747 lines)
  • Approx. 339,000 global variable reads/writes per second

I do use this application every single day, adding more functionality all the time. I have about 4 tiny VMs running it (not at max) with a few dozen simulators each to aid in SCADA Host system development. It runs for months at a time without a single crash.


And I keep forgetting... Rapanui run their entire T-shirt factory operation using Node-RED - Rapanui Clothing — The MagPi magazine


Nice one @sniicker. That's a cool example that shows the throughput potential well. Thanks for sharing it. :+1:

@jhottell Seriously... That is probably one of the most impressive examples I have ever seen. Also awesome with the benchmarking. Kudos and thanks for sharing. :muscle: :clap:

@dceejay Another great example. Thank you again :+1:

Just wanted to say that all of these are very impressive examples! Congrats to all of you involved in these projects.


Hi, one of our customers is using one Node-RED instance (Windows 10 PC) which connects to some 20 machine controls (PLCs) via OPC UA, Siemens S7, and some other protocols:

  • average read cycle per machine: 500 ms
  • average number of items per read: 10
    (so access to 20 machine tools, each reading PLC data every 500 ms - roughly 400 item reads per second in total)
