High throughput examples (for arguing for the platform)


Does anyone have examples or use cases where Node-RED is handling a lot of transactions/messages? I just need it as "ammunition", as conceptual proof that the platform can handle a higher load. I fully understand that there is a huge difference in how different scenarios would affect the capability here, but things like:

We log all cars going in and out of our mall garage, and that's about 400 transactions an hour


We handle 300,000 transactions from our production tracking system every day to keep track of individual parts for quality control

That would be super helpful

Thanks in advance :slightly_smiling_face:


Welcome @werkstrom to the forum

Your workload is a transaction every 9 seconds on average.

What do you want the flow to do?

What the flow does will greatly influence the performance you require.

A simple flow running every 9 seconds and something very complex calling an API and reading from and writing to a DB are two very different things.
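As a concrete sketch of the "simple flow" end of that spectrum, the garage-logging case from the original question could be a single function node. The payload field names (plate, direction) are illustrative assumptions, not something from the thread:

```javascript
// 400 transactions/hour works out to one every 9 seconds on average:
const intervalSec = 3600 / 400; // = 9

// Sketch of a trivial Node-RED function-node body for the garage case.
// The payload fields (plate, direction) are assumptions for illustration.
function logCar(msg) {
    msg.payload = {
        plate: msg.payload.plate,         // e.g. "ABC123"
        direction: msg.payload.direction, // "in" or "out"
        ts: Date.now()                    // timestamp added by the flow
    };
    return msg;
}

// Outside Node-RED you can exercise the same logic directly:
const out = logCar({ payload: { plate: "ABC123", direction: "in" } });
console.log(intervalSec, out.payload.direction); // prints: 9 in
```

A flow at that rate leaves the runtime idle almost all of the time; the work per message, not the message count, is what matters.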

But the bottom line is that Node-RED can handle very complex tasks with ease.

Also bear in mind that your PC needs to be up to the task as well. You can perform a lot with a Pi and Node-RED (not saying this is the solution for you).

Hi @ScheepersJohan and thanks for answering my call.

To clarify a bit: we do use Node-RED and have been for some time in different scenarios for POC use. Right now we are, for example, experimenting with setting up load-balanced instances in Azure Web Apps, but we also have several other cases. The reason for asking is to see if anyone has examples of their personal usage with high throughput, to give me some ammunition in comparison to other solutions we have. For example, handling road tolls for a couple of million passes a day, or large invoice flows, are things we do using more traditional ETL tools or integration platforms, and in some cases the flexibility and adaptability of Node-RED could be of use. I just want to be able to "bite the head off the snake" when others ask about scalability and performance, preferably by being able to give examples, and that's where I was hoping for some help/insight :slight_smile:

Here is a flow that writes to a DB every few minutes - that is very light.

But recalling from the DB and mapping it can be a bit heavier; this is a few hundred points read from a DB and plotted on a map.

This is running on the smallest AWS EC2 instance.



The use cases range from IoT platforms to industrial applications, and from home automation to animal tracking.

Search a bit on the forum and you will find thousands of different examples.


Hi again,

Yes, I get that too :slight_smile: But I'm looking for user stories and experiences, not flows that do those things. That said, I very much appreciate your input and help. Thank you.

Quite a few folk around here are using it for video handling, so that's fairly intensive for small devices.

I've certainly used it for simple mqtt transformations at low thousands per second on a decent laptop. By that I mean just in, simple function and back out. No database lookups or other async tasks.
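The "just in, simple function and back out" pattern described above can be sketched and timed outside Node-RED to show that the transformation step itself is cheap. The "temp;humidity" payload format here is an illustrative assumption:

```javascript
// Sketch of a simple MQTT-style transformation: parse a raw sensor
// string into a structured payload, as a function-node body would.
function transform(msg) {
    const [temp, hum] = msg.payload.split(";").map(Number);
    msg.payload = { temp, hum };
    return msg;
}

// Rough timing of the pure transformation (excludes broker I/O,
// which in practice dominates at high rates):
const N = 100000;
const start = Date.now();
for (let i = 0; i < N; i++) transform({ payload: "23.5;45" });
const elapsed = Math.max((Date.now() - start) / 1000, 0.001);
console.log(`${Math.round(N / elapsed)} transforms/sec`); // rate varies by machine
```

On any modern machine the function itself sustains far more than the "low thousands per second" figure; the MQTT broker and network round-trips are where the real cost sits.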

I know we had some large financial organisations trying extreme scalability with large iron and things in parallel getting way up there, but of course a lot depends on how you architect the solution.


Thanks @dceejay, that's precisely the type of very high-level examples I would need. I fully understand that the type of workload you put in your flow will dramatically influence the load, but high-volume loads and complex logic generally pose a problem anywhere :wink:. The important thing is to know it is indeed possible to scale, not necessarily to describe how. Thank you again for your kind answer.

One of my flows operates every 30 minutes, grabbing data from over 220 robots (HTTP), converting the data from HTML to topics and payloads (for MQTT) and to XML (for an external system), and writing the data to a database (for historical access/reporting). That's over 10,000 ops per day. And there is room to comfortably speed this up to run every 5 minutes, which would be over 60,000 ops per day, building 60,000 XML files, 60,000 database writes, and LOTS of MQTT messages.
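The back-of-envelope arithmetic behind those daily figures, using the 220-robot count from the post:

```javascript
// Daily operation counts for the robot-polling flow at two intervals.
const robots = 220;
const opsAt30min = robots * (24 * 60) / 30; // 48 runs/day
const opsAt5min  = robots * (24 * 60) / 5;  // 288 runs/day
console.log(opsAt30min, opsAt5min); // prints: 10560 63360
```

Both match the "over 10000" and "over 60000" figures quoted above.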


I believe that our case is a good example of using Node-RED in industrial applications with high throughput.

We just took some acceleration data at 800Hz from four ultra-low-power wireless vibration sensors in parallel. Here is the data review. We do this many times a day with sampling rates up to 25.6kHz.


Awesome @Steve-Mcl. That is actually a very interesting scenario with direct application to some solutions built with "legacy" (C#) code here today. Thanks so much for sharing. :slightly_smiling_face:

While way above my pay grade to understand fully, that is really cool @davidz. A quick follow-up to understand: do you query the devices and get a batch of measurement data points back at a relatively infrequent interval (several seconds at least), or do you read the current (single) data value in real time and sample it at such a high frequency from Node-RED (that would be seriously impressive)? Or do you do it in some other (probably way smarter) way? Thanks a lot, :blush:

A good question. The above GUI is for data query. We do lots of vibration DAQs and analyses daily.
The GUI for wireless vibration sensor data acquisition is here. We just finished another round of DAQ at 400Hz. We did higher-frequency DAQs earlier, so we are going down in frequency to test some new features for future development.

The above DAQ is done manually. We have a timer setup that takes data automatically multiple times a day too.


I'm using NR for home automation and have a bunch of sensors reporting every 1 minute. NR enriches data and redistributes data back out to MQTT and outputs to InfluxDB.

Probably my heaviest processing is managing the data from the Drayton Wiser heating system. Because this doesn't have a published API, I have to get the full XML from the controller every minute, analyse what has changed and then redistribute that data to MQTT and InfluxDB.

I know that we've also seen people trying to do processing at the ms level. That can have mixed results due to the nature of Node.js and Linux - all the possible interruptions make accurate timing somewhat variable. But I think that we've certainly seen processing of thousands of messages per second even on single-board computers.

I think that the biggest constraints we see are memory and IO.


I had a task to integrate a virtual cash register for printing receipts into an existing payment system. It was also required to send notifications to phone and email. I cannot say that this is a high load - it performs about 1,000 operations per day. I couldn't find anything easier. Undoubtedly it could have been done in plain Node.js, and perhaps there are no other options. I am ashamed to admit that I first wrote the integration, and only then began to look more closely at Node-RED.


Here's another small example:

but high volume loads and complex logic generally pose a problem anywhere

I think that is the key: what complex logic/operations are you planning to perform? It can be bogged down by all kinds of factors, but they can be optimized as well. Node.js (what Node-RED runs on) is very fast by itself.

I don't think it is hard to emulate real-world data, launch a VM and run it against node-red in some test setup and measure/collect any findings. It is the only way to determine if it is the right tool for the job.
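A minimal load-test along those lines could look like this. The `/ingest` path and `localhost:1880` address are assumptions; point it at whatever endpoint your own test flow exposes (requires Node 18+ for the global `fetch`):

```javascript
// Sketch: fire n JSON POSTs at a Node-RED http-in node and measure
// the achieved wall-clock throughput.
async function loadTest(url, n) {
    const start = Date.now();
    for (let i = 0; i < n; i++) {
        await fetch(url, {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify({ seq: i, ts: Date.now() })
        });
    }
    return n / ((Date.now() - start) / 1000); // achieved requests/sec
}

// Example run (commented out so the sketch runs without a server):
// loadTest("http://localhost:1880/ingest", 1000)
//     .then(rps => console.log(`${Math.round(rps)} req/s`));
```

Firing requests sequentially understates what the runtime can sustain; a more realistic test would run several concurrent batches, but this keeps the sketch short.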


That's really interesting @TotallyInformation. Handling 1000 msgs/sec would add up to several million an hour. And if the case is really simple - like reading, complementing, and adding a record to a database (a log value or a license plate, say) - it would be feasible. Thanks for the info.

@RootShell-coder That is also a very good real world example that shows reliability. Thank you so much for sharing.

@bakman2 Totally agree, and we are doing that. However, the power of being able to point to others cannot be underestimated. :slight_smile: But you are absolutely right when it comes to actually validating different real-world use cases.
