I have a flow which sends an MQTT payload to GCP IoT Core. I need to run performance testing, so I wanted to check the feasibility: what is the maximum number of flows Node-RED can support? (I need 30,000 flows.)
My plan is to duplicate the flow with different device properties and send data to all devices at the same time using the MQTT and IoT telemetry send nodes.
In this case what exactly do you mean by flows?
30,000 sounds like there must be a different/better way to do things
The image below shows 2 flows which send a payload to 2 devices whenever they receive data from a single MQTT topic, /ZZIRTEST036. I want to multiply these 2 flows to 30,000 flows.
What is the end node? Can it be dynamically addressed? If so, you can write 1 flow that sends to 1 or 30,000 devices.
My end node is the GCP-IoT telemetry send node, where the device ID has to be dynamic.
If I use a function to loop over the devices dynamically, only one device gets the payload at a time; then I have to send the payload to the next device in the list, and so on.
But since I want to send the data to more than one device at the same time, is it possible to achieve that with one flow?
Node.js is single-threaded. Even if you wire them in parallel, they will still be triggered one at a time.
The only benefit I can see is if the GCP-IoT telemetry send node is a blocking node; then perhaps yes, wiring in parallel might help.
However, by the time you have configured 30,000 of these nodes, you may be 1 or 2 years older.
What does this "GCP-IoT telemetry" node do under the hood? Send an MQTT message? Call a REST endpoint? Store something in a database? Call off to the cloud? Etc.
Ha! I'm actually using the /flow API to automatically import the flows into my Node-RED instance, dynamically changing the device ID with a Python script. So creating flows automatically with different device IDs will not be an issue.
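For anyone curious, here is a minimal sketch of that kind of templating in Python. The node types and properties below are placeholders rather than the real node definitions, and the actual POST to the admin API is left as a comment:

```python
import copy
import uuid

# Hypothetical template for one simulated device's flow. The "type" values
# and properties are illustrative placeholders, not real node definitions.
FLOW_TEMPLATE = {
    "label": "device-{device_id}",
    "nodes": [
        {"id": None, "type": "mqtt in", "topic": "/ZZIRTEST036"},
        {"id": None, "type": "gcp-iot telemetry send", "deviceId": "{device_id}"},
    ],
}

def make_flow(device_id):
    """Build one flow definition with a unique device ID substituted in."""
    flow = copy.deepcopy(FLOW_TEMPLATE)
    flow["label"] = flow["label"].format(device_id=device_id)
    for node in flow["nodes"]:
        node["id"] = uuid.uuid4().hex[:16]  # every node needs a unique id
        if "deviceId" in node:
            node["deviceId"] = node["deviceId"].format(device_id=device_id)
    return flow

flows = [make_flow(f"ZZIRTEST{i:05d}") for i in range(30000)]
print(len(flows))                        # 30000
print(flows[0]["nodes"][1]["deviceId"])  # ZZIRTEST00000

# Each generated flow could then be imported through the admin API, e.g.:
# requests.post("http://localhost:1880/flow", json=flow)
```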
The GCP-IoT telemetry send node sends the MQTT message to GCP IoT Core devices.
I tested with 30 flows: whenever a message is sent by the MQTT out node, it is captured by the MQTT in node and rerouted to all 30 devices, and I can see the data in our BigQuery tables without any data loss. But I'm not sure I can expect the same result when I scale from 30 devices to 30k devices.
Since it is single-threaded, does that mean I will not be able to trigger multiple flows at the same time?
In a sense, yes. Although Node.js is single-threaded, it is very capable and highly asynchronous.
If you consider that the transmission of MQTT data is also (kinda) single-threaded (since the publish packets are transmitted one at a time, serially), then you can see you have other bottlenecks apart from Node.js.
My point is, don't try to optimise until you know your bottlenecks.
Let's put it this way: if you loop over an array of 30,000 vs 30,000 copies of those 5 nodes, and find the difference in time from msg 1 to msg 30,000 is negligible, then why over-complicate?
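To make that concrete, a timing sketch like this (with the real publish stubbed out; everything here is illustrative, and `send_telemetry` is a placeholder for your actual GCP-IoT call) would show the spread between the first and last message in a serial loop:

```python
import time

def send_telemetry(device_id, payload):
    """Stub for the real publish call; replace with the actual MQTT/GCP send."""
    pass  # network I/O would happen here

def measure_fanout(n_devices, payload=b"test"):
    """Time a serial fan-out of one payload to n_devices simulated devices."""
    start = time.perf_counter()
    timestamps = []
    for i in range(n_devices):
        send_telemetry(f"device-{i}", payload)
        timestamps.append(time.perf_counter() - start)
    # The spread from first to last delivery is the number to look at.
    return timestamps[-1] - timestamps[0]

spread = measure_fanout(30000)
print(f"msg 1 -> msg 30000 spread: {spread:.6f}s")
```

With a stub the spread is tiny; swapping in the real send shows whether the serial loop, the network, or the broker is the actual bottleneck.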
Okay, I will try looping over an array and will analyze the difference in time between the data received by device 1 and device 1000 initially.
Can you please elaborate a bit more? I still don't quite understand what you are trying to achieve:
- is GCP IoT Core an MQTT broker?
- when you say "devices", are you talking about MQTT clients, brokers, or both?
In the context of your question: are you trying to monitor the performance of the brokers, the MQTT clients, or both?
GCP IoT Core is a service provided by Google Cloud for connecting and managing IoT devices and their data.
My testing requirements: we have IoT devices which send MQTT messages to GCP IoT Core. We are trying to check the performance of our portal and GCP instance when 30k devices are continuously sending data.
So I'm trying to simulate the IoT devices in Node-RED, using flows which will send the MQTT messages to our GCP IoT Core instance. A flow in Node-RED will act as one IoT device sending MQTT data to the GCP cloud, instead of having 30,000 physical IoT devices for testing, which is not a feasible setup. So I wanted to check if this can be achieved with a Node-RED simulation, and what issues I might face.
As you are not answering any of my questions, I will assume IoT Core is a broker (with a fancy commercial name) and what you call "devices" are simple MQTT clients. This would be a classic IoT mini-deployment.
30k clients, even if each of them is publishing to its own topic, is nothing in terms of load for an MQTT broker; a Raspberry Pi could handle your devices. I don't believe it's worth dedicating even a minute to testing the Google backend.
That said, if you still want to simulate 30k chatty MQTT clients publishing to your Google backend, you do not need Node-RED for that: grab a simple script and start 30k MQTT clients… any laptop manufactured in the 21st century would do the job.
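A rough sketch of such a script, assuming the paho-mqtt package and GCP IoT Core's `/devices/DEVICE_ID/events` telemetry topic (TLS and JWT authentication are omitted for brevity, and the constructor shown is the paho 1.x style):

```python
def make_client_ids(n, prefix="sim-device-"):
    """Generate n unique client IDs for the simulated devices."""
    return [f"{prefix}{i:05d}" for i in range(n)]

def run_simulation(broker_host, broker_port=8883, n_clients=30000):
    # Assumes the paho-mqtt package is installed. A real GCP IoT Core
    # connection also needs TLS and a JWT password, omitted here.
    import paho.mqtt.client as mqtt
    clients = []
    for client_id in make_client_ids(n_clients):
        # paho 1.x constructor; paho 2.x additionally takes a
        # CallbackAPIVersion as its first argument.
        c = mqtt.Client(client_id=client_id)
        c.connect(broker_host, broker_port)
        c.loop_start()  # one background network thread per client
        clients.append((client_id, c))
    for client_id, c in clients:
        c.publish(f"/devices/{client_id}/events", b"test-payload")

if __name__ == "__main__":
    run_simulation("mqtt.googleapis.com")
```

One practical note: 30k concurrent TCP connections from a single machine will hit the default open-file-descriptor limit, so you would likely need to raise it (e.g. `ulimit -n`) or shard the clients across a few processes or machines.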
Yes, IoT Core is a broker, and I have to simulate 30k MQTT clients that send data at the same time (which I call "devices" here).
We are exploring options on how to proceed, as we are new to this kind of edge-device simulation. Thanks for your suggestion.
I must admit that, if it were my project, I would want at least half a dozen devices running 1 or more instances of Node-RED. Trying to do performance testing of a system with only a single input isn't a good starting point.
Yes I agree. We are also thinking of using multiple instances.
You made me remember some work I did years ago for Netscape when I did some large-scale data import performance tests (50 million identity entries). That was a fun little project.