How fast is Node-RED?

Hi Folks,
I am learning Node-RED and have already implemented it together with InfluxDB and Grafana for basic data acquisition purposes. I would also like to build a vehicle data logger, which requires high-resolution data and faster acquisition speeds (~300 sps) from the CAN bus, various acceleration sensors, etc.
Now the question is: how fast can Node-RED be? I am particularly interested in running it on a Raspberry Pi. Is it also suitable for bandwidth-heavy applications, or only for "low sample rate" IoT?

Thanks
Adam

As usual, it all depends. Node-RED can be fast, but it depends on the amount of processing required, e.g. database/external access, and in particular on the hardware it is running on. Specifically, Pis before version 4 have relatively slow I/O. I think they would struggle with the data rates you require.

There is a nice thread where the speed of Node-RED and many things around it have been discussed. You may find some answers in it.

I have tried publishing 5 floating-point values at 300 sps to InfluxDB and the system started to become unreliable. I am using a Raspberry Pi 4 4GB.

300 (sps? what is that) messages per second (?) plus I/O to Influx on an SD card: you can wait for the moment it wears out, I give it a week. Instead of formatting each message and inserting it into Influx, stream the data to a logfile and, after your session is finished, parse it and batch-insert it into Influx.
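As a rough sketch of what that could look like (the timestamp and the file path are placeholders, not something from your flow): a function node in front of a file-out node that appends one line per sample, so the only work done per message is string concatenation.

// Function node: turn the raw comma-separated sample into one log line.
// A file-out node behind it appends msg.payload to e.g. /home/pi/session.log.
msg.payload = Date.now() + "," + msg.payload.trim();
return msg;

After the session you can parse that file at leisure and batch the points into Influx.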

Why not use dedicated CAN bus software like python-can?

By sps I mean samples per second, from all 5 sensors.
You mean it will wear out the SD card?
Actually, I was also thinking of logging the high-resolution data to something like HDF5 and then dumping it, but I have not decided where to start or what the correct way to do it is.
I have been using Python and SocketCAN for a long time, but my intention is also to learn new ways and make more fun projects.

I can imagine that 300 sps is no issue for the Pi as long as you don't write it to a (relatively slow) medium that demands back-and-forth I/O over a bus. If you intend to perform parsing/transforming actions on each message you will find bottlenecks along the way. I am curious how many a Pi could handle without writing.

I have a dashboard which visualises incoming data from a number of remote monitors, doing maths on 3 parts of the messages (at least 20 MQTT messages a second) and parsing the rest using function nodes and a smooth node, and it shows no sign of struggling.

The easiest way to check is to set up a few inject nodes and speed them up until it breaks.
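As a quick illustration (a sketch only, the context key names are made up), a function node like this dropped into the flow shows roughly how many messages per second are getting through, via the node status:

// Function node: count messages and display the rate once a second.
let count = (context.get("count") || 0) + 1;
let last = context.get("last") || Date.now();

const now = Date.now();
if (now - last >= 1000) {
    node.status({ fill: "blue", shape: "dot", text: count + " msg/s" });
    count = 0;
    last = now;
}

context.set("count", count);
context.set("last", last);

return msg; // pass the message through unchanged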

With 300 messages per second from 5 sensors, CPU usage goes up to 100%, and that is with only basic parsing and splitting.

Is that 300 msgs per sensor (or 1500 msg/sec) or a total of 300 msgs/sec from the 5 sensors?

Well, the count of readings per second doesn't matter on its own. (OK, it does, but it is not the only thing to look at or blame.) If you are hitting limits already (CPU at 100%) then you'll need to figure out the bottleneck and try to optimize it. If you are looking for help with that, you'll need to share a sample of your data and a bit of the flow with the data manipulations, and then we can probably say something reasonable or give advice about it.

The 300 messages per second from the 5 sensors are inserted in a single query.

Yes, you are right. It should not be a problem. The amount of data is not that big.

This is what my flow looks like (ignore the Route messages node, it is for debugging purposes only):

"Parse message" node:

var outMsg = {};
var values = msg.payload.trim().split(",");

outMsg.rate = values[0];
outMsg.xaxis = values[1];
outMsg.yaxis = values[2];
outMsg.zaxis = values[3];
outMsg.temp = values[4];

msg.topic="sensor-data";


if (outMsg.rate!==undefined) {
    outMsg.rate = parseFloat(outMsg.rate, 10);
} else outMsg.rate=-999.0;

if (outMsg.xaxis!==undefined) {
    outMsg.xaxis = parseFloat(outMsg.xaxis, 10);
} else outMsg.xaxis=-999.0;

if (outMsg.yaxis!==undefined) {
    outMsg.yaxis = parseFloat(outMsg.yaxis, 10);
} else outMsg.yaxis=-999.0;

if (outMsg.zaxis!==undefined) {
    outMsg.zaxis = parseFloat(outMsg.zaxis, 10);
} else outMsg.zaxis=-999.0;

if (outMsg.temp!==undefined) {
    outMsg.temp = parseFloat(outMsg.temp, 10);
} else outMsg.temp=-999.0;


msg.payload = {
    rate: outMsg.rate,
    xaxis: outMsg.xaxis,
    yaxis: outMsg.yaxis,
    zaxis: outMsg.zaxis,
    temp: outMsg.temp
}

return msg;

Thanks
Adam

Are there some known rules about the incoming data? Is it guaranteed that you always have a comma-separated string with 5 elements? Can the data still be considered valid if some value is missing?

If you disconnect the influx node, is the CPU utilisation the same?

I did some optimizations for your function and left in comments to explain the changes.
As I don't have any Pi (or a similarly low-performance machine) I really can't say whether it will help or how much it changes.

For a performance gain, node-red-contrib-unsafe-function is faster than the regular function node.

var values = msg.payload.trim().split(",");
// the need for trim() should be avoided at data creation.

if (values.length < 5) {
    // if parsing does not produce all elements, do nothing and return early
    return;
}

// data is considered to be OK, create output.
msg.topic = "sensor-data";

// removed the outMsg object:
// creating a temporary object that is not needed is just a waste of memory,
// and lack of memory also affects CPU usage.

// parseFloat does not take a radix option!
// parseFloat(string, radix) does not raise any error, but the radix has no effect.

// parseFloat(string) returns NaN if the value cannot be converted to a number;
// if that happens, assign the default (-999.0).
// Number.isNaN is used rather than ||, so a legitimate 0 reading is not replaced.
function toNumber(v) {
    var n = parseFloat(v);
    return Number.isNaN(n) ? -999.0 : n;
}

msg.payload = {
    rate: toNumber(values[0]),
    xaxis: toNumber(values[1]),
    yaxis: toNumber(values[2]),
    zaxis: toNumber(values[3]),
    temp: toNumber(values[4])
};

return msg;


Also note that having two nodes connected to the output of a node causes Node-RED to clone the object.

So removing the debug is likely to have quite an impact.

If you really want to output a debug, it might be better to use node.warn inside your function node. Though I think that the debug output panel may also not be that fast, so it may have an impact as well (not sure though, don't hold me to that).
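For example, something like this inside the function node (the 1-in-300 sampling is just an illustration) logs an occasional message to the debug sidebar without wiring up a second wire to a debug node:

// Inside the function node, before the final return:
// log roughly one message in every 300 instead of forking every message to a debug node.
if (Math.random() < 1 / 300) {
    node.warn(msg.payload);
}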

@TotallyInformation - re debug - yes you are right... anything you can do to avoid forking and sending to the side panel will speed it up - so once you are sure it is sending what you want, disconnect/remove them. (Though turning it off will also stop them being sent, so that will help.)


This is good to learn. Does this also apply when there's a "turned off" debug node in the flow?

As Dave says, turning it off will stop the debug msg being sent, which helps, but I think there will still be a msg clone operation. Not 100% sure though.

Yes, if it's wired, it will clone.
