I'm using Node-RED to send data from a PLC to InfluxDB and then display it in Grafana. However, I'm unsure whether Node-RED has any limitation on the number of variables it can handle.
Hi @Marsantos Welcome to the forum.
Node-RED will have the same limitations as any other application: it will be bound by memory and by InfluxDB itself. There are no specific fixed limits in Node-RED. The node you chose to use might have some limitations (but that is doubtful).
Why do you ask?
What have you tried?
I got it, thanks
Today I'm reading more than 70 variables from a PLC, and so far it's working perfectly. But the application may grow (to 140 variables or more), and I'm unsure whether I can keep using Node-RED for this.
That depends how you collect the variables.
If you read them all at once, then this is a non issue.
If you read them individually, at a fast rate, you are asking for problems - data inconsistency/inaccuracy being one of them!
If you are interested in understanding further what I mean, tell us about your setup. What type of PLC. What communications protocol you are using. How your flows are structured (i.e. reading 1 address at a time, reading multiples) etc etc.
Steve
I'm reading variables from some Rockwell PLCs using the eth-ip node, and I write the values to InfluxDB with the influxdb out node.
Ah, unfortunately, I do not know how that node is written, so I cannot tell whether it reads values one at a time or in bulk.
What I can say is: due to how Node-RED sends a msg down each wire, when a flow branches off to multiple nodes it creates a clone of the msg for each wire.
This in itself is not really an issue: 140 variables at approx 8 bytes each, plus some additional overhead like the JS msg object - let's round up to 128 bytes per msg. Then 140 clones is 128 x 140 = 17920 bytes (about 17.5 KB) - this is not gonna stress Node-RED one little bit. However, hitting 140 individual InfluxDB nodes in one go might!
You could restructure your flow to be more dynamic by parsing the incoming data into an array of updates, then either split it (split node) to make individual updates OR look into bulk Influx operations.
e.g. READ DATA --> parse data into an array --> bulk update.
That's it - 3 nodes, whether it's 1 read or 1000 reads.
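To make that concrete, here is a rough sketch of the middle "parse data into an array" function node. It assumes the read node delivers msg.payload as a flat object of { variableName: value } pairs (check what your eth-ip node actually emits), and the "plc_data" measurement name is just an illustration; the influxdb batch node accepts an array of points like this:

// ASSUMPTION: msg.payload is a flat { variableName: value } object.
const points = Object.entries(msg.payload).map(([name, value]) => ({
    measurement: "plc_data",   // hypothetical measurement name
    fields: { value: value },  // one field per variable
    tags: { variable: name },  // tag each point with the variable name
    timestamp: new Date()      // one shared read timestamp
}));

msg.payload = points;          // a single message carries every variable
return msg;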
Unfortunately, my Influx-fu is not sharp enough to help you further.
Hopefully you can look into that yourself or someone with more Influx knowledge will pop in and give you a leg up.
Best of luck.
Thank you very much Steve!
I'm doing something similar to what Steve suggested (albeit with just 11 data points).
Instead of using the influxdb batch node, I use the influxdb out node (because I use the same tags for all the energy data, and the measurement is specified in the output node).
I format the "parse data into an array" step like below.
If you need to use different tags & measurements for each datapoint then it would need to be structured differently, and of course use the batch node instead.
...
const output = [
    [{
        // first element: the fields written to InfluxDB
        grid: grid,
        solar: solarW,
        solarDay: parseInt(dailySolar),
        diverted: diverted,
        yesterday: energyDiverted.yesterday,
        today: energyDiverted.today,
        voltage: volts,
        usage: usage,
        temperature: tempC,
        rssi: rssi,
        time: timestamp
    },
    {
        // second element: the tags applied to all of the fields above
        tag1: "node10",
        tag2: "energy"
    }]
]
return { payload: output }
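For the batch-node variant mentioned above (different tags & measurements per datapoint), a rough sketch reusing the same variables - the measurement and tag names here are invented, and the influxdb batch node expects msg.payload to be an array of point objects:

// Each point carries its own measurement, fields and tags.
const batch = [
    {
        measurement: "energy",                      // hypothetical name
        fields: { grid: grid, solar: solarW },
        tags: { tag1: "node10", tag2: "energy" },
        timestamp: timestamp
    },
    {
        measurement: "environment",                 // hypothetical name
        fields: { temperature: tempC, rssi: rssi },
        tags: { tag1: "node10", tag2: "sensors" },
        timestamp: timestamp
    }
];
return { payload: batch };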
I don't understand how that flow works. You appear to be sending the same data to all the influxdb nodes.
@PaulReed I'm going to try this approach, as you and @Steve suggested.
@Colin in my case I am addressing all the variables within the first block, and in the others I am passing only the specific variable.
Sorry, I don't know what you mean by that. It is what is going into the influxdb nodes that I am interested in. In the upper group you appear to be sending the same payload to multiple influxdb nodes.
@Marsantos - I don't suppose you are using home-assistant nodes by any chance?