The Node-RED application is currently running in a Docker container, on an older version of Docker running on Hyper-V.
In this flow I want to read out different OPCUA tags using the "node-red-contrib-opcua" nodes with a READ function. I don't use the OPC subscription because I want to push these tags to an InfluxDB every second to create graphs and do a calculation on some variables. (Flow below)
When reading all the tags at a rate of one full read every 5 seconds there is no problem, but when reading at a rate of one full read per second, Node-RED keeps crashing. (With one read I mean reading 90 different OPC tags.)
Does anybody have an idea on how to prevent this crash?
I need the data every second because I want to make a power totaliser.
I assume the CSV file you read in is (normally) unchanging? If it stays the same (or mostly the same), there is no need to read that file in every second. Read it once, run the CSV node and the function once, and store the result in context. Then, separately, when you poll the OPCUA server, use the data stored in context.
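A minimal sketch of that caching idea (the context key "csvData" is just an illustrative name):

```javascript
// Startup path: Inject (fire once) -> file in -> csv -> this function node.
// Cache the parsed CSV in flow context instead of re-reading it every second.
flow.set('csvData', msg.payload);
return null; // nothing to pass downstream

// Polling path: in the function that processes the OPCUA values,
// read the cached table back instead of touching the file:
// const csvData = flow.get('csvData') || [];
```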
Secondly, you seem to be splitting the data to do 90 individual reads. If we assume each OPCUA read takes (on average) 15ms, then 90 × 15ms = 1.35 secs. In other words, you get a build-up of messages, as there are still quite a few msgs "in the queue" (so to speak) when the next poll fires. On top of that, you will get inconsistent data (the first tag read will be 1.35 secs older than the last tag read). And on top of that, OPCUA has its own overhead.
There are a few approaches to improve this.
Use the readmultiple function of the OPCUA node and ditch the SPLIT/JOIN nodes.
OR read all tags once (using an inject at start-up), store their values in context, then use subscribe and update the context values as and when new values arrive (see the sketch below).
OR, a better solution that would give you consistent data: ditch OPCUA and use the s7 nodes, then in your PLC move all the values you want to collect into one contiguous block of memory and read the full 90 items in ONE go. This means every piece of data is consistent with the others (read on the same scan of the PLC).
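For the second option, the subscribe side can be one small function node; a rough sketch (the context key and message properties are illustrative, so check what your OPCUA nodes actually emit):

```javascript
// Wired to the subscribe output of the OPC-UA client.
// Assumes msg.topic carries the nodeId and msg.payload the new value.
const tags = flow.get('tagValues') || {};
tags[msg.topic] = { value: msg.payload, updated: Date.now() };
flow.set('tagValues', tags);
return null; // the 1-second InfluxDB job reads flow.get('tagValues')
```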
I'm currently working on my thesis, setting up an energy monitoring system for a tank terminal.
I already used S7comm in previous versions to communicate directly with the PLCs, but I want to make the system as universally connectable as possible, so I use OPCUA for the non-PLC or non-Siemens components.
Do you have any idea how to provide the nodeIds to the OPCUA node for the multiple-read function?
Currently I'm injecting an array with the nodeIds (example below), but this doesn't seem to work. The node only displays "nodeId stored" but doesn't read anything.
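For illustration, the injected payload looks roughly like this (the node IDs here are placeholders, not my real tags):

```javascript
// Function node behind an Inject (fires once at start-up).
// Builds the array of nodeId strings sent to the OPC-UA client.
msg.payload = [
    'ns=3;s="Energy_DB".Pump1_Power',
    'ns=3;s="Energy_DB".Pump2_Power',
    'ns=3;s="Energy_DB".Line_Voltage'
];
return msg;
```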
I also find very little documentation about these OPCUA nodes.
At the moment, you must use the OPC-UA Item node to add each separate item to be read. If you have several parameters, I suggest adding them in a subflow to keep everything in check:
First of all, you have an automated inject that will clear the items when the flow starts.
A bit later, you add all the items (I send one pulse into the subflow that contains all the OPC-UA items; I actually read from 20 different modules on each machine, I just showed one of them).
Then after 5 seconds, I start polling the OPC-UA client with readmultiple commands, and get all the parameters at once.
The output is a double array: one for value objects and one for item names (with a corresponding index). Because it is a mess and does not include readable variable names, I had to build a function that scans the returned array, checks the variable address strings and correlates each address to the variable I'm looking for (fix items). Then another function takes each variable and creates an object separated by machine modules, using the variable names (normalize object):
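A sketch of the "fix items" step, using a lookup map instead of a long if-else chain (the addresses and names are placeholders, and it assumes msg.items is an array of nodeId strings parallel to msg.payload):

```javascript
// "fix items": translate raw OPC-UA address strings to readable tag names.
// In practice this table covers every tag and is much longer.
const names = {
    'ns=3;s="DB10".M01_Current': 'm01_current',
    'ns=3;s="DB10".M01_Voltage': 'm01_voltage'
};
// Parallel array of readable names, same indexes as msg.payload.
msg.names = msg.items.map(id => names[id] || id); // fall back to the raw address
return msg;
```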
Because I had several issues with this, I made a suggestion on GitHub to the node programmer to add an option similar to the S7 nodes, where you can upload a CSV file with all the OPC-UA node IDs and variable names and get an already normalized object or array at the output. He said he'll take a look, but in the meantime I found this to be the best workflow for me.
Reading out all the OPC variables with the multiple-read function eventually worked out for me; I just needed to trigger it by sending a msg.multipleread each read cycle.
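For reference, the trigger is tiny; something like this worked (whether the client keys on msg.multipleread specifically, or on any message once the nodeIds are stored, may depend on the node version):

```javascript
// Fired by an Inject node repeating every 1 second.
// Kicks off the readmultiple action on the OPC-UA client node.
msg.multipleread = true;
return msg;
```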
Given the fact that I need to read out more than 320 variables, using the OPCUA Item node is not an option for me. I used an array with all the OPC IDs, which I inject once at the start into the client node.
From the client I get individual messages back, which I combine into an array. Hoping the sequence of IDs didn't change, I match the ID names with the values by combining the name array with the value array.
The OPC-UA client will return to you a single message with all the variables in an array inside the payload.
The message also has the msg.items array with all the node IDs that you read, and the indexes correlate in both (so you know which value corresponds to which node):
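So pairing them up is just a matter of walking the indexes, something like this (sketch; depending on the node version, each payload entry may be a plain value or a DataValue object):

```javascript
// Zip node IDs and values together by their shared index.
const values = msg.payload; // values array, parallel to msg.items
msg.payload = msg.items.map((id, i) => ({ nodeId: id, value: values[i] }));
return msg;
```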
Interesting, so if I make an OPC Item node I can directly combine the tag name with the tag value?
Do you maybe know a way to include this tag name in the array I send to the client at initialization?
I currently only send an array of tag IDs, but it would be nice if I could include the tag names directly in that array.
At the moment it is not possible to do that. This is what I discussed with the node creator on his GitHub page.
To solve this, I have an auxiliary node that scans the whole msg.items array at the output and sets the proper tag name on each item (basically a for loop with a massive if-else if-...-else chain):
Because the machines have several modules, I named each tag mXX_tag-name.
Then a second node loops once through both arrays (payload and items), reading the parameter values and the tag names, and inserts them into an object:
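A sketch of that second loop (it assumes the auxiliary node left the readable names, e.g. "m01_current", in a parallel msg.names array, as in the "fix items" sketch earlier in the thread):

```javascript
// Group the flat value list into one object per machine module.
// Tag names follow the mXX_tag-name convention described above.
const modules = {};
for (let i = 0; i < msg.payload.length; i++) {
    const name = msg.names[i];      // e.g. "m01_current"
    const mod  = name.slice(0, 3);  // "m01"
    const tag  = name.slice(4);     // "current"
    if (!modules[mod]) modules[mod] = {};
    modules[mod][tag] = msg.payload[i];
}
msg.payload = modules; // e.g. { m01: { current: ..., voltage: ... }, ... }
return msg;
```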