Monitor Large Array of Objects for Changes

Hello Everyone,

I am trying to monitor a large array of objects. Each object contains a Machine ID and a Mold ID coming from a SQL database. Every time the Mold ID changes in a particular Machine record, I want to record that change as the mold "Currently in Machine".

My issue is that I want to see if anyone out there knows of an easier way to do this. As shown below, I have a trigger that runs the query against SQL. The output goes through a Split node, then a Switch node routes each message down its particular machine path. Each Switch output has a Filter node, and then I save each msg.payload.MoldID to a global variable to reinsert back into a new table in the database.

Starter Flow

SQL Output Debug

Split Node Output

Each Message Split into Switch node that looks at msg.payload.MachID

I would like to just use something like a for loop for this, but I am not sure where to even start with that. Any help would be great...
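The whole Split/Switch/Filter chain can collapse into a single loop in a function node. A minimal sketch, assuming the SQL node delivers the rows as an array of objects in msg.payload with MachID and MoldID properties (names taken from the screenshots; adjust to your actual column names). In Node-RED the `seen` map would live in global context via global.get()/global.set():

```javascript
// Given the latest query rows and the previously seen mold per machine,
// collect the rows whose mold changed and update the map in place.
function detectMoldChanges(rows, seen) {
    const changed = [];
    for (const row of rows) {
        if (seen[row.MachID] !== row.MoldID) {
            seen[row.MachID] = row.MoldID; // remember the new mold
            changed.push(row);             // this machine changed molds
        }
    }
    return changed;
}

// Example: machine 2 switched from mold "B" to "C"; machine 1 is unchanged.
const seen = { 1: "A", 2: "B" };
const changed = detectMoldChanges(
    [{ MachID: 1, MoldID: "A" }, { MachID: 2, MoldID: "C" }],
    seen
);
// changed now holds only the machine 2 row, and seen[2] is "C"
```

In a function node you would read `seen` with `global.get("moldByMachine") || {}` before the loop, write it back with `global.set("moldByMachine", seen)` after, and set `msg.payload = changed` so only the machines whose mold actually changed flow on to the insert.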

Thank you!

How about letting the database do some work?
An SQL query could identify machines where the mould id is different from the previous record and no matching record exists in the second table.
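To make that concrete, a function node could build the change-detection query and hand it to the database node. The table and column names here (RunningMachines, CurrentMolds, MachID, MoldID) are assumptions based on the thread; substitute your own schema. Depending on which database node you use, it may expect the query in msg.topic or msg.query:

```javascript
// Sketch of the query a function node could pass to a database node.
// A LEFT JOIN against the tracking table finds machines that are either
// not recorded yet, or recorded with a different mould than is running.
const msg = {};
msg.topic = `
    SELECT r.MachID, r.MoldID
    FROM RunningMachines r
    LEFT JOIN CurrentMolds c ON c.MachID = r.MachID
    WHERE c.MachID IS NULL      -- machine not yet recorded
       OR c.MoldID <> r.MoldID  -- or the mould has changed
`;
// return msg;  // in a real function node
```

The database then returns only the rows that need updating, so Node-RED never has to compare anything itself.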

The issue with that is that the contractor who set up our database didn't create a table listing the current mold in each machine. The way I am grabbing this information is by looking at currently running machines and only pulling Machine ID and Mold ID. Not all machines are running at any one time, so I thought I would run this query multiple times a day until all my global variables are storing values. That way, when a job completes and that machine (or row) drops out of the currently-running table, I still have a Mold ID stored for that machine even though it can no longer be seen. Does that make sense?

Well, if you want the database to record mould ids against machines, you certainly need a table of machine ids and a table of mould ids.

Clearly there is scope for acquiring new machines or moulds so you need manual or automatic processes to update those tables.

My personal feeling is that if you need to store the results of a db query, the most obvious place to store it is in the database, not in context storage for an application on a PC somewhere. Whatever suits you.

Similarly, I do data extraction and manipulation in the database rather than dragging the data into Node-RED and processing it there. This might be a hang-over from the days when you had to avoid excessive memory use on a Unix box with 64MB. It keeps my NR flows simple though: inject a complicated SQL query, retrieve the processed result.

Agree with jbudd on this.

However, if you are forced to work with what you have, you are going to want to use JavaScript in a function node. Doing this in nodes alone will be possible but almost certainly complex, large, and painful.

I have a node that monitors a brand of smart home heating controller. Because there is no published API, and due to the way the controller works, I query the full API data once a minute. To extract the differences from the previous request, I use a node.js library, which can certainly be used in a function node as well. The package I use to do the comparison is called "deep-object-diff".
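For reference, the idea that package implements can be sketched in plain JavaScript. This hand-rolled version (not the package's actual code, which handles nesting, additions, and deletions recursively) reports only the keys whose value changed between two flat snapshots, which is exactly the mould-per-machine case:

```javascript
// Illustration of the comparison idea: report the entries whose value
// differs between the previous snapshot and the current one.
function shallowDiff(prev, next) {
    const out = {};
    for (const key of Object.keys(next)) {
        if (prev[key] !== next[key]) out[key] = next[key];
    }
    return out;
}

const prev = { 1: "A", 2: "B" };
const next = { 1: "A", 2: "C" };
const delta = shallowDiff(prev, next);
// delta contains only machine 2's new mould, "C"
```

With the real package it is just `const { diff } = require("deep-object-diff");` and `diff(prev, next)`, which also copes with nested objects.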


jbudd, thanks for your input. I went down the route of researching how to do this in just SQL, and I ended up figuring it out.


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.