Manual join of two separate paths (not split) to retain needed data

I really struggled with the topic but here's where I'm having an issue...

I am pulling data from one Tulip Table node, using it to look up data in another Tulip Table, and then using that result to update the original table.

The problem is that the Tulip API Node-RED nodes do not retain ANY message properties, even if I "move" them out of the way.

I tried using flow variables, but the message rate is too fast: the variable may be overwritten by a later message before the update runs, so the update ends up targeting the wrong record.
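To illustrate the race being described, here is a minimal stand-alone sketch in plain Node.js (not a Node-RED function node; `flowContext`, `lookupDiscrepancy`, and the record IDs are all illustrative names): a single shared "flow variable" gets clobbered as soon as two messages overlap an async lookup.

```javascript
const flowContext = {}; // stands in for flow.set()/flow.get()

async function lookupDiscrepancy(recordId) {
  // stand-in for the Tulip API call, which returns after some delay
  await new Promise(resolve => setTimeout(resolve, 50));
  return { lookedUp: recordId };
}

async function handleMessage(recordId) {
  flowContext.id = recordId; // stash the ID before the lossy lookup
  const result = await lookupDiscrepancy(recordId);
  // By the time the lookup returns, a later message may have overwritten
  // flowContext.id, so the update targets the wrong record:
  return { result, idUsedForUpdate: flowContext.id };
}

async function demo() {
  // two messages arrive back-to-back, faster than the 50 ms lookup
  const [a, b] = await Promise.all([
    handleMessage("rec-1"),
    handleMessage("rec-2"),
  ]);
  return [a.idUsedForUpdate, b.idUsedForUpdate]; // both come back "rec-2"
}
```

Both messages end up using the second message's ID, which is exactly the wrong-record symptom described above.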

So what I've done is pass the same message to both the Record Lookup node and a Join node. I want to merge the data back together, preserving the payload from the Record Lookup output while also preserving the record ID I need to update the first table.

Below is the horrible spaghetti flow. The MRB Inventory node looks up 100 records at a time (the Tulip limit) and then splits the array into 100 messages, which are looked up by the Lookup Discrepancy Log node. The Change node is where I was attempting to preserve the ID column, but it doesn't work because the Lookup Discrepancy Log node preserves nothing on its output.

You can use this technique to protect a section of a flow by only allowing one message through at a time. Combined with the flow context method, that works: don't allow the next message in until you have picked the context value back up.

[{"id":"b6630ded2db7d680","type":"inject","z":"bdd7be38.d3b55","name":"","props":[{"p":"payload"},{"p":"topic","vt":"str"}],"repeat":"","crontab":"","once":false,"onceDelay":0.1,"topic":"","payload":"","payloadType":"date","x":120,"y":1700,"wires":[["ed63ee4225312b40"]]},{"id":"ed63ee4225312b40","type":"delay","z":"bdd7be38.d3b55","name":"Queue","pauseType":"rate","timeout":"5","timeoutUnits":"seconds","rate":"1","nbRateUnits":"1","rateUnits":"minute","randomFirst":"1","randomLast":"5","randomUnits":"seconds","drop":false,"allowrate":false,"outputs":1,"x":290,"y":1700,"wires":[["d4d479e614e82a49","7eb760e019b512dc"]]},{"id":"a82c03c3d34f683c","type":"delay","z":"bdd7be38.d3b55","name":"Some more stuff to do","pauseType":"delay","timeout":"5","timeoutUnits":"seconds","rate":"1","nbRateUnits":"1","rateUnits":"second","randomFirst":"1","randomLast":"5","randomUnits":"seconds","drop":false,"allowrate":false,"outputs":1,"x":780,"y":1700,"wires":[["7c6253e5d34769ac","b23cea1074943d4d"]]},{"id":"2128a855234c1016","type":"link in","z":"bdd7be38.d3b55","name":"link in 1","links":["7c6253e5d34769ac"],"x":75,"y":1780,"wires":[["3a9faf0a95b4a9bb"]]},{"id":"7c6253e5d34769ac","type":"link out","z":"bdd7be38.d3b55","name":"link out 1","mode":"link","links":["2128a855234c1016"],"x":645,"y":1780,"wires":[]},{"id":"b23cea1074943d4d","type":"debug","z":"bdd7be38.d3b55","name":"OUT","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"payload","targetType":"msg","statusVal":"","statusType":"auto","x":650,"y":1620,"wires":[]},{"id":"d4d479e614e82a49","type":"debug","z":"bdd7be38.d3b55","name":"IN","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"payload","targetType":"msg","statusVal":"","statusType":"auto","x":450,"y":1620,"wires":[]},{"id":"3a9faf0a95b4a9bb","type":"function","z":"bdd7be38.d3b55","name":"Flush","func":"return {flush: 1}","outputs":1,"noerr":0,"initialize":"","finalize":"","libs":[],"x":170,"y":1780,"wires":[["ed63ee4225312b40"]]},{"id":"7eb760e019b512dc","type":"function","z":"bdd7be38.d3b55","name":"Some functions to be performed","func":"\nreturn msg;","outputs":1,"noerr":0,"initialize":"","finalize":"","libs":[],"x":530,"y":1700,"wires":[["a82c03c3d34f683c"]]},{"id":"e35f37deeae94860","type":"comment","z":"bdd7be38.d3b55","name":"Set the queue timeout to larger than you ever expect the process to take","info":"","x":250,"y":1580,"wires":[]}]
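The mechanics of the flow above can be simulated outside Node-RED. This is a plain Node.js sketch of the guard pattern (the `Gate` class and names are illustrative, not part of any API): only one message is allowed into the protected section at a time, and finishing the section releases the next queued message, which in the posted flow is the `Flush` function node sending `{flush: 1}` back into the rate-limiting Delay node.

```javascript
class Gate {
  constructor(worker) {
    this.worker = worker; // the protected section of the flow (async)
    this.queue = [];
    this.busy = false;
  }
  push(msg) {
    this.queue.push(msg); // the Delay node's rate-limit queue
    this._drain();
  }
  async _drain() {
    if (this.busy) return; // a message is already inside the gate
    this.busy = true;
    while (this.queue.length > 0) {
      // awaiting the worker is the equivalent of waiting for {flush: 1}
      await this.worker(this.queue.shift());
    }
    this.busy = false;
  }
}

// demo: three overlapping pushes are still processed strictly one at a time
const order = [];
const gate = new Gate(async id => {
  order.push(`start ${id}`);
  await new Promise(resolve => setTimeout(resolve, 20)); // stand-in for the Tulip update
  order.push(`end ${id}`);
});
["a", "b", "c"].forEach(id => gate.push(id));
```

Because processing is strictly serial, a single flow-context slot can no longer be clobbered mid-lookup.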

and ideally you should report the issue to the author of the nodes - and see if they are willing to fix them to pass through data correctly.

I was afraid that would be the answer :slight_smile:

Ok, crazy idea...

Could I save the information I need in flow variables and reconstruct the message in a template (or function) node and then somehow use a trigger node to wait until the message has been reconstructed before allowing the next input?
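That idea is workable once only one message is in flight. As a minimal plain-Node.js sketch (assumed names throughout; `flow` stands in for Node-RED's flow context, and `lossyLookup` for the Tulip node that strips message properties): stash the ID before the lookup, reconstruct the message after it, and rely on serial processing to keep the single context slot safe.

```javascript
const flow = new Map(); // stands in for flow.set()/flow.get()

async function lossyLookup(record) {
  // stand-in for the Tulip lookup, whose output preserves nothing
  await new Promise(resolve => setTimeout(resolve, 10));
  return { lookupResult: `discrepancy for ${record.id}` }; // ID is dropped
}

async function processSerially(records) {
  const updates = [];
  for (const record of records) {     // one in flight at a time = the gate
    flow.set("pendingId", record.id); // stash before the lossy lookup
    const result = await lossyLookup(record);
    // reconstruct the message: lookup payload plus the preserved record ID
    updates.push({ id: flow.get("pendingId"), ...result });
  }
  return updates;
}
```

With serialization, each reconstructed update carries the correct original ID, which is the property that was impossible to guarantee with overlapping messages.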

Did the method I suggested not work?

For now I'm treating that as a last resort, because I have about 18,000 records to update, so I was trying to come up with a novel solution using some kind of FIFO or queue so I would know when each record was processed.

That is exactly what the flow I posted does.

Whoops, I think I got stuck on the time based delay method and failed to understand what you did.

That ended up working quite nicely. Thanks!


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.