Does "split" node clone messages?

I have a very simple test flow, where I'm executing two ls commands in parallel using a split node and an exec node (in exec mode). One of the commands will fail because of a bad flag; the other will succeed. The successful command writes its output to stdout and nothing to stderr. The failed command writes "" to stdout and the error message to stderr.

I need to be able to detect when the command fails and when it succeeds, but it is tricky because I have to listen on both outputs of the exec node, and depending on success/fail there is either one message or two. I'm using a join node to combine the stdout/stderr outputs of the exec node into a single message. But what you'll see in this test flow is that my join node produces output that has combined the stderr from my successful command and the stdout from my failed command.

[{"id":"6cfa5596.55c41c","type":"tab","label":"Flow 2","disabled":false,"info":""},{"id":"42748802.2f6e28","type":"inject","z":"6cfa5596.55c41c","name":"","topic":"","payload":"[\"\",\"-z\"]","payloadType":"json","repeat":"","crontab":"","once":false,"onceDelay":0.1,"x":190,"y":260,"wires":[["d111936d.1aaf5"]]},{"id":"d111936d.1aaf5","type":"split","z":"6cfa5596.55c41c","name":"","splt":"\\n","spltType":"str","arraySplt":1,"arraySpltType":"len","stream":false,"addname":"","x":350,"y":260,"wires":[["4295c7b7.cbdd98"]]},{"id":"4295c7b7.cbdd98","type":"exec","z":"6cfa5596.55c41c","command":"ls","addpay":true,"append":"","useSpawn":"false","timer":"","oldrc":false,"name":"","x":510,"y":260,"wires":[["dc4a661f.0783d8"],["3fa26db7.60f9e2"],[]]},{"id":"dc4a661f.0783d8","type":"change","z":"6cfa5596.55c41c","name":"StdOut","rules":[{"t":"set","p":"topic","pt":"msg","to":"stdout","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":700,"y":220,"wires":[["77bcd05a.03735","26b16e37.68abb2"]]},{"id":"3fa26db7.60f9e2","type":"change","z":"6cfa5596.55c41c","name":"StdErr","rules":[{"t":"set","p":"topic","pt":"msg","to":"stderr","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":690,"y":280,"wires":[["77bcd05a.03735","ba72c385.9fc1"]]},{"id":"77bcd05a.03735","type":"join","z":"6cfa5596.55c41c","name":"","mode":"custom","build":"object","property":"payload","propertyType":"msg","key":"topic","joiner":"\\n","joinerType":"str","accumulate":true,"timeout":"1","count":"","reduceRight":false,"reduceExp":"","reduceInit":"","reduceInitType":"","reduceFixup":"","x":930,"y":260,"wires":[["df8fb54f.353128"]]},{"id":"df8fb54f.353128","type":"debug","z":"6cfa5596.55c41c","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"false","x":1160,"y":380,"wires":[]},{"id":"ba72c385.9fc1","type":"debug","z":"6cfa5596.55c41c","name":"D-StdErr","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"payload","targetType":"msg","x":860,"y":360,"wires":[]},{"id":"26b16e37.68abb2","type":"debug","z":"6cfa5596.55c41c","name":"D-StdOut","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"payload","targetType":"msg","x":880,"y":160,"wires":[]}]

Is there something I'm doing wrong in how I'm executing these commands in parallel? I assumed that the split node would clone the messages so that the parallel executions would not overwrite each other's data.

You are overwriting the messages, because you send two commands and the second one will overwrite the first, as it has the same topic.

I am trying to figure out how you could do it, but a question remains: why do you want to do this ?

The example I posted is a purely synthetic one based on a much more complex flow I am developing. I boiled it down to this very simple test for easier comprehension in the forums. With the new "asynchronous" message passing in Node-RED, I see my split command as something similar to this:

But it seems that the split node does not clone each of the split msg objects. The same message reference is passed down each "branch" so changes to the msg in one branch are seen in the other, and I did not expect that.

That's not the reason. The join node will only ever send an output once it has received messages for both topics. So what happens, in sequence, is:

  • The join node receives a msg with the stdout topic from the first part of the split, i.e. from the ls command that executes correctly.
  • As that command doesn't error, no message with a stderr topic is sent, so the join node sends nothing: it is still waiting for the stderr topic.
  • The second command from the split is executed immediately after the first and errors.
  • Both a stdout msg and a stderr msg are generated.
  • The empty stdout msg arrives first at the join node; as there is still no stderr msg, the join again sends nothing, but it overwrites the first stdout value with the new empty one.
  • Now the stderr msg arrives, and since the join node has received messages for both topics it sends a message combining the most recent stdout and stderr payloads as a key/value object.
  • Your first stdout is lost because of that.

You can see / test this happening if you reverse the elements in your input array so that the erroring argument is sent first. Then the message coming out of the join node will include the error from the first command and the stdout of the second.
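The overwriting behaviour described above can be sketched in plain JavaScript. This is an illustrative simulation of a topic-keyed Join node, not Node-RED's actual implementation; the names `makeJoin` and `receive` are mine.

```javascript
// Minimal sketch of how a topic-keyed Join node accumulates parts.
// Illustrative only: makeJoin/receive are not Node-RED internals.
function makeJoin(expectedTopics) {
  const parts = {};
  return function receive(msg) {
    // A later message with the same topic silently overwrites the earlier one.
    parts[msg.topic] = msg.payload;
    const haveAll = expectedTopics.every((t) => t in parts);
    return haveAll ? { payload: { ...parts } } : null;
  };
}

// Replaying the thread's scenario: command 1 succeeds, command 2 fails.
const join = makeJoin(["stdout", "stderr"]);
join({ topic: "stdout", payload: "file1\nfile2" }); // cmd 1: join keeps waiting for stderr
join({ topic: "stdout", payload: "" });             // cmd 2: overwrites cmd 1's stdout
const out = join({ topic: "stderr", payload: "ls: invalid option -- z" });
// out.payload now pairs cmd 2's empty stdout with cmd 2's stderr;
// cmd 1's directory listing has been lost.
```

Running the three `receive` calls in order shows that the combined object can only ever hold the most recent payload per topic, which is exactly why the successful command's stdout disappears.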


I think I understand now. I was thinking about the split node all wrong. My brain was stuck in the picture I posted above, where it "feels" like it's actually creating two separate flows (with separate instances of all the nodes that follow), but that's not what's happening. There is only one join node, and the messages from both "branches" of my split are hitting that same node. I appreciate your detailed explanation.

Now I need to think of a new solution for collecting the results from the exec node in the success and failure case that is more deterministic.

I don't think that is quite right, as the Join node is configured to time out after one second rather than expecting two messages; it also has 'and after every message' ticked, but without a number in the After Number of parts field, so I am not sure exactly how it will respond. However, I am sure that something like you describe is happening.

@greg80303 if you want to know when an exec node has finished then use output 3. There will always be just one message there telling you whether it succeeded or failed.
What exactly do you want to get out of the flow?

I actually didn't mean to have the 'after every message' box ticked. In my real flow (not this example), it is not checked. I un-checked it in my example and it still produces the wrong behavior. Regardless, I still believe that what I'm doing is basically incorrect and I need a new approach.

What I expect to get out of this flow is "all the data" from every exec. I always need the result code -- that is my source-of-truth for determining success/failure status of the command. But, from there, I need stdout if the command succeeded (that data is used later in my flow), and I need stderr if the command failed (to report in a log).

With two commands though, what do you want to get out? Do you want separate messages for each command, each containing all the data from that command?

I want two separate messages (1 for each command). Each containing just the stdout/stderr/rc for that specific command. If the exec node produced a stderr msg on every execution (even if there was no stderr, it just returned ""), I could do this easily. But because there is a different number of messages output from the exec node depending on success/fail, I can't see how to do it.

If you change the debug nodes to show the full message, you can see that both the stdout and stderr messages also contain the return code. Add a Switch node on output 1 that only lets rc=0 through; now you can feed that and output 2 into the next node (no join required) and you will receive only one message per command, with the rc telling you whether it carries stdout or stderr (or you can keep the topic change nodes and use that).
If you want to know when all the messages from the input array have been processed then feed the combined message into a Join node configured in Automatic and it will combine them all into an array for you and pass it on when all the elements of the input array have been processed.
This (the joining) won't work if you use spawn mode however, as the parts may not arrive in the right order, messing up the join.
If you can't work it out come back and I will modify the flow.
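Colin's filtering idea can be sketched as a plain function (the name `filterStdout` is mine, not Node-RED's): on output 1, only pass the stdout message when `msg.rc.code` is 0. Since output 2 only fires on failure, the merged stream then carries exactly one message per command.

```javascript
// Sketch of the Switch-node filter described above. filterStdout is a
// hypothetical helper name; msg.rc is the return-code object the exec
// node attaches to its output messages.
function filterStdout(msg) {
  // Drop the (empty) stdout message that a failed command also emits;
  // the failed command is still reported via its stderr message.
  return msg.rc && msg.rc.code === 0 ? msg : null;
}
```

After this filter, a success contributes only its stdout message and a failure only its stderr message, so no Join node is needed to get one result per command.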

I really appreciate everyone's help. This is how I think I have solved the problem:

[{"id":"6cfa5596.55c41c","type":"tab","label":"Flow 2","disabled":false,"info":""},{"id":"42748802.2f6e28","type":"inject","z":"6cfa5596.55c41c","name":"Failure Case","topic":"","payload":"[\"\",\"-z\"]","payloadType":"json","repeat":"","crontab":"","once":false,"onceDelay":0.1,"x":170,"y":220,"wires":[["d111936d.1aaf5"]]},{"id":"d111936d.1aaf5","type":"split","z":"6cfa5596.55c41c","name":"","splt":"\\n","spltType":"str","arraySplt":1,"arraySpltType":"len","stream":false,"addname":"","x":350,"y":260,"wires":[["4295c7b7.cbdd98"]]},{"id":"4295c7b7.cbdd98","type":"exec","z":"6cfa5596.55c41c","command":"ls","addpay":true,"append":"","useSpawn":"false","timer":"","oldrc":false,"name":"","x":510,"y":260,"wires":[["21ac8417.bb3ffc"],["3fa26db7.60f9e2"],[]]},{"id":"3fa26db7.60f9e2","type":"change","z":"6cfa5596.55c41c","name":"StdErr","rules":[{"t":"set","p":"topic","pt":"msg","to":"stderr","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":670,"y":280,"wires":[["77bcd05a.03735","ba72c385.9fc1"]]},{"id":"77bcd05a.03735","type":"join","z":"6cfa5596.55c41c","name":"","mode":"custom","build":"object","property":"payload","propertyType":"msg","key":"topic","joiner":"\\n","joinerType":"str","accumulate":false,"timeout":"","count":"2","reduceRight":false,"reduceExp":"","reduceInit":"","reduceInitType":"","reduceFixup":"","x":970,"y":260,"wires":[["df8fb54f.353128"]]},{"id":"df8fb54f.353128","type":"debug","z":"6cfa5596.55c41c","name":"D-Final","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"payload","targetType":"msg","x":1150,"y":380,"wires":[]},{"id":"ba72c385.9fc1","type":"debug","z":"6cfa5596.55c41c","name":"D-StdErr","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"payload","targetType":"msg","x":900,"y":380,"wires":[]},{"id":"26b16e37.68abb2","type":"debug","z":"6cfa5596.55c41c","name":"D-StdOut","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"payload","targetType":"msg","x":880,"y":100,"wires":[]},{"id":"21ac8417.bb3ffc","type":"function","z":"6cfa5596.55c41c","name":"StdOut","func":"msg.topic = \"stdout\"\n\nif (msg.rc.code === 0) {\n    msg.complete = true\n}\nreturn msg;","outputs":1,"noerr":0,"x":680,"y":240,"wires":[["77bcd05a.03735","26b16e37.68abb2"]]},{"id":"dad60ac7.f44488","type":"inject","z":"6cfa5596.55c41c","name":"Success Case","topic":"","payload":"[\"\",\"-l\"]","payloadType":"json","repeat":"","crontab":"","once":false,"onceDelay":0.1,"x":170,"y":300,"wires":[["d111936d.1aaf5"]]}]

A function node on the stdout port of the exec node checks the return code. On success, it sets the complete flag. The join node is now configured to wait for two messages, but the complete flag short-circuits that wait and lets the message through.
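For reference, here is the logic of that function node restated as a plain function so it can be run outside the editor (the wrapper name `markStdout` is mine; the guard on `msg.rc` is a defensive addition, as the node in the flow assumes `rc` is always present):

```javascript
// The flow's "StdOut" function node, wrapped as a plain function.
// In Node-RED the body receives `msg` implicitly and returns it.
function markStdout(msg) {
  msg.topic = "stdout";
  // rc is attached by the exec node; code 0 means the command succeeded.
  if (msg.rc && msg.rc.code === 0) {
    // Tell the downstream Join node this message completes its group,
    // so it does not wait for a stderr part that will never arrive.
    msg.complete = true;
  }
  return msg;
}
```

On failure the stdout message passes through unflagged, so the Join node waits for the accompanying stderr message to make up its count of two.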

I think my way is cleaner.

I think I agree. I didn't read your message completely before I posted mine. Simpler without the join node. Thanks again Colin!

Amazingly I just worked out how to do virtually the same thing for myself a couple of days ago.
