Array value transfer

Hello everyone!
I'm a little stuck. I have the following dynamic array. I would like to pass its values one after the other to a variable, which I would then pass to the BigExec node. I think the output of that node could be used so that the next value is only passed once the previous transfer has completed. How can I do that?
(sorry for my poor english still)


You can use a Split node to split an array into a number of messages, one element of the array in each message.

Yes, we have that.

// replace '=' and ':' separators with ',' and split into an array
var arr = str.split('=').join(',').split(':').join(',').split(',');
msg.$1 = arr;
return msg;

The input text is very long.
The split itself isn't the problem.

I have variable-length text that I need to break up into shorter units. I get these in an array. The contents of the array should be passed one after the other to the $1 argument. So not the whole array at once, but the individual values in the array.

This is the .sh script; that's where the $1 argument is used. It must not be more than 200 characters, because Google TTS does not work otherwise. That's why I split the input text into an array. And I need to pass the values of this array to BigExec one after the other, each one only when the previous one has finished running.
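One way to build such an array (a sketch in plain JavaScript, usable in a Function node; `chunkText` is a hypothetical helper name, and the 200-character limit is the Google TTS limit mentioned in the thread) is to split on whitespace and accumulate words until the next word would push a chunk over the limit:

```javascript
// Split text into chunks of at most `limit` characters,
// breaking only at word boundaries (assumes no single word
// is longer than the limit).
function chunkText(text, limit) {
    const chunks = [];
    let current = "";
    for (const word of text.split(/\s+/).filter(Boolean)) {
        if (current && (current.length + 1 + word.length) > limit) {
            chunks.push(current);   // current chunk is full, start a new one
            current = word;
        } else {
            current = current ? current + " " + word : word;
        }
    }
    if (current) chunks.push(current);
    return chunks;
}

// In a Function node you might then do:
// msg.payload = chunkText(msg.payload, 200);
// return msg;
```

Feeding the resulting array through a Split node then gives one message per chunk, as suggested above.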

say() { local IFS=+; /usr/bin/mplayer -cache 256 -cache-min 1 -ao alsa -really-quiet -noconsolecontrols "http://translate.google.com/translate_tts?ie=UTF-8&client=tw-ob&q=$1&tl=hu"; }
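Since the whole problem is the 200-character ceiling, it may be worth guarding the script itself. This is a hypothetical wrapper (`say_checked` is an invented name) around the `say` function above that refuses over-long chunks instead of silently producing no audio:

```shell
# Guard sketch: refuse chunks over the 200-character TTS limit
# before handing them to the say() function defined above.
say_checked() {
  if [ "${#1}" -gt 200 ]; then
    echo "chunk too long: ${#1} chars" >&2
    return 1
  fi
  say "$1"
}
```

The non-zero return code also gives the exec node something concrete to react to.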

Google's TTS service has an input string length limit of 200 characters. If the text to be translated is greater than 200 characters, it will be intelligently broken into segments and the output will consist of an array of URLs linking to sequential audio files encoding each segment.

That way it will be clear what I want :crazy_face:

As I said, if you have an array, you can feed that through a Split node (note, the node, not javascript function) and it will send each element in a separate message.

OK, it doesn't work :pensive:

What won't work? If you copy your array into msg.payload and feed it into a Split node then it will send each element as a separate message.

I think the problem is that it would be sent all at once, but I think what he’s looking for is some sort of ‘await’ for the call of the shell script.

The exec node returns an exit code once it has finished. I would use that to trigger the sh script pieces one by one: run the next piece only if the exec node for the previous one returned 0.
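That idea can be sketched outside Node-RED as a tiny serial queue (hypothetical names; in the actual flow the exec node's return-code output would play the role of the `done(code)` callback):

```javascript
// Serial queue: run one item at a time; release the next item only
// when the previous run reports exit code 0, and stop on failure.
class SerialQueue {
    constructor(run) {
        this.run = run;        // run(item, done): calls done(exitCode) when finished
        this.items = [];
        this.busy = false;
        this.stopped = false;
    }
    push(item) {
        this.items.push(item);
        this.next();
    }
    next() {
        if (this.busy || this.stopped || this.items.length === 0) return;
        this.busy = true;
        this.run(this.items.shift(), (code) => {
            this.busy = false;
            if (code === 0) this.next();   // success: release the next item
            else this.stopped = true;      // failure: halt the queue
        });
    }
}
```

This is only an illustration of the ordering logic; in the flow itself the queueing is done by nodes, as shown in the examples that follow.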

Or, if I understand your excerpt above correctly, you send longer text and get an array in return. That array is said to contain consecutive audio files, which you can download and play in a row. Probably a better solution unless you have a constant stream of text you want to be spoken.

You're almost right, but I don't get sound files; I get strings in the payload array, and I read these with the script mentioned above. And yes, there should be some kind of "wait".


I would do something like this...
Get node-red-contrib-queue-gate.

I created test.sh:

#!/bin/bash

sleep 1
ls

[{"id":"25e1eef6ce4b9424","type":"inject","z":"f6f2187d.f17ca8","name":"","props":[{"p":"payload"}],"repeat":"","crontab":"","once":false,"onceDelay":0.1,"topic":"","payload":"[\"URL1\", \"URL2\", \"URL3\"]","payloadType":"json","x":370,"y":140,"wires":[["3af88b16ca5d445c","3ae6cfe8c5979ce1"]]},{"id":"3af88b16ca5d445c","type":"split","z":"f6f2187d.f17ca8","name":"","splt":"\\n","spltType":"str","arraySplt":1,"arraySpltType":"len","stream":false,"addname":"","x":610,"y":140,"wires":[["2bea552356cce723"]]},{"id":"672ad94b70a25389","type":"debug","z":"f6f2187d.f17ca8","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"payload","targetType":"msg","statusVal":"","statusType":"auto","x":1250,"y":140,"wires":[]},{"id":"2bea552356cce723","type":"q-gate","z":"f6f2187d.f17ca8","name":"","controlTopic":"control","defaultState":"queueing","openCmd":"open","closeCmd":"close","toggleCmd":"toggle","queueCmd":"queue","defaultCmd":"default","triggerCmd":"trigger","flushCmd":"flush","resetCmd":"reset","peekCmd":"peek","dropCmd":"drop","statusCmd":"status","maxQueueLength":"100","keepNewest":false,"qToggle":false,"persist":false,"storeName":"memory","x":790,"y":140,"wires":[["a081eb098238d3ef"]]},{"id":"07de6661d34301cf","type":"change","z":"f6f2187d.f17ca8","name":"","rules":[{"t":"set","p":"topic","pt":"msg","to":"control","tot":"str"},{"t":"set","p":"payload","pt":"msg","to":"trigger","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":830,"y":240,"wires":[["2bea552356cce723"]]},{"id":"a081eb098238d3ef","type":"exec","z":"f6f2187d.f17ca8","command":"./test.sh","addpay":"","append":"","useSpawn":"false","timer":"","winHide":false,"oldrc":false,"name":"","x":1020,"y":140,"wires":[["672ad94b70a25389"],["98fddd6acd50f256"],["c5d08422d302e3e6","07de6661d34301cf"]]},{"id":"98fddd6acd50f256","type":"debug","z":"f6f2187d.f17ca8","name":"","active":false,"tosidebar":true,"console":false,"tostatus":false,"complete":"payload","targetType":"msg","statusVal":"","statusType":"auto","x":1250,"y":180,"wires":[]},{"id":"c5d08422d302e3e6","type":"debug","z":"f6f2187d.f17ca8","name":"","active":false,"tosidebar":true,"console":false,"tostatus":false,"complete":"payload","targetType":"msg","statusVal":"","statusType":"auto","x":1250,"y":220,"wires":[]},{"id":"3ae6cfe8c5979ce1","type":"delay","z":"f6f2187d.f17ca8","name":"","pauseType":"delay","timeout":"100","timeoutUnits":"milliseconds","rate":"1","nbRateUnits":"1","rateUnits":"second","randomFirst":"1","randomLast":"5","randomUnits":"seconds","drop":false,"allowrate":false,"outputs":1,"x":580,"y":240,"wires":[["07de6661d34301cf"]]},{"id":"c0ca14cbc9332aed","type":"comment","z":"f6f2187d.f17ca8","name":"https://discourse.nodered.org/t/array-value-transfer/78735","info":"https://discourse.nodered.org/t/array-value-transfer/78735","x":830,"y":60,"wires":[]},{"id":"9b5222b0026d11c0","type":"comment","z":"f6f2187d.f17ca8","name":"test.sh","info":"#!/bin/bash\n\nsleep 1\nls\n","x":750,"y":320,"wires":[]}]

There are probably better ways, but this should do the trick.

Possible enhancements:

  • Better trigger: the 100 ms delay might not be the best solution, depending on the speed and load of the system, and it will not work for a constant stream of strings.
  • Error handling: this is missing. If test.sh fails, the process will stop.

Arguably this is the best way to ensure one operation completes before the next starts: it doesn't use any contrib nodes, it has been extensively tested, and I am confident it does not suffer from any race conditions.

Open the Comment node to see detailed instructions on using it.

[{"id":"b6630ded2db7d680","type":"inject","z":"bdd7be38.d3b55","name":"","props":[{"p":"payload"},{"p":"topic","vt":"str"}],"repeat":"","crontab":"","once":false,"onceDelay":0.1,"topic":"","payload":"","payloadType":"date","x":140,"y":840,"wires":[["ed63ee4225312b40"]]},{"id":"ed63ee4225312b40","type":"delay","z":"bdd7be38.d3b55","name":"Queue","pauseType":"rate","timeout":"5","timeoutUnits":"seconds","rate":"1","nbRateUnits":"1","rateUnits":"minute","randomFirst":"1","randomLast":"5","randomUnits":"seconds","drop":false,"allowrate":false,"outputs":1,"x":310,"y":840,"wires":[["d4d479e614e82a49","7eb760e019b512dc"]]},{"id":"a82c03c3d34f683c","type":"delay","z":"bdd7be38.d3b55","name":"Some more stuff to do","pauseType":"delay","timeout":"5","timeoutUnits":"seconds","rate":"1","nbRateUnits":"1","rateUnits":"second","randomFirst":"1","randomLast":"5","randomUnits":"seconds","drop":false,"allowrate":false,"outputs":1,"x":800,"y":840,"wires":[["7c6253e5d34769ac","b23cea1074943d4d"]]},{"id":"2128a855234c1016","type":"link in","z":"bdd7be38.d3b55","name":"link in 1","links":["7c6253e5d34769ac"],"x":95,"y":920,"wires":[["3a9faf0a95b4a9bb"]]},{"id":"7c6253e5d34769ac","type":"link out","z":"bdd7be38.d3b55","name":"link out 1","mode":"link","links":["2128a855234c1016"],"x":665,"y":920,"wires":[]},{"id":"b23cea1074943d4d","type":"debug","z":"bdd7be38.d3b55","name":"OUT","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"payload","targetType":"msg","statusVal":"","statusType":"auto","x":670,"y":760,"wires":[]},{"id":"d4d479e614e82a49","type":"debug","z":"bdd7be38.d3b55","name":"IN","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"payload","targetType":"msg","statusVal":"","statusType":"auto","x":470,"y":760,"wires":[]},{"id":"3a9faf0a95b4a9bb","type":"function","z":"bdd7be38.d3b55","name":"Flush","func":"return {flush: 1}","outputs":1,"noerr":0,"initialize":"","finalize":"","libs":[],"x":190,"y":920,"wires":[["ed63ee4225312b40"]]},{"id":"7eb760e019b512dc","type":"function","z":"bdd7be38.d3b55","name":"Some functions to be performed","func":"\nreturn msg;","outputs":1,"noerr":0,"initialize":"","finalize":"","libs":[],"x":550,"y":840,"wires":[["a82c03c3d34f683c"]]},{"id":"e35f37deeae94860","type":"comment","z":"bdd7be38.d3b55","name":"Set the queue timeout to larger than you ever expect the process to take","info":"OK, here is a simple flow which allows a sequence of nodes to be protected so that only one message is allowed in at a time. It uses a Delay node in Rate Limit mode to queue them, but releases them, using the Flush mechanism, as soon as the previous one is complete. Set the timeout in the delay node to a value greater than the maximum time you expect it ever to take. If for some reason the flow locks up (a message fails to indicate completion) then the next message will be released after that time.\nMake sure that you trap any errors and feed back to the Flush node when you have handled the error. Also make sure only one message is fed back for each one in, even in the case of errors.","x":270,"y":720,"wires":[]}]

Not sure if it is the best way, but I have to say, it is a better way than I provided. :+1:

Just to make it clear, it was not my invention, it was @dceejay, I think, that originally suggested the technique.


This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.