Securing a single-use resource

My flow has a single-use resource: a modem on a serial port.

From time to time, something triggers the flow and it needs to send a message on the modem.
Now, it could happen that I get two triggers right after each other, which would start hammering the modem from both sides; obviously not good.

In front of the serial port I have a Node-RED Function node with a piece of JavaScript. So all I need is to set a lock at the beginning and clear it when done.
It could be done with a node context variable, but context seems to lack an atomic test-and-set method.
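For reference, the check-then-set idea can be sketched as below. A Function node's code runs on Node.js's single event loop, so a `get()` followed immediately by a `set()` with no `await` between them cannot be interleaved by another message; `context` is stubbed here with a `Map` so the snippet runs outside Node-RED, and the helper names are illustrative:

```javascript
// Stub of Node-RED's node context so this runs standalone; inside a real
// Function node the built-in "context" object provides get()/set().
const store = new Map();
const context = {
    get: (k) => store.get(k),
    set: (k, v) => store.set(k, v),
};

// Effectively a test-and-set: no await between get() and set(), so no
// other message's code can run in between on the single-threaded loop.
function tryAcquire() {
    if (context.get("lock")) return false; // busy: caller drops or queues
    context.set("lock", true);
    return true;
}

function release() {
    context.set("lock", false);
}

console.log(tryAcquire()); // true  - first caller gets the lock
console.log(tryAcquire()); // false - second caller is refused
release();
console.log(tryAcquire()); // true  - lock can be taken again
```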

Suggestions?

Welcome to the forum @mogul

How do you know when it is done?

It pulls a sequence from a database and runs this sequence with a for loop. When the last item has finished, it is done and the modem can be reused for another sequence.

This flow forces messages to be handled sequentially, not allowing the next one through till the previous one indicates that it is complete. It uses a Delay node in a mode where messages can be released on demand. It has been well tested over several years.

Open the Comment node to see detailed instructions on how to use it.
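The release mechanism in the flow below is the "Flush" Function node: it returns `{ flush: 1 }`, which tells the upstream Delay node (in rate-limit mode) to release one queued message. Wrapped in a helper here only so the snippet runs standalone; in the actual node the body is just the return statement:

```javascript
// Sketch of the "Flush" Function node from the flow below: sending
// { flush: 1 } into a Delay node in rate-limit mode releases exactly
// one queued message.
function buildFlushMsg() {
    return { flush: 1 };
}

console.log(buildFlushMsg()); // { flush: 1 }
```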

[{"id":"b6630ded2db7d680","type":"inject","z":"bdd7be38.d3b55","name":"","props":[{"p":"payload"},{"p":"topic","vt":"str"}],"repeat":"","crontab":"","once":false,"onceDelay":0.1,"topic":"","payload":"","payloadType":"date","x":140,"y":480,"wires":[["ed63ee4225312b40"]]},{"id":"ed63ee4225312b40","type":"delay","z":"bdd7be38.d3b55","name":"Queue","pauseType":"rate","timeout":"5","timeoutUnits":"seconds","rate":"1","nbRateUnits":"1","rateUnits":"minute","randomFirst":"1","randomLast":"5","randomUnits":"seconds","drop":false,"allowrate":false,"outputs":1,"x":310,"y":480,"wires":[["d4d479e614e82a49","7eb760e019b512dc"]]},{"id":"a82c03c3d34f683c","type":"delay","z":"bdd7be38.d3b55","name":"Some more stuff to do","pauseType":"delay","timeout":"5","timeoutUnits":"seconds","rate":"1","nbRateUnits":"1","rateUnits":"second","randomFirst":"1","randomLast":"5","randomUnits":"seconds","drop":false,"allowrate":false,"outputs":1,"x":800,"y":480,"wires":[["7c6253e5d34769ac","b23cea1074943d4d"]]},{"id":"2128a855234c1016","type":"link in","z":"bdd7be38.d3b55","name":"link in 1","links":["7c6253e5d34769ac"],"x":95,"y":560,"wires":[["3a9faf0a95b4a9bb"]]},{"id":"7c6253e5d34769ac","type":"link out","z":"bdd7be38.d3b55","name":"link out 1","mode":"link","links":["2128a855234c1016"],"x":665,"y":560,"wires":[]},{"id":"b23cea1074943d4d","type":"debug","z":"bdd7be38.d3b55","name":"OUT","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"payload","targetType":"msg","statusVal":"","statusType":"auto","x":670,"y":400,"wires":[]},{"id":"d4d479e614e82a49","type":"debug","z":"bdd7be38.d3b55","name":"IN","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"payload","targetType":"msg","statusVal":"","statusType":"auto","x":470,"y":400,"wires":[]},{"id":"3a9faf0a95b4a9bb","type":"function","z":"bdd7be38.d3b55","name":"Flush","func":"return {flush: 1}","outputs":1,"noerr":0,"initialize":"","finalize":"","libs":[],"x":190,"y":560,"wires":[["ed63ee4225312b40"]]},{"id":"7eb760e019b512dc","type":"function","z":"bdd7be38.d3b55","name":"Some functions to be performed","func":"\nreturn msg;","outputs":1,"noerr":0,"initialize":"","finalize":"","libs":[],"x":550,"y":480,"wires":[["a82c03c3d34f683c"]]},{"id":"e35f37deeae94860","type":"comment","z":"bdd7be38.d3b55","name":"Set the queue timeout to larger than you ever expect the process to take","info":"This is a simple flow which allows a sequence of nodes to be \nprotected so that only one message is allowed in at a time. \nIt uses a Delay node in Rate Limit mode to queue them, but \nreleases them, using the Flush mechanism, as soon as the \nprevious one is complete. Set the timeout in the delay node to \na value greater than the maximum time you expect it ever to take. \nIf for some reason the flow locks up (a message fails to indicate \ncompletion) then the next message will be released after that time.\n\nMake sure that you trap any errors and feed back to the Flush \nnode when you have handled the error. Also make sure only one \nmessage is fed back for each one in, even in the case of errors.","x":270,"y":360,"wires":[]}]

Ah yes, that looks like it could work.

Then you have the warning:

Make sure that you trap any errors and feed back to the Flush 
node when you have handled the error. Also make sure only one 
message is fed back for each one in, even in the case of errors.

What is the best way to do that? Wrap all my code in a try-catch construct in the JavaScript function, or do more elegant ways exist?

I simplified it even further:


and the function code looks like this:

// Test-and-set: refuse the message if a sequence is already running
if (context.get("lock")) {
    node.warn("LOCKED");
    return null;
}

node.warn("SET LOCK");
context.set("lock", true);

try {
    // Sequence normally pulled from the database; hard-coded here for testing
    const seq = [
        {cmd: "foo", dly: 2000},
        {cmd: "bar", dly: 500},
        {sometimes_fail: 0.5},
        {cmd: "baz", dly: 1000},
    ];

    const delay = ms => new Promise(resolve => setTimeout(resolve, ms));

    for (const itm of seq) {
        if ("cmd" in itm) {
            node.send({payload: itm.cmd});
        }

        if ("dly" in itm) {
            await delay(itm.dly);
        }

        if ("sometimes_fail" in itm) {
            if (Math.random() > itm.sometimes_fail) {
                node.warn("FAIL");
                random_shit(123); // deliberately undefined: throws to simulate an error
            }
        }
    }
} finally {
    // Runs whether the sequence completed or threw, so the lock is always released
    node.warn("RELEASE LOCK");
    context.set("lock", false);
}

The idea is that the test-and-set lock at the beginning, though not truly atomic, will still behave atomically in the real world because of the 1 msg/src rate limit in front of it.

Comments?

I don't know enough about async code in JavaScript (half the point of Node-RED is that it is 'low code' and usually does not require in-depth knowledge of JavaScript). Did you realise that if a message arrives while the lock is set, that message will be discarded and lost forever?

If you require messages to be queued use the flow I posted.

Ah yes, that's actually desired. If requests come in too fast, I respond with a "try again later" message.

And yes, I kinda ignored the "low code" memo.

The code you posted does not appear to be doing that.
Is there a reason you do not use the posted flow, so that the user will not have to try again later?

Triggering the scheduler will be done by an automated process. If it for some reason fires faster than the jobs can be processed, there is no point in queuing up more work; the only way to mitigate that will be to reduce the workload.
So I will most likely just have a few monitoring metrics telling me how much idle time is available and, on the other hand, how many requests got dropped.
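The dropped-request metric could be a simple counter in context, bumped whenever a message is refused because the lock is held. A minimal sketch, assuming illustrative names (`dropped`, `recordDrop`) that are not part of the original flow; `context` is stubbed so the snippet runs outside Node-RED:

```javascript
// Stub of Node-RED context so this runs standalone; in a Function node
// the built-in context (or flow/global context) would be used instead.
const store = new Map();
const context = {
    get: (k) => store.get(k),
    set: (k, v) => store.set(k, v),
};

// Bump the dropped-request counter; call this where the locked message
// is refused, so a later node can read and expose the metric.
function recordDrop() {
    const n = (context.get("dropped") || 0) + 1;
    context.set("dropped", n);
    return n;
}

// Simulate two refused requests:
recordDrop();
recordDrop();
console.log(context.get("dropped")); // 2
```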

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.