Delay node, with rate limit: how to check if every single message reached "the end" of the flow

Hi all,
it's hard for me to ask this question because I know it's a fairly complex scenario.

But... let's try.

I've got a split node followed by a delay node in rate-limit mode that releases one message every 10 seconds.

For each message I need (and I can't do it any other way) to call an API, then do a little processing, and so on.

My problem is that, obviously, the API call is asynchronous, so the response time varies. Sometimes it takes longer than 10 seconds, and then the next message is released before the previous one has finished.

That by itself is not a problem, but I need to be sure that every single message is processed.
So I check msg.parts to detect when the loop reaches the last message but, for the reason above, it can happen that a message is not processed "in time" and the flow moves on to the next parts without doing the correct processing.

So my question is: how can I check, at every iteration, whether ALL MESSAGES have been processed?

Thank you all.

Do you mean that, ideally, you want to not release the next message until the previous one has completed? If so then the current beta version of Node-RED (v3.0 beta 4) has a fix to the delay node that allows you to feed back a message when the flow has completed, to release the next one from the delay node. See Wait for payload to finish flow before passing another payload - #33 by Colin for a flow that works in the beta version (it didn't at the time of that post). Set the delay node to rate limit with a large interval so that the timer only releases a message if one gets lost. Otherwise the next one will be released as soon as the previous one says it is complete. Note that the message sent back must be an empty message except for msg.flush.
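For illustration, the feedback could be a function node at the end of the flow, wired back to the delay node's input, returning an otherwise-empty message that carries only msg.flush (a minimal sketch of the approach described above, not a tested flow):

// End-of-flow function node, wired back to the delay node's input.
// An otherwise-empty message with msg.flush set tells the rate-limited
// delay node to release the next queued message.
return { flush: 1 };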

Hi Colin,
yes this way could be a solution.

For me it would be better to have something I can check at a specific position in the flow (e.g. with a switch node). At the moment, for example, I check something like this:

// msg.parts is set by the split node: index is zero-based, count is the total
if (msg.parts.count === (msg.parts.index + 1)) {
    node.warn("ESCO");
    msg.exitFlow = true;  // flag checked later (e.g. by a switch node) to leave the loop
}
return msg;

The problem is that I could reach the "last element" while some previous message has still not been processed (because of the async calls).

Check what?

Look at my code example.
Something like "x messages of y processed".
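For example, one way to get an "x of y processed" check would be a completion counter in flow context at the end of the processing chain (just a sketch, assuming every message passes through this node exactly once after its API calls have finished; the "done" context key and the msg.exitFlow flag are made up):

// Count completed messages in flow context instead of relying on msg.parts.index
var done = (flow.get("done") || 0) + 1;
flow.set("done", done);

node.status({ text: done + " of " + msg.parts.count + " processed" });

if (done === msg.parts.count) {
    flow.set("done", 0);    // reset for the next batch
    msg.exitFlow = true;    // every message has now been processed
}
return msg;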

That sounds horribly complicated, do you really need that? I can't immediately see an easy way to achieve it.

Yes, sadly I need that.

I was thinking of using node-red-contrib-loop to do the same thing with more control.

It could be a solution, but for that I need to completely change my flow.

At the moment this could be the best solution imho.

Why do you need that?

It's complicated to explain, but I need to call an API to read some data related to the message, and then call two other APIs that basically write changes to a database (I can't change the backend logic; otherwise it would be fine).

So I need every message to call the 3 APIs before "exiting" the loop and continuing with the rest of the flow.

I can't summarize it any better, sorry. :slight_smile:

I believe there was also a discussion last week about the semaphore node (in the same discussion about modifying the delay node, if I recall).

Possibly the semaphore node would be better / give you more control.

From what you seem to be saying:

  1. A message comes in - you then need to use the contents of that message to call an API, once that API returns you need to call another two APIs

  2. Process the next message (which may have arrived before 1. has finished) - so you need to queue messages until you get to them

Question: - How do you handle it if any of the API calls fail - what is then meant to happen ?
Are the messages still in the queue valid at that point or do they need to be flushed ?

It also appears that GitHub - drmibell/node-red-contrib-queue-gate: A Node-RED node for controlling message flow (with queueing) would give you all the capabilities you require.
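For example, with the q-gate node in queueing mode, the end of the flow could send a control message back to release the next queued message (a sketch based on my reading of that node's README; it assumes the default control topic and the "trigger" command):

// End-of-flow function node, wired back to the q-gate node's input.
// With the gate in queueing mode, a control message with the "trigger"
// command should release the oldest message in the queue.
return { topic: "control", payload: "trigger" };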

Craig

So what is wrong with using the delay node as I suggested, or semaphore nodes to achieve the same thing? If it is that you want to run the three APIs in parallel to increase the throughput then put a delay node loop around each one, which will stop messages overtaking each other as they go through.

Unfortunately it is not simple (maybe not possible) to use that node for this purpose, even though it was intended to be used in that way. I was unable to construct a simple flow using it that did not suffer from problems due to race conditions, when messages arrived at the front at inopportune moments.

So is there a problem with the node (as it sounded perfect for this role), or are the OP's requirements unusual and not in line with how it was envisaged to work?

Craig

There isn't a problem with the node, in the sense that it does do what the docs say it should do.
The problem with using it for the situation here relates, if I remember correctly, to handling the case where a new message comes in at the front just after the previous one has been completed. At that stage I couldn't see an obvious enhancement to the node that would solve the problem.

Aah - from my quick reading it seemed there was queue management available within the node - I assumed this would allow you to tell the queue when processing had finished and to release the next queued message.

@drmibell - could you comment on this, as Colin has a lot more knowledge than me and this appears to be your baby?

Craig
