Opinions please - delay - queue option

Hi all,

I was just musing on a possible enhancement to the delay node, to allow it to be a bit more of a controllable queue (like some of the dedicated queue nodes out there). Two possible ideas crossed my mind.

  1. To allow the existing msg.flush option to take a numeric parameter (the current behaviour is that it just needs to exist) - and if so to release that many messages from the queue (up to the depth of the queue, of course), so msg.flush = 1 would just release the next message, etc... If this was combined with a long timeout (say 20 days) then effectively that becomes "until released", but still allows a max timeout if required. (A rough sketch of this is shown just after this list.)

  2. To add a new mode to the delay node's mode sub-menu - "Hold messages until msg.release" - so setting a large time would not be necessary, but then you always have to ensure you release them (or flush, or reset).
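
For illustration, a minimal sketch of how option 1 might behave (the function and variable names here are made up for the example, not the actual delay node source):

```javascript
// Sketch of option 1: release up to msg.flush messages from the queue.
// A non-numeric (or absent) msg.flush keeps the current "flush everything" behaviour.
function handleFlush(queue, msg, send) {
    const count = (typeof msg.flush === "number")
        ? Math.min(Math.max(0, Math.floor(msg.flush)), queue.length)
        : queue.length;

    for (let i = 0; i < count; i++) {
        send(queue.shift());    // oldest message first (FIFO)
    }
    return queue.length;        // remaining depth - handy for node status
}
```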

I think I prefer option 1 as it's a "simple" enhancement and uses an existing property (though it would possibly break someone if they really used a number as the value for msg.flush...), and could/would work across all the other delay modes that accept flush, vs a specific new mode.

But as per the title - opinions please....

One of the scenarios I've needed a few times, and which has been a bit more complicated than it should be, is 'only allow one message past, and then block until I say so'.

In other words, with scenario 1 as you describe, consider the following case:

  • a message arrives at the Delay node. It gets queued up.
  • something decides to poke the Delay node (msg.flush=1) to get the next message. The Delay node then releases that message.
  • At some point, that something decides it is ready for the next message, so pokes the Delay node again. But nothing is currently queued, so nothing is released.

In that case, the something would now need to periodically poke the node to see if there's any work queued up. That is non-trivial logic to create.
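
For illustration, the "something" here might be a Function node wired in a loop back to the Delay node, along these lines (processItem is just an assumed placeholder for whatever does the work):

```javascript
// Hypothetical "worker" Function node, wired in a loop back to the Delay node.
// When it finishes with the current item it pokes the queue for the next one.
// If the queue happens to be empty, nothing comes back - which is exactly the
// polling problem described above.
const finished = processItem(msg.payload);   // processItem: assumed helper
if (finished) {
    return { flush: 1 };                     // ask the Delay node for the next message
}
return null;
```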

I guess it's more of a 'Gate' than a 'Queue' semantic that I'm thinking of - so maybe this is out of scope (at the moment... although, if we're adding one, the other could be considered).

So yes, I think there is value in exploring these ideas - and in considering real applications for how those modes would work in practice.

Currently - (in my head) - would that not just work with option 1 if you set the delay to be your "timeout" period, which is hopefully longer than the normal processing time for the something? So the something would empty the queue - ask for the next message - get nothing and "stall" - then the next message arrives - and after the timer timeout it gets released anyway - thus kicking off the something again.

So I'm still preferring option 1 - and I'd like to get it into Node-RED v2 if possible. The intent is that the base functionality will remain as-is, but if msg.flush is supplied with a numeric value then only (or up to) that many items would be released immediately from the queue.

I'd also like to change the status messages so that instead of being mostly blank - or just "flushed" or "reset" text - they change to reflect the number of items in the queue. I suspect that may be more useful if you are indeed picking things off the queue as above... Hence this would be a "breaking change", so I want to get it into 2.0 if possible...
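
In node terms, something like this (the exact wording of the status text is just a suggestion):

```javascript
// Sketch: report the current queue depth in the node status instead of a
// mostly blank / "flushed" / "reset" message.
node.status({ fill: "blue", shape: "dot", text: queue.length + " queued" });
```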

Any more thoughts ?


That sounds reasonable as an incremental enhancement at this point of the 2.0 release.

I still think there are possible 'bigger' ideas to consider. Whilst you could sort-of achieve the scenario I outlined, it wouldn't be an ideal approach. It means there is a delay applied when perhaps you don't want it applied. But let's save that for another day.

Well - when we are in rate limit mode (which is what I think you could use for the above scenario) - if we do get to an empty state then we do clear the interval timer - so the next message to arrive should go straight through... (and then kick off the timer again - until flushed)
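
Roughly, the behaviour being described is along these lines (a simplified sketch, not the actual node source; rateMs stands in for the configured interval):

```javascript
// Simplified sketch of rate limit mode with queueing (not the actual source).
const rateMs = 1000;   // stands in for the configured rate interval
let queue = [];
let timer = null;

function onInput(msg, send) {
    if (!timer) {
        send(msg);                        // queue empty: pass straight through...
        timer = setInterval(() => {       // ...then start rate limiting
            if (queue.length > 0) {
                send(queue.shift());
            } else {
                clearInterval(timer);     // drained: stop the timer so the next
                timer = null;             // arrival goes straight through again
            }
        }, rateMs);
    } else {
        queue.push(msg);                  // timer running: queue behind it
    }
}
```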

Initial Pull Request raised - Delay node updates to added numeric flush control for releasing things from queue by dceejay · Pull Request #3059 · node-red/node-red · GitHub

I assisted with the addition of the peek and drop features of node-red-contrib-simple-gate, which provide a similar capability. Unfortunately, when I came to use the feature to implement a guaranteed-delivery flow I came upon race conditions that meant it was virtually impossible to use the node, and I had to resort to using a function node instead. I would be very interested to see a flow that makes use of the feature proposed here, as I suspect the use cases may be rather limited. Or perhaps my use case was particularly difficult, or I just didn't manage to see a simple solution using the node.
This is the flow I ended up with for my requirement. Guaranteed delivery of data (upload, email etc) across a network (flow) - Node-RED


Well, it should help with the queuing part - but just off the top of my head I can see the difficulty is on the failure side - in that you need to be able to repeat the last message if it failed. The current node - once it has sent a message - no longer has any knowledge of it - so retaining it for a retry would need to occur somewhere else - however on success it should be fine.

Ah yes, you aren't providing the peek feature, are you.

Indeed no - this is still primarily a delay/rate limit node.

though there is potential for "a cunning plan"...

@colin - The way the existing rate limit mode works is by pushing messages onto an array, from which they are then shifted off on a timer tick. So... it wouldn't be hard to add a flag to say: instead of pushing to the end of the array, push to the front (ie unshift) - so that message suddenly becomes the next to be sent. And of course this could be combined with flush:1 to send it immediately. So in the case of an error, a handler could jam the failed message back into the front of the queue for a retry.
I'm thinking the property could be msg.lifo (as in Last In, First Out), as indeed it could be used to create a LIFO queue as well...

Thoughts ?

(hmm actually only 3 lines of code... :slight_smile: )
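
Roughly along these lines (an illustrative sketch, not the actual patch):

```javascript
// On input: a failed message can be jammed back onto the front of the queue.
if (msg.lifo) {
    queue.unshift(msg);   // becomes the next message to be released
} else {
    queue.push(msg);      // normal FIFO behaviour
}
// Combined with msg.flush = 1, the sender can force an immediate retry.
```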


I'm worried this is becoming a slippery slope of feature creep this close to the 2.0 release.

If you are expanding the functionality I would like to spend the proper time to consider the full scope rather than piecemeal solutions.

That means a design that shows how these proposals can be used to solve real scenarios.

OK, I won't add to the existing PR. I do want to get that in if possible due to the status changes, but this extra step can wait.

I should mention a similar issue that may affect the queue-gate node. I have been discussing new features that would make the node into a more advanced queueing system. These would include at least (in terms of customers queuing at retail or banking establishments) reneging, where the customer leaves the queue before receiving service, and balking (UK, sometimes 'baulking'), where the customer arrives at the queue but refuses to join. I'll start a separate thread to ask about user interest and implementation suggestions, but I wanted to mention it here to get thoughts on what would be a healthy overlap of function between queue-gate and the core delay node.


Interesting - what would the use cases for those be ? How would an item have enough "self awareness" to perform those actions ? In the case of baulking, is that something the queue has to deal with, or is it just outside of the queue as it hasn't actually been accepted yet... ?

I don't have a list of use cases yet.

The suggestion for a general reneging feature came from a user who wants to be able to drop phone calls from a wait queue when the caller hangs up. This would be done with a control message that includes a key-value pair that would have to match a property of the queued message. Another form of reneging would be time based, as already implemented in the time-to-live feature of the simple-message-queue node. This requires the message to have a property such as msg.ttl that specifies the length of time the node should hold it before deleting it from the queue. The q-gate node currently allows the message at the head of the queue to renege when the drop command is received, although as @colin says, the peek/drop command pair was added for a special purpose.
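
As a purely hypothetical illustration of the key-value form of reneging (the control message shape here is an assumption, not the existing q-gate API):

```javascript
// Drop any queued message whose property matches the control message's
// key/value pair, e.g. control.renege = { key: "callId", value: "abc-123" }.
function renege(queue, control) {
    const { key, value } = control.renege;
    return queue.filter(m => m[key] !== value);   // keep everything that doesn't match
}
```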

Balking could be used in flows that provide load balancing or message prioritization. The message would need a property such as msg.queueLimit that would tell the node not to accept it if there are more than the specified number of messages already in the queue. Clearly, the second output described in the GitHub issue would be important here.
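
And a similarly hypothetical sketch of balking against msg.queueLimit, with rejected messages routed to a second output:

```javascript
// Refuse a message if the queue is already at the limit the message declares.
function enqueueOrBalk(queue, msg, send) {
    if (typeof msg.queueLimit === "number" && queue.length >= msg.queueLimit) {
        msg.action = "balked";    // tag it so downstream logic knows why it was rejected
        send([null, msg]);        // route to a second output instead of the queue
        return false;
    }
    queue.push(msg);
    return true;
}
```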

I was always taught to focus on use cases and user requirements, but q-gate has reached its current state by making piecemeal enhancements in response to user requests. It is now at a point where there is already some friction between existing features, and it feels like adding new ones will become increasingly difficult. Whether or not I end up making major changes, I thought it would be worthwhile to step back and look at all the possibilities.

Hi,
Interesting. I could perhaps see a use for msg.ttl, so a message could self-destruct if it sits around too long - but I can see that needing to create lots more "management" info... as the system is then bound to want to know things like how many messages were dropped and when (was it a burst of inputs, or the queue being slow, that caused the pile-up?)

Exactly right. That's why the discussion of this issue agreed on the need for a second (possibly optional) output. That output would send any message that fails to enter the queue or leaves it by any method other than standard dequeueing. Those messages would be given an additional property (e.g., msg.action) indicating what had occurred. (I identified six such actions, and I hope the list is complete.) Rather than have the node itself compile or act on "management" information, it would be down to the user to analyze the messages from the second output and deal with them appropriately.