Delay node set to rate limit omits messages unexpectedly in case of bursts

I just ran into behaviour of the delay node that caught my attention, because it seems rather unintuitive.
Could somebody confirm it, and maybe even propose a way to implement rate limiting that allows bursts?

The delay node can do rate limiting, but it cannot handle bursts.
It seems that whatever I set, the rate limit is applied as (1 msg / x sec).

Test flow:

[{"id":"9c74974b13af27c0","type":"inject","z":"e33682730616c2c8","name":"","props":[{"p":"payload"},{"p":"topic","vt":"str"}],"repeat":"","crontab":"","once":false,"onceDelay":0.1,"topic":"","payload":"","payloadType":"date","x":600,"y":100,"wires":[["c219e59f8a28360f","2164b5554c103e02"]]},{"id":"658ecb71f38d1575","type":"debug","z":"e33682730616c2c8","name":"ok","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"payload","targetType":"msg","statusVal":"","statusType":"auto","x":1230,"y":80,"wires":[]},{"id":"5ba93a1744a8a0f2","type":"delay","z":"e33682730616c2c8","name":"rate limit 10 msg/m","pauseType":"rate","timeout":"5","timeoutUnits":"seconds","rate":"10","nbRateUnits":"1","rateUnits":"minute","randomFirst":"1","randomLast":"5","randomUnits":"seconds","drop":true,"allowrate":false,"outputs":2,"x":1030,"y":100,"wires":[["658ecb71f38d1575"],["1728ec1f7f03c6ae"]]},{"id":"1728ec1f7f03c6ae","type":"debug","z":"e33682730616c2c8","name":"tooo much","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"payload","targetType":"msg","statusVal":"","statusType":"auto","x":1250,"y":120,"wires":[]},{"id":"c219e59f8a28360f","type":"trigger","z":"e33682730616c2c8","name":"","op1":"trigger","op2":"5.99 sec past","op1type":"str","op2type":"str","duration":"5990","extend":false,"overrideDelay":false,"units":"ms","reset":"","bytopic":"all","topic":"topic","outputs":1,"x":780,"y":100,"wires":[["5ba93a1744a8a0f2"]]},{"id":"2164b5554c103e02","type":"trigger","z":"e33682730616c2c8","name":"","op1":"trigger","op2":"6 sec past","op1type":"str","op2type":"str","duration":"6000","extend":false,"overrideDelay":false,"units":"ms","reset":"","bytopic":"all","topic":"topic","outputs":2,"x":780,"y":160,"wires":[[],["5ba93a1744a8a0f2"]]}]

Setup of delay node

The setup seems to impose a rate limit of 1 msg every 6 sec.

Pushing the button triggers 1 msg immediately, which is passed through the delay node.
The 2nd msg, 5.99 s later, is treated as above the rate and hence is limited.
The 3rd msg, 6 s later, is not affected by the rate limit.

I would have expected that within a sliding window of 1 minute, 10 msg would always be forwarded without rate limiting, so a short burst of msg would be fine but a constantly high msg rate would be limited.

Is there an easy fix?

Solved. Just in case anybody is looking for a solution later:

demo flow:

[{"id":"4a7b66ac66eed842","type":"inject","z":"e33682730616c2c8","name":"","props":[{"p":"payload"},{"p":"topic","vt":"str"}],"repeat":"","crontab":"","once":false,"onceDelay":0.1,"topic":"","payload":"","payloadType":"date","x":560,"y":80,"wires":[["6edc9558d24bfba5","6c16d58c4c30a02c"]]},{"id":"8824a46e7097f6b6","type":"debug","z":"e33682730616c2c8","name":"ok","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"payload","targetType":"msg","statusVal":"","statusType":"auto","x":1190,"y":60,"wires":[]},{"id":"8d79389959eafd39","type":"delay","z":"e33682730616c2c8","name":"rate limit 10 msg/m","pauseType":"rate","timeout":"5","timeoutUnits":"seconds","rate":"10","nbRateUnits":"1","rateUnits":"minute","randomFirst":"1","randomLast":"5","randomUnits":"seconds","drop":true,"allowrate":false,"outputs":2,"x":990,"y":80,"wires":[["8824a46e7097f6b6"],["5a490e1172ca48e4"]]},{"id":"5a490e1172ca48e4","type":"debug","z":"e33682730616c2c8","name":"tooo much","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"payload","targetType":"msg","statusVal":"","statusType":"auto","x":1210,"y":100,"wires":[]},{"id":"6edc9558d24bfba5","type":"trigger","z":"e33682730616c2c8","name":"","op1":"trigger","op2":"5.99 sec past","op1type":"str","op2type":"str","duration":"5990","extend":false,"overrideDelay":false,"units":"ms","reset":"","bytopic":"all","topic":"topic","outputs":1,"x":740,"y":80,"wires":[["a90f1a228b49ad13"]]},{"id":"6c16d58c4c30a02c","type":"trigger","z":"e33682730616c2c8","name":"","op1":"trigger","op2":"6 sec past","op1type":"str","op2type":"str","duration":"6000","extend":false,"overrideDelay":false,"units":"ms","reset":"","bytopic":"all","topic":"topic","outputs":2,"x":740,"y":140,"wires":[[],["a90f1a228b49ad13"]]},{"id":"e062e54bfb81f57e","type":"function","z":"e33682730616c2c8","name":"rate limit 10 msg/m","func":"const NoOfMsg = 10;\nconst WindowInMilliSec = 60000;\nconst AddCurrentCount = true;\n\nfunction addTimeout() {\n 
   setTimeout(() => {\n        let currentCount = context.get('msgcounter') || 0;\n        context.set('msgcounter', currentCount -1 );\n    }, WindowInMilliSec);\n}\n\nlet currentCount = context.get('msgcounter') || 0;\nif (currentCount < NoOfMsg) {\n    context.set('msgcounter', currentCount + 1);\n    addTimeout();\n    if (AddCurrentCount) msg.CurrentCount = currentCount + 1;\n    return [msg, null];\n} else {\n    if (AddCurrentCount) msg.CurrentCount = currentCount;\n    return [null, msg];\n}\n","outputs":2,"noerr":0,"initialize":"","finalize":"","libs":[],"x":990,"y":180,"wires":[["1672a7135b45e6c2"],["5b42fc12c09ea3f2"]]},{"id":"1672a7135b45e6c2","type":"debug","z":"e33682730616c2c8","name":"FUNCTION ok","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"true","targetType":"full","statusVal":"","statusType":"auto","x":1220,"y":160,"wires":[]},{"id":"5b42fc12c09ea3f2","type":"debug","z":"e33682730616c2c8","name":"FUNCTION tooo much","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"true","targetType":"full","statusVal":"","statusType":"auto","x":1240,"y":200,"wires":[]},{"id":"3079243051dd4a07","type":"inject","z":"e33682730616c2c8","name":"","props":[{"p":"payload"},{"p":"topic","vt":"str"}],"repeat":"","crontab":"","once":false,"onceDelay":0.1,"topic":"","payload":"","payloadType":"date","x":560,"y":180,"wires":[["e062e54bfb81f57e"]]},{"id":"a90f1a228b49ad13","type":"junction","z":"e33682730616c2c8","x":860,"y":120,"wires":[["e062e54bfb81f57e","8d79389959eafd39"]]}]

function node:

const NoOfMsg = 10;             // max messages allowed in the window
const WindowInMilliSec = 60000; // sliding window length
const AddCurrentCount = true;   // attach the counter to the msg for debugging

// decrement the counter once this message has aged out of the window
function addTimeout() {
    setTimeout(() => {
        let currentCount = context.get('msgcounter') || 0;
        context.set('msgcounter', currentCount - 1);
    }, WindowInMilliSec);
}

let currentCount = context.get('msgcounter') || 0;
if (currentCount < NoOfMsg) {
    // still within the limit: count it and pass it to the first output
    context.set('msgcounter', currentCount + 1);
    addTimeout();
    if (AddCurrentCount) msg.CurrentCount = currentCount + 1;
    return [msg, null];
} else {
    // over the limit: divert it to the second output
    if (AddCurrentCount) msg.CurrentCount = currentCount;
    return [null, msg];
}

The function keeps a counter that is increased for every message sent and decreased after the window time has passed. Once the counter has reached the rate limit, the message is sent to the second output.

A similar question has been raised before, without a final answer. I think what you see is the result of how the delay node is designed to interpret "rate limit." A rate of (x messages per y minutes) is interpreted as 1 message every y/x minutes. Regardless of how many messages are received, they will not be transmitted less than y/x minutes apart. You can select x and y however you like, but as long as the ratio is constant, the result will be the same. There are other ways you could interpret "rate limit," but this one was chosen, and it is as reasonable as any other.
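To illustrate that interpretation with a quick sketch (this is not the delay node's actual code, just the arithmetic it implies):

```javascript
// A rate of x messages per y milliseconds is enforced as
// one message every y/x milliseconds, regardless of how x and y
// are chosen individually.
function minIntervalMs(messages, periodMs) {
    return periodMs / messages;
}

// "10 msg per minute" and "1 msg per 6 seconds" yield the same spacing:
console.log(minIntervalMs(10, 60000)); // 6000
console.log(minIntervalMs(1, 6000));   // 6000
```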

Thank you for the linked thread. Interesting indeed.

I can see how one would also find the current implementation "correct"; apparently I thought differently.
But as far as I am concerned, we can close the case at this point, because I have an implementation that is fairly simple, and I tested it with a msg rate of 1 msg/ms and it works.

On the other hand, if the developers would like to add it to the standard delay node, why not? The implementation does not seem that complicated (see the function node). If it is added as a new option, so as not to interfere with existing flows, why not.

I think the basic use for rate limiting is to give downstream nodes time to process a message before the next one arrives. Allowing messages to bunch up in exchange for blocks of dead time does not really do the job -- unless the bunches are queued up, which is exactly what the node does now. Your use case may be unusual, and you have a working solution. Adding an option and trying to explain it might just confuse users.

No, because that is taken care of by Node-RED. That is the basis of Node-RED: forwarding messages to the next node and ensuring that no message is lost. And if they pile up in the queue of a node, that must not be a problem.

I do not consider the use case unusual, and I am sure that other Node-RED users would understand the difference between the two approaches. Besides, we allow function nodes with JS even though not every user can program in JS.

The more I think about it, the more counterintuitive I find the current implementation (although I read somewhere that it is documented correctly).
The moment I set the number of messages to a value greater than 1, I expect that this number of messages will go through in the defined time. But changing the behaviour of the existing delay node now is impossible, of course; existing flows must not suddenly behave differently.

To make the point that my use case is not that special, consider all the cases in which you expect your messages to go through uninterrupted normally, and only rate-limited or dropped if there is an unusual amount of messages. Apparently @sortidocorps had a similar use case not so long ago.

Examples:

  1. Receiving state changes from sensors. Imagine state changes are rare and you would like to get them all as soon as possible. You want every state change, but if there are more than 20 in a row, you assume something strange is going on, and to protect the downstream flow, the next 5 min should be discarded. The current implementation would turn 20 msg/5 min into 1 msg/15 sec, which has a completely different meaning.

  2. Imagine an implementation of a low-volume API or a chatbot that has to react to user interactions. Rate limiting could be used to protect against spamming of the API/bot (regardless of malicious intent or simply wrong usage). Again you want all msg to be processed as fast as possible; only if there is a longer burst would you want them rate limited.

Frankly, I have some flows that were using rate limiting with a number of messages > 1 and a larger time. As far as I can see, most are there to protect the downstream flow from an unintended, larger-than-planned message volume. I think that is not that unusual.
And I also think that if you asked people, a fair number would lean towards my understanding: that e.g. "10 msg every minute" means something different than "1 msg every 6 seconds".

You cannot change the behaviour now, but I ask you to recognise the difference and the potential usefulness of both interpretations.

I fully understand your use case. You are not the first to describe something similar or to try to deal with it using the delay node, so I won't continue the discussion except to respond to a few of your comments.

No, that's called a memory leak. That is why the nodeMessageBufferMaxLength parameter exists and can be set in the settings.js file.

That value is the maximum number of messages that can pass in the specified time. If messages continue to arrive at a rate greater than the maximum, some will be dropped, sent elsewhere, or cause the queue to grow without limit, crashing the system.
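For reference, that safety valve lives in the runtime's settings.js. A minimal sketch (the value 5000 here is just an illustrative choice, not a recommendation):

```javascript
// settings.js (excerpt) -- caps how many messages a single node may
// buffer internally. The default of 0 means "no limit", which is what
// allows an unbounded queue to exhaust memory.
module.exports = {
    // ... other settings ...
    nodeMessageBufferMaxLength: 5000,
};
```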

Sending commands to actuators. Imagine the response is slow and commands must be sent at least some time apart.

Of course. The question is which definition we should use for "rate limiting." It is the difference between average and instantaneous rate. The node currently limits the instantaneous rate by ensuring that no two messages pass at less than a certain interval. Limiting average rate would involve allowing messages arbitrarily close together and has some difficulties as to exactly how the interval would be defined.
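The two interpretations can be contrasted with a small simulation (simulated timestamps, illustrative helper names, not Node-RED code):

```javascript
// Instantaneous rate: no two messages may pass closer than
// periodMs/limit apart. Returns a predicate: "may a message at time t pass?"
function makeIntervalLimiter(limit, periodMs) {
    const gap = periodMs / limit;
    let last = -Infinity;
    return (t) => {
        if (t - last < gap) return false;
        last = t;
        return true;
    };
}

// Average rate: at most `limit` messages within any sliding
// window of periodMs, so bursts up to `limit` pass untouched.
function makeWindowLimiter(limit, periodMs) {
    const sent = []; // timestamps of messages already passed
    return (t) => {
        while (sent.length && t - sent[0] >= periodMs) sent.shift();
        if (sent.length >= limit) return false;
        sent.push(t);
        return true;
    };
}

// A burst of 5 messages 1 ms apart, limit "10 per minute":
const instant = makeIntervalLimiter(10, 60000);
const average = makeWindowLimiter(10, 60000);
const burst = [0, 1, 2, 3, 4];
console.log(burst.filter(instant).length); // 1  (only the first passes)
console.log(burst.filter(average).length); // 5  (the whole burst passes)
```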

We agree, as I said in my post. There are several ways to do it, including the two we are discussing here. It might be useful to have a "rate limiter" node with several modes of operation, rather than add features to the delay node.

Used the chance to create a new node :slight_smile:

3 Likes

Good work - Might solve some of the repeated discussions about this topic :wink:

Very nice. Thank you for that.

Did you consider using a fixed rather than a sliding window? For x messages in y minutes, you could give the node a quota of x messages at deployment and start a timer that would renew the quota every y minutes. Each time the node passed a message it would decrement the quota, until at zero it would reject messages or send them to a second output and wait for the quota to be renewed. This might change the result for some message sequences, but I haven't found an example. The fixed window could be useful for synchronization with other processes.
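The fixed-window idea above could be sketched like this (simulated time, illustrative names; the quota renews at fixed boundaries regardless of when messages arrive):

```javascript
// Fixed-window limiter: a quota of `limit` is granted at start time
// `now` and renewed every periodMs. Returns a predicate for time t.
function makeFixedWindowLimiter(limit, periodMs, now = 0) {
    let quota = limit;
    let windowEnd = now + periodMs;
    return (t) => {
        while (t >= windowEnd) { // renew quota at fixed boundaries
            quota = limit;
            windowEnd += periodMs;
        }
        if (quota === 0) return false;
        quota -= 1;
        return true;
    };
}

// Quota of 2 per 1000 ms, starting at t=0:
const lim = makeFixedWindowLimiter(2, 1000);
console.log([0, 100, 200, 999, 1000].map(lim));
// → [true, true, false, false, true]
```

The difference to a sliding window: here the messages at t=999 and t=1000 land in different windows, so the second one passes even though they are only 1 ms apart.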

As far as I understand it, that's how it works now?

I could be mistaken, but as I read the code, it starts a timer each time a message is received, so the window slides along the input stream instead of having a fixed end time.

Hmm, subtle difference I guess :thinking:. I think I would need to see an example of each to understand what difference it would make in the real world :upside_down_face:

Perhaps @ cameo69 could add it for you :smiley:

@drmibell, is that what you mean?
Hope it shows the difference.


Almost. In my suggestion, the second window would begin as soon as the first one closed. So the output still might depend on the exact timing of the "not sent" messages. Still, I'm surprised you could find a good example so quickly. Whether it would be worth adding an option to the ratelimit node is another question.

I'm thinking about some interesting applications of your node, but it will be a while before I can try them out. I'll be back when I have.

I understand what you mean, however, I honestly cannot think of any meaningful application for that (which does not mean that it is not useful, I simply do not see it).

The thing is that it would be an arbitrary slice of time, solely dependent on the first message to arrive or on the time the node started (depending on how you define the starting point). A metronome comes to mind...

What I actually would like to add is the ability to queue the messages that are currently sent to the second exit, and dispatch them whenever they would no longer violate the set rate limit. Then it would be (in my opinion) a viable alternative to the current default node. It would rate limit, allow bursts (within the limit) and allow buffering, hence not losing messages.
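That queueing variant could be sketched as follows (simulated time, illustrative names, a sketch rather than the node's implementation): each over-limit message is held and released at the earliest moment it no longer violates the sliding-window limit.

```javascript
// Given arrival times (ms, sorted), compute when each message would be
// sent under "limit per periodMs" with queueing instead of dropping.
function scheduleWithQueue(arrivals, limit, periodMs) {
    const sendTimes = [];
    for (const t of arrivals) {
        // earliest legal send time: either on arrival, or when the
        // message `limit` places back leaves the sliding window
        let when = t;
        if (sendTimes.length >= limit) {
            const blocker = sendTimes[sendTimes.length - limit];
            when = Math.max(when, blocker + periodMs);
        }
        sendTimes.push(when);
    }
    return sendTimes;
}

// Limit 2 per 1000 ms; a burst of 4 messages all arriving at t=0:
console.log(scheduleWithQueue([0, 0, 0, 0], 2, 1000));
// → [0, 0, 1000, 1000]  (first two pass at once, the rest are delayed)
```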

That queuing of messages is sort of what the existing node does in its "rate limit - for each topic - send all topics" mode... It keeps the latest msg on each different topic and then releases them all at the end of the time period. If multiple msgs arrive on the same topic, it only keeps the latest, so basically you get the latest set of values across all topics. Not exactly what you want, as you want it for any arbitrary msg, but this does burst them.

I did not know that feature; however, it is IMHO quite counterintuitive.

If I use setting A (all messages), it gives me the first message within 1 sec.
But if I choose to do it by topic (setting B), only the last message is forwarded.
Not very intuitive... good to know, but not straightforward.

Setting A

Setting B

Well, again it's down to how we interpret rate... If applied to all messages, then you may as well send one straight away and start the interval timer at that point. If applied across multiple topics, then it is more logical to wait; otherwise you always send one right away and then a batch next time round... It seems more sensible to me to wait, to give the others a chance to arrive (as by selecting by-topic mode you are implying there will be more than one topic to wait for).

Oh no its started again :rofl: :rofl: :rofl: