Anyone know of a backoff node or flow


A few days ago, a flow went bananas and sent loads of messages off to my phone using Pushover (an android notification app) which meant I exceeded my free allocation for the month :frowning:

To prevent this happening in future, I need something that detects that lots of messages are being sent in a short space of time.

I then need it to stop passing the messages on (and discard them) once the rate exceeds a threshold, until the rate of messages drops below another threshold.

But my node-red-fu is letting me down and I’m stuck on where to even start :frowning:


If you check the standard nodes in the palette, have a look at the Delay node - it can limit the number of messages over a specific amount of time.



Thanks, but what I'm after is something a bit more advanced than simple rate limiting.

I'm looking for something that measures the rate of messages and, if it's less than a threshold, passes them on.

Once the threshold is exceeded, I want it to stop sending messages at all until the rate drops below the threshold again.

Because if it's receiving a high rate of messages, that means something is going wrong with the flow/sensor feeding it, and I don't want to send any more notifications out until I've fixed the fault and the rate goes back below the threshold.


Hi Simon, not sure about the actual answer though node-red-contrib-throttle might help?

However, I would also recommend switching to Telegram rather than Pushover. Not only does it have much higher limits, its bot framework is far more feature-rich, and Telegram is known for security (hence being kicked out of Russia).


The first thing to do is to define exactly what you mean by the rate. Do you just want to measure the time between individual messages and, if that is too short, not pass anything on? Or do you want to count messages over a particular period and stop passing them on if there are too many? Or something else?


More like the 2nd, but stop passing further messages on once the rate threshold is exceeded.

E.g. normal real-world peaks might reach 20/minute, so set the threshold to 40/min; if that is exceeded, stop passing messages on until the rate drops below 40/min again.


OK, so we have to work out the algorithm needed. Do you need a rolling one-minute count of messages, or would it be OK to start a timer, count messages, and at the end of the minute check how many were sent? If the count is OK, start the next minute; if there were too many, stop sending and count messages over the next minute but don't send any of them. If there weren't too many in that minute, start sending again for the minute after. As you can see, you end up with a complex algorithm. Are you sure there isn't a simpler way of handling this? For example, in the fault condition will successive rapid messages be identical? If so, then an RBE node might be useful.
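To make that fixed-window algorithm concrete, here is a sketch of it as plain JavaScript (this is my own illustration, not from the thread - the class name, limit, and window length are all assumptions; in Node-RED this logic would sit in a function node with the state kept in `context`):

```javascript
// Fixed-window gate: count messages per window; the pass/block decision is
// made only at window boundaries, as described above. If a finished window
// exceeded the limit, everything in the next window is blocked; a quiet
// window re-enables sending for the window after it.
class FixedWindowGate {
  constructor(limit, windowMs = 60000) {
    this.limit = limit;             // max messages allowed per window
    this.windowMs = windowMs;       // window length in ms
    this.count = 0;                 // messages seen in the current window
    this.windowStart = 0;           // start time of the current window
    this.blocked = false;           // true if the previous window was too hot
  }

  // Call once per message; returns true if the message should be passed on.
  pass(now) {
    if (now - this.windowStart >= this.windowMs) {
      // Window ended: block the new window only if the old one was over the limit.
      this.blocked = this.count > this.limit;
      this.count = 0;
      this.windowStart = now;
    }
    this.count++;
    return !this.blocked;
  }
}
```

Note the trade-off the post points out: because the decision happens only at window boundaries, a burst within the first window still gets through, and recovery always takes a full quiet window.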


Unfortunately, the RBE node will just have another go at passing messages once it's discarded some.

I'm looking for something that waits until things have quietened down before attempting to pass messages again.

This "node/flow" I'm looking for is a backstop to stop me exceeding my monthly free notification allocation if something goes haywire.

The minutiae of how it works are not my major issue at the moment (but may become so later on, of course).



Tried it out - ta - but it's not what I'm after.


The RBE node will continually discard messages on a topic by topic basis provided the payload for each topic does not change.

The minutiae of how it works must be defined in order to write the code to implement the algorithm.


Sorry, I was getting mixed up with the rate-limiting mode of the Delay node.

AFAICS the RBE node isn't suitable for rate limiting (unless the messages are the same).


This any good?


So how sensitive does this need to be? Should just a second message arriving too soon cause it to block? How long for? (Just thinking this could be a potential extra mode for the Delay node if it can fit the existing model some way.)


Unfortunately not.
In normal operation, I might get a few real messages in quick succession, and that node would block them.

What I'm after is something that says - whoa - we've just got lots of messages coming through - let's stop sending any on until the rate drops back to a normal level.


No - not the time between messages (as 2 real ones could follow in quick succession). I think it needs to (as an example) compare the time of the 1st message with the time of the 40th message in the past. If that time < 1 minute, then we've exceeded a threshold of 40 msgs/minute.
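That sliding-window idea (compare the newest timestamp with the Nth-oldest) can be sketched like this - purely illustrative, with names and numbers of my own choosing:

```javascript
// Sliding-window gate: keep the timestamps of the last N messages.
// If the oldest of those N arrived less than windowMs ago, then N messages
// have arrived within the window, i.e. the rate threshold is exceeded, and
// we block. Recovery is automatic: once messages slow down, the oldest
// timestamp ages out of the window and passing resumes.
function makeRateGate(maxMsgs, windowMs) {
  const times = [];                // timestamps of recent messages, oldest first
  return function pass(now) {
    times.push(now);
    if (times.length > maxMsgs) times.shift();   // keep only the last N
    // Pass if we have fewer than N messages yet, or the Nth-oldest
    // is already outside the window.
    return times.length < maxMsgs || now - times[0] >= windowMs;
  };
}
```

For the 40 msgs/minute example above, this would be `makeRateGate(40, 60000)`. The cost is holding up to N timestamps in context, which is exactly the "long-term state" concern raised a few posts later.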

Depends on whether it would clutter it up too much

I’m going to prototype the functionality just so we can have something to play around with and adjust and discuss


yes - don't want to hold "long-term" state in the node, so storing the last 40 (for example) isn't ideal - so currently it sounds better off out than in.


I've come up with a prototype flow that just passes msgs if the rate is less than 5 messages over a 4-second period.

If the number of msgs increases (simulated by just pressing the inject button to add extra msgs), then the output ceases.

As @dceejay says, it's probably not practical to hold 40 msgs, so I need a better approach (got an idea for that already as I'm typing this).

[{"id":"24d330a9.4419d","type":"inject","z":"b8cb348c.6744d8","name":"","topic":"","payload":"","payloadType":"date","repeat":"","crontab":"","once":true,"onceDelay":0.1,"x":130,"y":740,"wires":[["9193fe7a.a32b"]]},{"id":"9193fe7a.a32b","type":"change","z":"b8cb348c.6744d8","name":"","rules":[{"t":"set","p":"pastMsgs","pt":"flow","to":"[\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis(),\t $millis()\t]","tot":"jsonata"},{"t":"set","p":"payload","pt":"msg","to":"pastMsgs","tot":"flow"}],"action":"","property":"","from":"","to":"","reg":false,"x":380,"y":740,"wires":[["3af660a6.c9012"]]},{"id":"3af660a6.c9012","type":"debug","z":"b8cb348c.6744d8","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"false","x":630,"y":740,"wires":[]},{"id":"55f55057.fd8a2","type":"inject","z":"b8cb348c.6744d8","name":"","topic":"repeat 
1/sec","payload":"","payloadType":"date","repeat":"1","crontab":"","once":true,"onceDelay":"1","x":150,"y":880,"wires":[["7534777.24a2688"]]},{"id":"6ff51717.c9e9d8","type":"debug","z":"b8cb348c.6744d8","name":"","active":false,"tosidebar":true,"console":false,"tostatus":false,"complete":"false","x":930,"y":940,"wires":[]},{"id":"84b3b1f3.ce318","type":"change","z":"b8cb348c.6744d8","name":"","rules":[{"t":"set","p":"pastMsgs","pt":"flow","to":"$append([$millis()],$flowContext(\"pastMsgs\"))","tot":"jsonata"},{"t":"set","p":"pastMsgs","pt":"flow","to":"$flowContext(\"pastMsgs\")[[0..4]]","tot":"jsonata"},{"t":"set","p":"payload","pt":"msg","to":"pastMsgs","tot":"flow"}],"action":"","property":"","from":"","to":"","reg":false,"x":640,"y":940,"wires":[["6ff51717.c9e9d8","dd144bb9.1ec8c8"]]},{"id":"dd144bb9.1ec8c8","type":"change","z":"b8cb348c.6744d8","name":"","rules":[{"t":"set","p":"timeDiff","pt":"flow","to":"$number($flowContext(\"pastMsgs\")[0] - $flowContext(\"pastMsgs\")[4])","tot":"jsonata"},{"t":"set","p":"payload","pt":"msg","to":"timeDiff","tot":"flow"}],"action":"","property":"","from":"","to":"","reg":false,"x":740,"y":1040,"wires":[["c9f22113.41627"]]},{"id":"86441db6.47522","type":"switch","z":"b8cb348c.6744d8","name":"","property":"timeDiff","propertyType":"flow","rules":[{"t":"gte","v":"4000","vt":"num"}],"checkall":"true","repair":false,"outputs":1,"x":610,"y":860,"wires":[["9aac6e39.e466c"]]},{"id":"9aac6e39.e466c","type":"debug","z":"b8cb348c.6744d8","name":"OUTPUT msgs","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"payload","x":940,"y":860,"wires":[]},{"id":"c9f22113.41627","type":"debug","z":"b8cb348c.6744d8","name":"Time over 5 msgs 
(ms)","active":true,"tosidebar":false,"console":false,"tostatus":true,"complete":"payload","x":960,"y":1040,"wires":[]},{"id":"7534777.24a2688","type":"counter","z":"b8cb348c.6744d8","name":"","init":"0","step":"1","lower":null,"upper":null,"mode":"increment","outputs":2,"x":340,"y":880,"wires":[["86441db6.47522","84b3b1f3.ce318"],[]]},{"id":"2148402c.a3de2","type":"inject","z":"b8cb348c.6744d8","name":"","topic":"","payload":"Extra Message","payloadType":"str","repeat":"","crontab":"","once":false,"onceDelay":0.1,"x":120,"y":940,"wires":[["7534777.24a2688"]]}]



This version only stores the time of the last msg and a running total of the interval times between msgs.

So there's less overhead, but it is slower to recover than the 1st attempt as it uses a moving average.

Could probably tweak it by having a separate recovery threshold or by weighting the moving average.

Since it's needed as a "back-stop" to stop flooding a notification app like Pushover/Twitter/Telegram, precision probably isn't important.
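The flow below keeps a decayed running total of the gaps between messages (`intervalTotal` and `lastMsgTime` in flow context) and passes messages while the total stays at or above a threshold. As a plain-JavaScript sketch of the same idea (the function name and defaults are mine; the 3/4 decay and 4000 ms threshold mirror the JSONata in the flow):

```javascript
// Moving-average gate: each message decays the running total of
// inter-message gaps by 3/4, then adds the gap since the previous message.
// A burst of fast messages shrinks the total below the threshold and
// blocking begins; a long quiet gap grows it back and passing resumes.
function makeIntervalGate({ blockBelow = 4000, decay = 3 / 4, initial = 5000, startTime = 0 } = {}) {
  let intervalTotal = initial;     // decayed running total of gaps (ms)
  let lastMsgTime = startTime;     // timestamp of the previous message
  return function pass(now) {
    intervalTotal = Math.floor(intervalTotal * decay) + (now - lastMsgTime);
    lastMsgTime = now;
    return intervalTotal >= blockBelow;
  };
}
```

At a steady 1 msg/sec the total converges to 4000 (since 3000 + 1000 = 4000), which is exactly the flow's pass threshold - hence the slower recovery noted above: after a burst, the total has to be rebuilt by genuinely long gaps.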

[{"id":"192a8efa.ee7ea1","type":"inject","z":"f09d4a19.a446e8","name":"","topic":"","payload":"","payloadType":"date","repeat":"","crontab":"","once":true,"onceDelay":0.1,"x":170,"y":80,"wires":[["3ed9adb0.bf6d12"]]},{"id":"3ed9adb0.bf6d12","type":"change","z":"f09d4a19.a446e8","name":"","rules":[{"t":"set","p":"intervalTotal","pt":"flow","to":"5000","tot":"num"},{"t":"set","p":"lastMsgTime","pt":"flow","to":"$millis()","tot":"jsonata"},{"t":"set","p":"payload","pt":"msg","to":"intervalTotal","tot":"flow"}],"action":"","property":"","from":"","to":"","reg":false,"x":420,"y":80,"wires":[["306efbec.542ea4"]]},{"id":"306efbec.542ea4","type":"debug","z":"f09d4a19.a446e8","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"false","x":670,"y":80,"wires":[]},{"id":"90e9b3b3.a3459","type":"inject","z":"f09d4a19.a446e8","name":"","topic":"repeat 1/sec","payload":"","payloadType":"date","repeat":"1","crontab":"","once":true,"onceDelay":"1","x":170,"y":220,"wires":[["8d3658d8.ac1728"]]},{"id":"66133c25.ebd8a4","type":"debug","z":"f09d4a19.a446e8","name":"","active":false,"tosidebar":true,"console":false,"tostatus":false,"complete":"false","x":970,"y":280,"wires":[]},{"id":"fe0c169.9d010e8","type":"change","z":"f09d4a19.a446e8","name":"","rules":[{"t":"set","p":"intervalTotal","pt":"flow","to":"$floor($flowContext(\"intervalTotal\") * 3 / 4)","tot":"jsonata"},{"t":"set","p":"intervalTotal","pt":"flow","to":"$millis() - $flowContext(\"lastMsgTime\") + $flowContext(\"intervalTotal\") 
","tot":"jsonata"},{"t":"set","p":"lastMsgTime","pt":"flow","to":"$millis()","tot":"jsonata"},{"t":"set","p":"payload","pt":"msg","to":"intervalTotal","tot":"flow"}],"action":"","property":"","from":"","to":"","reg":false,"x":680,"y":280,"wires":[["66133c25.ebd8a4","874c063.1ce5cf8"]]},{"id":"c3b9bfb3.6fcb8","type":"switch","z":"f09d4a19.a446e8","name":"","property":"intervalTotal","propertyType":"flow","rules":[{"t":"gte","v":"4000","vt":"num"}],"checkall":"true","repair":false,"outputs":1,"x":650,"y":200,"wires":[["ffb43b8d.cf71e8"]]},{"id":"ffb43b8d.cf71e8","type":"debug","z":"f09d4a19.a446e8","name":"OUTPUT msgs","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"payload","x":980,"y":200,"wires":[]},{"id":"874c063.1ce5cf8","type":"debug","z":"f09d4a19.a446e8","name":"Time over 5 msgs (ms)","active":true,"tosidebar":false,"console":false,"tostatus":true,"complete":"payload","x":1000,"y":380,"wires":[]},{"id":"8d3658d8.ac1728","type":"counter","z":"f09d4a19.a446e8","name":"","init":"0","step":"1","lower":null,"upper":null,"mode":"increment","outputs":2,"x":400,"y":220,"wires":[["c3b9bfb3.6fcb8","fe0c169.9d010e8"],[]]},{"id":"b6711f67.d6aae","type":"inject","z":"f09d4a19.a446e8","name":"","topic":"","payload":"Extra Message","payloadType":"str","repeat":"","crontab":"","once":false,"onceDelay":0.1,"x":140,"y":300,"wires":[["8d3658d8.ac1728"]]}]


Is there really no way to tell from the content of the messages that there is a problem?


The idea of this node/flow is that it deals with unforeseen problems

It's a goalkeeper node/flow :slight_smile:

In my case, it is a flow that sends me the current CheerLights colour via Pushover, and it used to tweet it out as well (until Twitter revoked the account due to it sending too many tweets in a short space of time!).

PS My flow works fine for weeks on end but has gone off-piste twice - I've not been able to determine why, hence the need for an emergency automatic stop.