I have an MQTT host that charges based on a rate of data points per minute. I can stay below this rate for outbound traffic by pacing the inject nodes that trigger the flow. However, if I have many mqtt-in nodes in a given flow, they all send subscribe messages when a flow starts up (or changes are deployed), and this burst of traffic exceeds my rate limit. Is there a way to rate-limit the initialization of MQTT nodes? If not, is there a way to stage flows to start in sequence, or to delay the start of a flow at startup by a set amount of time? Thanks.
Do you see the same issue if you use dynamic subscription?
The node would still connect to the broker at deploy, but it would not subscribe until you send it a message with msg.action set to "subscribe" and msg.topic set to the topic.
Since you have many mqtt-in nodes on a single flow, presumably they are related.
Can you combine some nodes by choice of topic, using mqtt wildcards and a switch node?
Thanks, I didn't know about either of those features. I'll do some experimenting and report back.
Subscribing to the topic USERNAME/f/# should do what you expect/need, with the feed key wildcarded (collecting the 47 feeds into one subscribe).
Wildcards certainly do help, as it can drastically reduce the number of mqtt-in nodes. This obviously comes at the cost of flexibility, as you have to break out all the topics near the wildcard mqtt-in node, or use multiple mqtt-in wildcard nodes and essentially over-subscribe for each one.
The use of dynamic mode plus a delay before initializing the mqtt-in node with a subscribe action did initially work to reduce the flood of subscribes at deploy time, but we later got bitten by a related failure mode: the connection to the broker dropped while running, and the client automatically reconnected. On reconnect, all previously subscribed topics are immediately re-subscribed, and we're back to the subscribe-flood condition.
For now, our hosting provider is addressing this issue by rolling out a relaxed policy, but the underlying failure mode will still be there (it's just a matter of scale). Ideally, I think the best solution would be a configuration option on the mqtt-in node to queue subscription/re-subscription messages, possibly with an optional delay interval or rate-limit term.
I should think most people would want MQTT connected and subscribed at the earliest possible moment after startup.
Do you have a lot of Retained topics? Make sure that you only retain those that need to be retained. That may reduce the flood of messages.
Your proposal is not as simple as you might think. In fact, this is the first time I've heard of such restrictive conditions. You may wish to consider an alternative broker.
Alternatively, just subscribe to the base topic wildcard and use a filter instead. Here is an example filter I knocked up...
https://flows.nodered.org/flow/e38554543a8cdec6c44eacfc68a0c149
What exactly is a data point in this setup? In other words exactly what is it that they charge for?
The billable units are publish requests / minute. However, there are several other fixed limits in place that protect against abuse / DoS including:
- subscribe requests / minute
- failed subscribe requests / minute
- failed publish requests / minute
Anyone interested in this specific issue can follow along here.
The user documentation for these limits can be found here.
[quote="mharsch, post:10, topic:94660"]
subscribe requests / minute
[/quote]
Assuming that is the limit causing the problem, then @Steve-Mcl's suggestion should work, as it will result in only one subscribe.
The snapshot of your mqtt-in nodes in that Adafruit forum shows 27 nodes, all subscribing to kd0ycl/f/something.
You can certainly replace all of these with a single subscription to kd0ycl/f/+ or kd0ycl/f/#, reducing your subscribe message volume by 95%.
This is surely the way to go; there's no need for the more complex dynamic subscription (which, in any case, you have found a problem with).
If you want to avoid the massive switch node on your flow tab, you could put the mqtt-in and switch nodes on their own tab, using link nodes to pass messages to the tab(s) where you process them.