When enabled, Node-RED would throttle its runtime based on memory allocation instead of letting the whole process exit.
```js
throttling: {
    max_memory: process.env.NR_THROTTLING_MAX_MEMORY,
    db_url: process.env.NR_THROTTLING_DB_URL
}
```
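For illustration, the environment variables above could be read with sensible fallbacks when the settings file is loaded. A minimal sketch, assuming a default heap budget of 256 MiB and a local SQLite queue; both defaults and the helper name are assumptions, not part of the proposal:

```javascript
// Hypothetical helper for settings.js: read throttling options from the
// environment, falling back to illustrative defaults.
function readThrottlingSettings(env) {
  return {
    // Maximum heap (in bytes) before throttling kicks in; 256 MiB is an assumed default.
    max_memory: parseInt(env.NR_THROTTLING_MAX_MEMORY, 10) || 256 * 1024 * 1024,
    // Where the global queue persists messages; a local SQLite file is an assumed default.
    db_url: env.NR_THROTTLING_DB_URL || 'sqlite://throttling-queue.db'
  };
}
```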
A global queue would control runtime and message processing based on the amount of memory reported by the memory manager. Before queuing a node invocation, the runtime would consult the manager to check whether the allocated heap has enough free memory for the node to run. The global queue would persist its messages in a configurable database.
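The queue/manager interaction described above could be sketched roughly as follows. This is an assumed design, not the actual Node-RED runtime API; the in-memory array stands in for the configured database, and `usedMemoryFn` would typically be something like `() => process.memoryUsage().heapUsed`:

```javascript
// Minimal sketch of a memory-gated global queue. A message is only dequeued
// when the memory manager reports enough free heap for the node call.
class ThrottlingQueue {
  constructor(maxMemory, usedMemoryFn) {
    this.maxMemory = maxMemory;     // heap budget in bytes
    this.usedMemory = usedMemoryFn; // reports current heap usage in bytes
    this.queue = [];                // stand-in for the configured db_url store
  }

  // Exposed as the "throttling_queue_length" metric.
  get length() {
    return this.queue.length;
  }

  enqueue(msg) {
    this.queue.push(msg);
  }

  // Hand out the next message only if the heap has headroom for the
  // estimated cost of the node call; otherwise throttle (return null).
  next(estimatedCost) {
    if (this.queue.length === 0) return null;
    if (this.usedMemory() + estimatedCost > this.maxMemory) return null;
    return this.queue.shift();
  }
}
```

With this shape, throttling is simply the queue refusing to release work while the heap is near its budget, rather than the process being killed.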
When HA is available, as in FlowFuse, a new metric called "throttling_queue_length" could be used by the load balancer to decide which container to route requests to. Instead of round robin, it would prefer containers with an undefined throttling_queue_length, or else the one with the smallest value of throttling_queue_length.
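The routing preference could look like the following sketch. The container shape and function name are assumptions made for illustration; a real load balancer would read the metric from its health-check data:

```javascript
// Pick a routing target: prefer containers that report no
// throttling_queue_length at all, otherwise the smallest reported value.
function pickTarget(containers) {
  const unreported = containers.filter(
    c => c.throttling_queue_length === undefined
  );
  if (unreported.length > 0) return unreported[0];
  return containers.reduce((best, c) =>
    c.throttling_queue_length < best.throttling_queue_length ? c : best
  );
}
```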
Current Workarounds:
In order to avoid infinite container restarts:
- increase container memory beyond the empirically determined amount of memory the flow uses
- use a plugin that turns off the flow when memory usage reaches a certain threshold
- design flows with the memory cap in mind. This takes extra effort from developers. There are several strategies, such as treating flows as transactions and never allowing "parallel" processing within the same flow. Even that isn't enough, though: I once had to add timers to give the garbage collector time to release memory before starting the next transaction.
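That last workaround can be sketched as a Function-node style helper: process one transaction at a time and pause between them so the garbage collector has a chance to release memory. The helper name and the 500 ms default are illustrative, not a recommendation:

```javascript
// Run transactions strictly one after another, pausing between them to let
// the garbage collector catch up. Never runs two transactions in parallel.
async function processSequentially(transactions, handleFn, pauseMs = 500) {
  const results = [];
  for (const tx of transactions) {
    results.push(await handleFn(tx)); // one at a time, never parallel
    await new Promise(resolve => setTimeout(resolve, pauseMs)); // GC breathing room
  }
  return results;
}
```

This keeps peak memory close to that of a single transaction, at the cost of throughput, which is exactly the trade-off the proposed runtime throttling would make automatic.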