On a test Node-RED instance we are seeing very slow update times when updating a single flow (we tried both the /flows and /flow/:id APIs). Deploying a single flow takes up to 20-30 seconds.
We currently have roughly 700 flows deployed (just a few nodes each). Is this expected? Can this somehow be improved?
Yes, we hide all flows but one (which one is provided at startup of the Node-RED UI) and deploy only that single flow (to achieve this we slightly customized Node-RED).
The "Modified Flows" deploy reinitialises all connected nodes (those connected via a wire).
The deploy operation ALWAYS sends the FULL flow JSON; it's the behaviour under the hood that changes:
"Full" deploy destroys and recreates everything (connections are terminated, instances are destroyed, then all recreated)
"Modified Flows" destroys and recreates everything connected to a modified node (by a regular wire, not a dotted link wire)
"Modified Nodes" destroys and recreates only the nodes that were changed (and that must be a property change; an x/y position change doesn't count)
Yes, set the Node-RED-Deployment-Type header as per the docs
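To illustrate, here is a minimal sketch of such a deploy using Python's standard library, assuming the default admin API at http://localhost:1880 with no authentication (the flows payload below is a placeholder):

```python
import json
import urllib.request

def build_deploy_request(flows, deployment_type="nodes",
                         base_url="http://localhost:1880"):
    """Build a POST /flows request that only restarts modified nodes.

    `deployment_type` can be "full", "nodes", "flows" or "reload";
    the admin API reads it from the Node-RED-Deployment-Type header.
    """
    return urllib.request.Request(
        f"{base_url}/flows",
        data=json.dumps(flows).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Node-RED-Deployment-Type": deployment_type,
        },
        method="POST",
    )

# Sending it (commented out; needs a running instance):
# with urllib.request.urlopen(build_deploy_request(all_flows)) as resp:
#     print(resp.status)
```

Note the full flows JSON still travels over the wire either way; the header only controls how much of the runtime gets torn down and restarted.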
Why do you have 700 flows with only a small amount of nodes on each tab? That is a LOT of overhead for little reason as far as I can see. Perhaps it is time to scale horizontally?
I understand, I just wanted to explain why we don't care about the 700 (which might seem a lot).
The "Modified Flows" deploy reinitialises all connected nodes (those connected via a wire).
As the flow is always a single flow this part would be fine I guess... however, we also noticed that the full JSON is always posted.
Why do you have 700 flows with only a small amount of nodes on each tab?
Because a user is only interested in one or a few flows out of the 700... we only want to show one flow at a time...
That is a LOT of overhead for little reason as far as I can see. Perhaps it is time to scale horizontally?
How would you solve it with less overhead? I don't get this... Regarding scaling horizontally... you mean to say we should use more than one Node-RED instance? Actually I believe this is coming up soon, yes. However, as you might imagine this is then rather complex (automatic scale-out and scale-in would be expected...).
You have not mentioned what hardware you are throwing at this - which could make a difference
I also recall someone else recently had a massive number of flows and was having problems with deployments - it ended up being some orphaned config nodes that were pointing to incorrect/obsolete external servers, if I remember correctly
Have you checked your config nodes to make sure they are all valid?
I had the same "sluggish" behaviour of Node-RED during development tasks. I had a lot of flows (fewer than 200) and a lot of global and local variables.
I'm running Node-RED in Docker and use a laptop with Chrome for development. Node-RED in Docker is barely using any resources (~10-20% on average, 50% peak).
I just upgraded my laptop from 8 GB to 16 GB of memory and now everything is smooth again.
What about I/O, CPU and RAM during the deploy - i.e. where is the bottleneck as far as AWS is concerned?
Managing 700 flows is a bit of a nightmare - I would definitely look at scaling horizontally with more instances and dividing the flows logically amongst them.
With 700 flows there could be any manner of crap hanging around in one of the flows, and you would have no idea without stepping through them one at a time: remove a flow, try a deploy, rinse and repeat.
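The rinse-and-repeat can be sped up with a bisection over the list of tabs, assuming a single flow dominates the cost. `is_slow` here is a hypothetical caller-supplied check, e.g. deploy the subset of flows and time the response:

```python
def find_culprit_tab(tabs, is_slow):
    """Binary-search for the first tab whose inclusion makes a deploy slow.

    `is_slow(subset)` is caller-supplied (e.g. POST the subset of flows
    and time it). Assumes exactly one tab dominates the cost.
    """
    if not is_slow(tabs):
        return None  # the full set deploys fine; nothing to find
    lo, hi = 0, len(tabs)  # invariant: tabs[:lo] fast, tabs[:hi] slow
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_slow(tabs[:mid]):
            hi = mid
        else:
            lo = mid
    return tabs[hi - 1]

# Demo with a fake check: pretend tab "flow-04" is the slow one.
tabs = [f"flow-{i:02d}" for i in range(10)]
print(find_culprit_tab(tabs, lambda subset: "flow-04" in subset))
# → flow-04
```

With 700 tabs that is about 10 deploys instead of 700, though it only isolates one bad tab per run.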
I assume the slowdown has only happened recently - or has it been a gradual thing?