I understand, generally, that function nodes are less efficient than doing the same work via built-in nodes.
I am thinking about a situation with two flows, 1a and 1b.
1a takes input, processes it through a complex function node, and then sends the output downstream.
1b takes input, uses a function node to transform it into the 1a format, and then sends that output to the 1a complex function node and on downstream.
Is it likely to be more efficient to change the complex function node to first identify the 1a vs 1b case and do the transformation there, before running the common code, or is the cost of two function nodes so tiny that it is not worth refactoring this way? I think the test would be a simple if statement.
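To make the refactoring concrete, here is a minimal sketch of what the combined function node might look like. The field names (`msg.source`, `msg.raw`, `msg.data`) and the transform itself are hypothetical placeholders, since the original flows aren't shown:

```javascript
// Sketch: one function node that accepts both the 1a and 1b formats.
// msg.source, msg.raw and msg.data are invented for illustration.
function normalize(msg) {
    // 1b messages arrive in a raw form and are transformed first
    if (msg.source === "1b") {
        msg.data = { value: msg.raw * 10 };  // hypothetical 1b->1a transform
        delete msg.raw;
    }
    // common "complex" processing shared by both cases
    msg.data.processed = true;
    return msg;
}

console.log(normalize({ source: "1a", data: { value: 5 } }));
console.log(normalize({ source: "1b", raw: 7 }));
```

In an actual function node the body of `normalize` would be the node's code, ending with `return msg;`.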
As with most things... it depends; mainly on your expected message throughput, the complexity of your function and your hardware limitations.
Yes, the function node executes its code in a VM which can have an impact on performance, but for most "normal" use cases, that should be negligible.
All in all, those are things you should simply try out, measuring the limits for your use case, rather than falling victim to premature optimization.
If function nodes become a bottleneck, there are solutions of course.
Also keep in mind that a CPU-intensive workload can block Node.js' event loop, which affects everything running in your Node-RED instance.
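A tiny plain-Node.js demo (not a function node) of what that blocking looks like: the timer below is due after 10 ms, but it cannot fire until the synchronous loop finishes, just as every other message in a Node-RED instance has to wait for a CPU-bound function node:

```javascript
// The timer is scheduled for 10 ms, but the event loop is blocked
// by the synchronous loop, so it fires only after the loop ends.
const start = Date.now();
let timerDelayMs = null;
setTimeout(() => {
    timerDelayMs = Date.now() - start;
    console.log(`timer fired after ${timerDelayMs} ms`);
}, 10);

// Stand-in for CPU-intensive work inside a function node
let sum = 0;
for (let i = 0; i < 1e8; i++) sum += i;
console.log("busy loop done");
```

Run it and compare the reported delay against the requested 10 ms.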
I think the topic has also been covered here to some extent a while back: Function Node, which code is more efficient?
Unless you are running this thousands of times a second then really don't worry about it. Use your time to make sure the flows are well structured and easy to understand. That will make it more likely to work as intended and easier to maintain. Never complicate things because of a small possible performance increase, which in practice will almost certainly turn out to be completely irrelevant.
Given my use cases, I think it really boils down to no need to worry about optimization as much as I should focus on clarity and maintainability.
The challenge is that depending on what I am focused on sometimes more nodes doing simpler steps is better and sometimes encapsulating many steps into a single node is better. I often think the right answer is to do the many nodes with simple steps and then encapsulate them into a subflow. This approach, however, can be a little tricky when debugging.
So in the end it is a case of making the best guess on how I will need to interact with it in the future.
Thank you both for the thoughtful replies.
Regarding the original topic title "Function node efficiency"
There is an overhead but it is surprisingly small. It all depends on what you are doing.
Look at the screenshot in this:
- function node sort: 4 ms
- JSONata sort in a change node: 1222 ms
You can use/re-use the flow-timer in your code if you are interested in measuring sections of flows.
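For quick ad-hoc measurements without the subflow, a pair of function nodes can stamp a message on the way in and report elapsed time on the way out. This is only a sketch; the property name `msg._t0` is an arbitrary choice, and `process.hrtime.bigint()` is a standard Node.js call giving nanosecond resolution:

```javascript
// Place stampStart's body in a function node before the section
// under test, and reportElapsed's body in one after it.
function stampStart(msg) {
    msg._t0 = process.hrtime.bigint();   // nanosecond timestamp
    return msg;
}

function reportElapsed(msg) {
    msg.elapsedMs = Number(process.hrtime.bigint() - msg._t0) / 1e6;
    return msg;                          // inspect elapsedMs in a debug node
}

// Simulated flow: stamp, do some work, report
const msg = stampStart({ payload: [3, 1, 2] });
msg.payload.sort((a, b) => a - b);
console.log(reportElapsed(msg).elapsedMs, "ms");
```

Sorting here stands in for whatever section of the flow you want to time.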
Thanks for this pointer to your timer subflow. It makes it very easy for me to play around and see if anything I am contemplating actually makes a flow consistently more efficient.
This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.