Not sure where you got that from. There were some old threads looking at the efficiency of the Node.js VM that NR uses to make Function nodes safer, but really, you will hardly ever see enough of a performance hit to be worried about.
Other than that, it is a trade-off between writing code in a function vs a higher number of nodes. In your case, a single Function node would need two other nodes to replace it: a Switch node followed by a Change node.
It seems a very odd requirement; I am intrigued by the purpose behind the function.
In fact it can be done with a single Change node, though personally I would probably use a Function node. I think it is easier to understand as a function, and that is more important than a possible small performance improvement.
I admit I haven't done any performance tests on any aspect of Node-RED, but coming at this question from a coder's point of view, I'd be very surprised if the Function node wasn't faster than using the Change node. Unless the node is generating and executing some sort of code equivalent, the checks, conversions and processing will be a lot more complex.
That's one of the reasons I've gone down the route of using Function nodes for anything remotely complicated - that, and the fact that I've been coding for years, so it seems more natural to me.
The reason was not to gain better performance but to have an easier to understand and more readable flow.
Colin: I am using Alexa to control my TV volume and to power it on and off.
To achieve that I am using the Philips Hue emulator, so I get a value of 1-100 from Alexa and need to convert it to 0.01 to 1 (which is what the TV accepts). In some scenarios, though, when you turn the TV on using Alexa it gets 100, which I never need or want (it's extremely loud), so in that case I give it 10% volume instead.
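In case it helps anyone later, here is a minimal sketch of what that Function node might contain. It assumes the Hue emulator puts the Alexa level (1-100) in msg.payload; the exact property may differ in your setup.

```javascript
// Hypothetical Function node body - assumes the Alexa level (1-100)
// arrives in msg.payload from the Hue emulator.
let level = Number(msg.payload);

// The power-on case sends 100, which is far too loud, so cap it at 10%.
if (level === 100) {
    level = 10;
}

// Scale 1-100 down to the 0.01-1 range the TV expects.
msg.payload = level / 100;
return msg;
```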
That's it, thank you all for your help (I learned that I can use the Change node with a little code).
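For reference, the "little code" in the Change node would be a JSONata expression along these lines (a sketch only, again assuming the Alexa level arrives in msg.payload): set msg.payload to the expression

```
payload = 100 ? 0.1 : payload / 100
```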
I believe the issue is that the Function node runs in a VM which has to be set up each time the node runs, whereas other nodes do not. However, I also understand that JSONata has significant overheads too, so there may be little to choose between them. The main point is that unless you are handling thousands of messages a second (or maybe hundreds in the case of something like a Pi Zero), it is most unlikely to make any measurable difference how you handle tasks like this one, so do it whichever way you are most comfortable with.
[Edit] In fact the above is not entirely correct. Function nodes do run in a VM, which does impose overheads, but the VM is not instantiated at each invocation. Further, the JSONata inefficiency has been addressed, so that is no longer a problem.
Ah, that's interesting to know. I presume there are reasons not to have a permanently available VM, and I'm not going to second-guess the design choices behind the system without a lot more time to study the code.
As you say, for the average low throughput usage, it's probably not going to make a big difference, so I'll just use a mix and match of Function and other nodes as needed.
The VM is only instantiated once and re-used on each call. This does minimise the overhead - but there is still some caused by entering and leaving it.
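For anyone curious about the pattern being described, the Node.js vm module supports exactly this compile-once, run-per-message approach. The snippet below is only an illustration of that pattern, not Node-RED's actual implementation:

```javascript
const vm = require('vm');

// Compile the user's code once, e.g. at deploy time.
const script = new vm.Script('msg.payload = msg.payload / 100; msg;');
const context = vm.createContext({});

// Re-use the compiled script and context for every message;
// only the cost of entering and leaving the VM is paid per call.
function runOnMessage(msg) {
    context.msg = msg;
    return script.runInContext(context);
}

console.log(runOnMessage({ payload: 50 })); // { payload: 0.5 }
```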
Likewise, the JSONata bug that caused the slowdown has also been addressed and its performance greatly improved.
Thanks Colin - good to know a bit more about what's going on under the hood.
If I wasn't so busy with other projects (oh, and work) I'd get involved in some Node-RED hacking, although I do have ideas for a couple of handy nodes for when I get some spare time.