It has been observed that the HTTP GET endpoint created using Node-RED is resulting in reduced transactions per second (TPS) compared to our direct HTTP GET endpoint. I suggest we investigate the potential causes for this performance drop, as optimizing the Node-RED implementation could enhance our overall throughput and system efficiency.
This forces us to fall back to plain Node.js because of the TPS reduction. For example, a direct HTTP GET endpoint created with an Express server gives 50 TPS, while a Node-RED HTTP GET endpoint gives only 20 TPS with the same response size.
We would appreciate prompt action on this situation.
Thanks!
I am facing the same issue.
Unless you are paying for support from FlowFuse, action will depend on someone having sufficient willingness to give up time from other work.
So it would be sensible for you to begin using some tooling to investigate where the slowdown is happening.
@jaythakor-ai your account and @Hemangi account were created very close to each other and have the exact same IP - are you the same user?
As Node-RED is open source, feel free to begin your investigation and if you find a solution, you can raise it on the forum or raise a Pull Request against the project.
Just to add to what the others have mentioned.
This behaviour is probably to be expected. You have given no details about the flow you are testing, but I fully expect Node-RED to be doing a lot more work than the simple express application.
e.g. given even a flow like the following: http-in -> function node -> http-out
- Node-RED receives the request
- builds the msg object (this involves some deep cloning of the req and res objects) and passes it to the Function node
- creates a Node.js vm sandbox to run the Function node
- executes the function
- passes the message to the http-out node
- sends the response
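The per-message cloning step above can be illustrated with a rough micro-benchmark (a sketch: Node-RED actually uses its own `RED.util.cloneMessage`, approximated here with `structuredClone`, and the 1 MB payload is assumed from the report):

```javascript
// Rough illustration of deep-clone cost per message vs. passing a reference.
const msg = { payload: 'x'.repeat(1024 * 1024), topic: 'test' };

const t0 = process.hrtime.bigint();
for (let i = 0; i < 100; i++) structuredClone(msg); // deep copy each time
const cloneNs = (process.hrtime.bigint() - t0) / 100n;

const t1 = process.hrtime.bigint();
for (let i = 0; i < 100; i++) { const ref = msg; } // reference only, no copy
const refNs = (process.hrtime.bigint() - t1) / 100n;

console.log(`clone: ~${cloneNs} ns/msg, reference: ~${refNs} ns/msg`);
```

The absolute numbers depend on the machine and payload shape, but they give a feel for why the Function-node path costs more than a hand-written handler.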
The work around the function node is not a trivial step and would probably account for most of the difference.
This is of course also not counting any other nodes/flows that may be running, or things like Debug nodes sending data to any open editor sessions over the websocket connection.
Node-RED's main focus is to provide a visual flow-based programming environment that allows non-programmers to build solutions; it values simplicity and ease of use over raw performance.
Now, that's not to say that performance improvements cannot or should not be made; if you want to investigate this and suggest improvements, they would be greatly appreciated.
If you would like others to help, then you have to make it as easy as possible, since as others have pointed out, this is a volunteer community mainly working in their spare time. Please provide full details of the flows you tested, the express app you are comparing to and details of how you are driving the load in your tests.
@Steve-Mcl, we are not the same user. We are both working on the same project and trying to find a solution.
@marcus-j-davies, we are not the same person. We are both working on the same project and trying to find a solution.
@hardillb, I agree with you that Node-RED is doing a lot more work than the Express app, and that this could account for the increased response time in milliseconds. I will investigate further into the reason for the TPS reduction in this case.
So, I am doing the load testing with 90 users for 1 minute. I am calling the GET endpoint, which returns a 1 MB string in the response. For Node-RED it is around 20-25 TPS, and for Express it is around 50 TPS for the same endpoint.
@TotallyInformation thanks for the quick response. I will investigate further into where the slowdown is happening and try to find a solution. Thanks!
Can you share this flow so we can see if the bottleneck is in how you wrote the flow?
To share the flow, select the nodes, press Ctrl+E, copy the flow JSON to the clipboard, and paste it into a code block in your reply.
Sure. Below is my flow.json
```json
[{"id":"a637f9d2.7eb688","type":"tab","label":"test","disabled":false,"info":""},{"id":"24928fa7.e22fd","type":"http in","z":"a637f9d2.7eb688","name":"","url":"/test_endpoint","method":"get","upload":false,"swaggerDoc":"","x":300,"y":540,"wires":[["d30b270f.9472c8"]]},{"id":"d30b270f.9472c8","type":"function","z":"a637f9d2.7eb688","name":"","func":"var require = context.global.get('require');\nmsg.payload = \"1MB Json string...\";\n\nreturn msg;","outputs":1,"timeout":10000,"newThread":false,"noerr":0,"x":590,"y":520,"wires":[["c0eb3c6f.24f33"]]},{"id":"c0eb3c6f.24f33","type":"http response","z":"a637f9d2.7eb688","name":"","statusCode":"","headers":{},"x":833,"y":515,"wires":[]}]
```
Admin edit: wrapped flow in code block
Can you explain why you access something called `require` in the function but never use it? I assume you use the `require` to load your 1 MB JSON string in your real test?
Where does the 1 MB string actually come from? File? Database?
Will your final solution always be returning something around 1 MB? Or is this purely academic, for test purposes? I ask because it is not particularly realistic.
Also, are you not prematurely optimising? I mean will 90 users be continuously and simultaneously requesting the same GET endpoint in your final application?
Lastly, assuming you managed to import `require`, this is a synchronous function and will likely impact multiple concurrent accesses. As you probably realised, `require` is not standard within the Node-RED Function node. This is deliberate and by design. Modules are required using the setup tab of the Function node. Files and data are typically accessed through context or included in the message that enters the function.
In addition, I would recommend storing the JSON string in a context variable that lives in memory.
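That suggestion would look roughly like this inside a Function node (a sketch: `context` and `msg` are provided by Node-RED at runtime, and are stubbed here only so the snippet is self-contained):

```javascript
// Stub of the Function-node context API — Node-RED supplies the real one.
const store = new Map();
const context = {
  get: (k) => store.get(k),
  set: (k, v) => store.set(k, v),
};
const msg = {};

// --- Function node body: build the 1 MB string once, then reuse it ---
let big = context.get('bigString');
if (big === undefined) {
  big = 'x'.repeat(1024 * 1024); // only built for the first message
  context.set('bigString', big);
}
msg.payload = big;
// return msg;  (in the real Function node)
```

Every message after the first then pays only a context lookup rather than rebuilding the payload.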
Correct. We could do that for a static or hardcoded string, or a response that does not change every time. In my case, the response will always be different based on the payload.
This will be a bottleneck.
Function nodes run every time from scratch, so I suspect the `require` imports will be run every time a msg passes into the Function node.
That is what the setup tab is for.
Alternatively, do your `require`s in settings.js once and then access them via context.
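The settings.js approach mentioned above uses `functionGlobalContext`, which Node-RED documents as the way to expose modules to Function nodes via global context (the `osModule` name here is just an illustration):

```javascript
// In settings.js — evaluated once at Node-RED startup:
module.exports = {
  // ...other settings...
  functionGlobalContext: {
    osModule: require('os'), // any module you want Function nodes to reach
  },
};

// Then, inside a Function node:
//   const os = global.get('osModule');
```

This keeps all module loading out of the per-message path entirely.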
Ok, I will try that. But is this `require` causing the slowdown in TPS?
Assuming you are calling `require` on the same module each time, only the first call will trigger the module-loading overhead; modules are cached by Node.js, so a second call to `require` will just return whatever was cached previously. So whilst it may be unnecessary, it is not going to be a significant overhead.
I am assuming the work you are doing in the Function node is equivalent to what you were doing in the Express app directly?
The simple fact is Node-RED, by its nature, just has to do a lot more work to handle each request - as @hardillb described in his reply. Some of that is the trade-off of using a low-code tool versus writing code from scratch where you can optimise every line for your specific scenario.