I have one http-in and multiple http-response nodes.
Due to infrastructure limitations, requests can't be executed longer than 30 seconds.
Also, one flow can contain multiple such workflows.
How can I kill the workflow if it is executed longer than 30 seconds?
I've tried creating a separate "branch" with a timer node and an http-response node. The "timeout" response is returned, but the functionality in the other "branches" still continues. I need the whole workflow to be killed.
Is there a way to kill the full workflow when overall execution exceeds 30 seconds?
Or maybe there are some existing packages that provide such functionality?
Inspect the response from the http node (use a Debug node set to show the complete message). You will see that on a successful response, msg.statusCode will (likely) be 200. Use a Switch node to inspect this and prevent the message from being passed to the next node if it is NOT 200.
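For illustration, here is the same check written as a Function-node body instead of a Switch node (the function name is made up for this sketch; in a real flow a Switch node on msg.statusCode does the identical job):

```javascript
// Hypothetical Function-node sketch: only pass the message on when the
// preceding HTTP Request node reported success (statusCode 200).
function checkStatus(msg) {
    if (msg.statusCode === 200) {
        return msg;   // forward to the next node in the flow
    }
    return null;      // returning null drops the message (or wire a second
                      // output here to route failures to an error branch)
}
```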
Hi,
The workflow is processing some data and can communicate with external APIs.
Normally it should not take longer than 30s, but some of the logic can have a bug, and because of this the overall workflow can get stuck and consume the server's resources for nothing.
Just to clarify - each workflow is an HTTP request with "http-in" and multiple "http-response" nodes depending on the processing logic.
That would be easier, but sometimes processing can take longer than expected for various reasons.
The idea is to have a mechanism to actually kill the ongoing request processing, so server resources are not wasted when the timeout response has already been returned but the flow keeps running.
Yes, I get that and you should certainly still do that. What I meant was that your users would get a much better experience by having the core web page delivered to them immediately with either the result data or a timeout notification delivered at the appropriate time. This also makes your web logic simpler by decoupling the web page delivery from the data management flow.
Set msg.requestTimeout to 30000 (or less) for ALL HTTP Request nodes
Check the response code of the HTTP Requests
Add all of the processing logic into a Group, then add a Catch node that returns a 500 (or other appropriate status code)
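The first step above can be sketched as a small Function node placed in front of each HTTP Request node (the function name is illustrative; a Change node setting msg.requestTimeout works just as well):

```javascript
// Hypothetical Function-node sketch: stamp every outgoing message with a
// 30 s timeout. msg.requestTimeout (in milliseconds) overrides the HTTP
// Request node's configured timeout for that one request.
function applyRequestTimeout(msg) {
    msg.requestTimeout = 30000;  // abort the outbound request after 30 s
    return msg;
}
```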
As for "processing some data", I cannot advise since you don't detail what that is. For example, if you are looping over a large array, you should be doing it using Split and Join nodes; then you could check a context flag that indicates the process should be halted. It is difficult to assist without details.
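A sketch of that context-flag idea, under the assumption that the timeout branch sets a flow-context flag and the per-item worker (e.g. between Split and Join nodes) checks it. The names below are made up; `ctx` stands in for Node-RED's flow context (flow.get / flow.set inside a Function node):

```javascript
// Hypothetical sketch: once the timeout branch sets the 'halted' flag in
// flow context, the per-item worker drops any remaining messages instead
// of continuing to burn CPU on them.
function makeWorker(ctx) {
    return function processItem(msg) {
        if (ctx.get('halted')) {
            return null;          // stop per-item work after the timeout fired
        }
        // ... real per-item processing would go here ...
        return msg;
    };
}
```

In the actual flow, the timer branch would call flow.set('halted', true) before sending the 500 response, and reset it when a new request arrives.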
To be more specific, we need something to kill a specific workflow to handle human error, like a mistake in the flow that leads to infinite processing.
A separate branch with e.g. a timer will return a 500 error in the response, but the infinite loop will continue running and consuming resources.
My last advice is: avoid loops, because they can consume all CPU cycles and totally block the Node.js event loop, and then you cannot fix the Node-RED flow without stopping/starting its process.
As I said in my original post - use split/join nodes (and other techniques) before creating a loop.
TLDR;
If you create a runaway loop that consumes all the CPU cycles and blocks the event loop, it is almost impossible to break out of. However, if the CPU is not fully consumed and the Node.js event loop is not 100% blocked and can still process incoming events, then you can simply edit, fix, and deploy.
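One way to keep a long iteration from blocking the event loop is to process the array in chunks and yield between them with setImmediate. A minimal sketch (the function names are illustrative, not a Node-RED API):

```javascript
// Hypothetical sketch: chunked iteration that yields to the event loop
// between batches, so Node-RED stays responsive and a fixed flow can
// still be deployed even while this work is in flight.
function processInChunks(items, workFn, done, chunkSize = 100) {
    let i = 0;
    function step() {
        const end = Math.min(i + chunkSize, items.length);
        for (; i < end; i++) {
            workFn(items[i]);        // do one item's worth of work
        }
        if (i < items.length) {
            setImmediate(step);      // yield: let other events run first
        } else {
            done();                  // all items processed
        }
    }
    step();
}
```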
For those times you do fall into this, your only option is to stop Node-RED and restart it in safe mode (the --safe command-line option), fix the flow, and deploy.