Handling retry on failure from HTTP nodes

Hi,
I am trying to implement a simple HTTP call with retry on failure. The failure could be a network issue that raises an "exception", or a service failure, where the HTTP call returns a failure status code (say, anything other than 200) without raising an exception.
In both cases I need to retry the HTTP call. I tried implementing the flow given below: I connected the output of the catch node back to the http request node, and I also connected the switch node that evaluates statusCode to decide whether the call succeeded back to the HTTP node (i.e., in case of failure).

What I noticed is that when there is an exception, the message goes to both the catch node and the output of the HTTP node, so the retry is triggered twice - once by the catch node and once by the switch node. As the number of retries grows (say n retries), the number of times the retry is triggered grows as 2^n.
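To see why it grows that fast, the fan-out can be modeled in a few lines of plain JavaScript (this is just an illustration of the wiring described above, not Node-RED code):

```javascript
// Model of the wiring above: on every failure, BOTH the catch node and
// the "200 success?" switch node re-fire the message into the http
// request node, so each failed call fans out into two new calls.
function retriesAfter(nFailures) {
    let inFlight = 1;           // the single inject message
    for (let i = 0; i < nFailures; i++) {
        inFlight *= 2;          // catch + switch each trigger a retry
    }
    return inFlight;            // 2^n messages now hitting the API
}

console.log(retriesAfter(1)); // 2
console.log(retriesAfter(4)); // 16
```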
What is a good way to implement retry on failure without multiplying the calls to the HTTP API, so that on each retry the HTTP API is called just once?

Thank you!

Here is an image of the flow:

Here is the export of the flow:

[{"id":"dc1d0825.16e4e","type":"tab","label":"Retry on Failure","disabled":false,"info":""},{"id":"5da657f4.12c38","type":"inject","z":"dc1d0825.16e4e","name":"","props":[{"p":"payload"},{"p":"topic","vt":"str"}],"repeat":"","crontab":"","once":false,"onceDelay":0.1,"topic":"","payload":"","payloadType":"date","x":260,"y":280,"wires":[["b314a7a6.381208"]]},{"id":"b314a7a6.381208","type":"http request","z":"dc1d0825.16e4e","name":"","method":"GET","ret":"txt","paytoqs":"ignore","url":"localhost:1880/test","tls":"","persist":false,"proxy":"","authType":"","x":470,"y":280,"wires":[["37ee3b77.2e45bc"]]},{"id":"8bc5467e.c95628","type":"debug","z":"dc1d0825.16e4e","name":"HTTP Success - No retry","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"true","targetType":"full","statusVal":"","statusType":"auto","x":860,"y":280,"wires":[]},{"id":"c336f39a.fbeaa","type":"http in","z":"dc1d0825.16e4e","name":"","url":"/test","method":"get","upload":false,"swaggerDoc":"","x":220,"y":80,"wires":[["897842c7.6d77f"]]},{"id":"daed035a.1b44a","type":"debug","z":"dc1d0825.16e4e","name":"HTTP Request that never responds","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"payload","targetType":"msg","statusVal":"","statusType":"auto","x":810,"y":100,"wires":[]},{"id":"c9c1e4a3.7601c","type":"catch","z":"dc1d0825.16e4e","name":"","scope":["b314a7a6.381208"],"uncaught":false,"x":340,"y":360,"wires":[["37a0f966.394266","b314a7a6.381208"]]},{"id":"37a0f966.394266","type":"debug","z":"dc1d0825.16e4e","name":"Exception Caught","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"true","targetType":"full","statusVal":"","statusType":"auto","x":520,"y":400,"wires":[]},{"id":"e9ceef2d.66d188","type":"http response","z":"dc1d0825.16e4e","name":"","statusCode":"500","headers":{},"x":740,"y":60,"wires":[]},{"id":"37ee3b77.2e45bc","type":"switch","z":"dc1d0825.16e4e","name":"200 
success?","property":"statusCode","propertyType":"msg","rules":[{"t":"eq","v":"200","vt":"num"},{"t":"else"}],"checkall":"true","repair":false,"outputs":2,"x":650,"y":340,"wires":[["8bc5467e.c95628"],["daea08ff.f49b28","b314a7a6.381208"]]},{"id":"daea08ff.f49b28","type":"debug","z":"dc1d0825.16e4e","name":"HTTP Failure - retry","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"true","targetType":"full","statusVal":"","statusType":"auto","x":850,"y":360,"wires":[]},{"id":"897842c7.6d77f","type":"random","z":"dc1d0825.16e4e","name":"","low":"1","high":"10","inte":"true","property":"payload","x":400,"y":80,"wires":[["65a19e41.fb0938"]]},{"id":"65a19e41.fb0938","type":"switch","z":"dc1d0825.16e4e","name":"","property":"payload","propertyType":"msg","rules":[{"t":"gte","v":"6","vt":"num"},{"t":"else"}],"checkall":"true","repair":false,"outputs":2,"x":570,"y":80,"wires":[["e9ceef2d.66d188"],["daed035a.1b44a"]]},{"id":"fe6e8332.e70de","type":"comment","z":"dc1d0825.16e4e","name":"External Service","info":"","x":250,"y":40,"wires":[]},{"id":"608df367.b54f4c","type":"comment","z":"dc1d0825.16e4e","name":"My code that calls External Service","info":"","x":320,"y":240,"wires":[]}]

One option would be to put a rate limit node set to 1 msg per second immediately before the http node.

Note, it isn't really a good idea to have the catch node retry the operation it catches, as you could potentially cause a loop that crashes Node-RED.

Thank you. Adding a rate limit node does help avoid multiple triggers. The only challenge is finding the right interval to rate limit to. When the service returns an error code (say 500), it does so in under 1 second, so the retry does not work. I can find a time interval that works for both an exception and an error status and set the limit accordingly.
In the real application, the catch node has a function node after it that keeps track of the "retry count" and stops retrying once a maximum is reached, so that it does not go into an infinite loop.
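For reference, a minimal sketch of that retry-limiting logic (the MAX_RETRIES value and the msg.retryCount property name are assumptions, not from the flow above). It is wrapped in a plain function here so it can run outside Node-RED; inside a function node only the body would be needed:

```javascript
// Sketch of the retry-limiting function node that sits between the catch
// node and the http request node. MAX_RETRIES and the property name are
// illustrative assumptions.
const MAX_RETRIES = 3;

function handleFailure(msg) {
    msg.retryCount = (msg.retryCount || 0) + 1;
    if (msg.retryCount > MAX_RETRIES) {
        // In a real function node you might also call node.warn() here.
        return null;            // returning null drops the message: no retry
    }
    return msg;                 // pass back to the http request node
}

// Simulate the same message failing repeatedly:
let out = { payload: "GET localhost:1880/test" };
for (let i = 0; i < 4 && out !== null; i++) {
    out = handleFailure(out);
}
console.log(out); // null - retries exhausted after the 4th failure
```

Returning `null` from a function node stops the message, which is what breaks the loop once the limit is hit.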

That can be solved with an additional 1-second delay node on the function's retry output and another on the catch node's output (both delays feeding into the rate limit node).

It's not pretty, but it will perform the retry, limit the rate, and prevent duplicates.
