I have daily instabilities with HTTP requests in a Node-RED Docker container running in Azure, so much so that I wrapped the http request node in a retry subflow. This usually worked; however, after I made a lot of changes, I now get ECONNRESET as an exception, which bypasses the entire retry logic, because I filter on statusCode in the output msg and trigger the retry from there.
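For context, the retry filter looks roughly like this (a hypothetical sketch, not my exact subflow; `shouldRetry`, `retryCount`, and the retryable status codes are assumptions). The point is that it only ever sees messages that arrive on the http request node's output with a statusCode set; a thrown exception like ECONNRESET never produces such a message, so it bypasses the check entirely:

```javascript
// Sketch of a statusCode-based retry filter inside a function node.
// It decides whether the wired-up http request node should be tried again.
function shouldRetry(msg, maxRetries = 3) {
    const attempts = msg.retryCount || 0;
    // Only server errors and rate limiting are considered transient here;
    // transport-level failures never set statusCode, so they slip past.
    const retryable = msg.statusCode >= 500 || msg.statusCode === 429;
    return retryable && attempts < maxRetries;
}
```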
The changes included restructuring the subflow and upgrading to NR v4.0.9, so I'm unsure what exactly changed the behavior. This is an excerpt of the msg (logged to a file for convenience):
"error": {
    "message": "RequestError: read ECONNRESET",
    "source": {
        "id": "733cb730917372cc-04d357fffaef5001-c0c495ea6350d5ae-8ffc9f12f7f0913a-2d874e4ac27c28f3",
        "type": "http request",
        "name": "http request",
        "count": 1
    },
    "stack": "RequestError: read ECONNRESET\n at ClientRequest.<anonymous> (file:///usr/src/node-red/node_modules/got/dist/source/core/index.js:790:107)\n at Object.onceWrapper (node:events:639:26)\n at ClientRequest.emit (node:events:536:35)\n at emitErrorEvent (node:_http_client:104:11)\n at TLSSocket.socketErrorListener (node:_http_client:518:5)\n at TLSSocket.emit (node:events:524:28)\n at emitErrorNT (node:internal/streams/destroy:170:8)\n at emitErrorCloseNT (node:internal/streams/destroy:129:3)\n at process.processTicksAndRejections (node:internal/process/task_queues:90:21)\n at TLSWrap.onStreamRead (node:internal/stream_base_commons:216:20)"
}
Has the http request node changed behavior? I'm not sure how I can trigger this problem locally (I have never gotten this error outside Azure). Perhaps I should add a Catch node listening to the http request node?
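If the Catch-node route works, one way to feed the caught error back into the existing statusCode-based retry path might be a small function node after the Catch node that maps transport-level errors onto a synthetic statusCode. This is only a sketch under my assumptions (the error-code list and the 599 marker are my own choices; a Catch node does populate `msg.error.message`):

```javascript
// Fed by a Catch node scoped to the http request node. Transport-level
// failures (ECONNRESET etc.) never reach the node's normal output, so
// this maps them onto a synthetic statusCode the retry filter understands.
function normalizeCaughtError(msg) {
    const text = (msg.error && msg.error.message) || "";
    // got raises RequestError for socket-level failures; none of these
    // carry an HTTP status, so we invent one.
    const transient = /ECONNRESET|ETIMEDOUT|ECONNREFUSED|EAI_AGAIN/.test(text);
    msg.statusCode = transient ? 599 : 500; // 599: non-standard "network error" marker
    return msg;
}
```

The retry filter would then treat 599 (or any >= 500) like any other retryable failure, so both exception and bad-status paths converge on the same loop.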