This might sound like a stupid question but there you go!
This is the scenario. I am requesting a list of files from OneDrive and then sending this list (about 1000 files) through to a split node. A download of each file is then requested.
This request is made with fetch via the M365 JavaScript SDK.
The SDK has a getStream method that first gets the download URL and then requests the file as a binary stream.
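In essence the download step looks something like this - a simplified sketch where client is an already-authenticated Graph client and itemId is just a placeholder:

```js
// Simplified sketch: 'client' is an already-authenticated
// @microsoft/microsoft-graph-client instance; itemId is a placeholder
async function downloadFile(client, itemId) {
    // getStream() resolves the download URL and returns the file
    // content as a binary stream
    return client.api(`/me/drive/items/${itemId}/content`).getStream();
}
```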
What happens occasionally with this request is that I get a 'fetch failed' error for multiple downloads. Sometimes fetch fails even when a relatively small number of downloads is requested, for example 500.
I have monitored Node-RED with Prometheus and have found that there is no noticeable increase in event loop lag and no excessive CPU usage.
As far as I understand, 'fetch failed' means no response was received - a failed request. So another question is: does the problem relate to the number of fetch requests being made?
Is the node being swamped by that many requests, so that it is just getting some kind of push-back from fetch? If that is the case, what is the best diagnostic tool to get some qualitative data?
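One extra data point: assuming this is the built-in fetch of Node 18+ (undici), the 'fetch failed' TypeError carries the underlying network error in err.cause, so I have started logging that. The tryFetch wrapper below is just an illustration:

```js
// Illustrative wrapper: undici reports low-level failures as
// TypeError('fetch failed') with the real network error
// (ECONNRESET, ETIMEDOUT, etc.) attached as err.cause
async function tryFetch(url) {
    try {
        return await fetch(url);
    } catch (err) {
        console.error('fetch failed:', err.cause?.code ?? err.cause ?? err.message);
        throw err;
    }
}
```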
As you can see, I am possibly confused/stuck. Any help would be greatly appreciated!
Off the top of my head, that sounds more like a networking issue than anything else. I would say that requesting 500 files for download is NOT small though. It could be that you are triggering some kind of data loss protection in your Office 365 tenancy. Another thing that might have an impact: if others have access to the same files, you could get a lock clash.
Personally, I would split the work into much smaller batches of requests, keep track of any that fail, and retry them once the rest of the requests have completed.
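Something along these lines - an untested sketch, where downloadFile() is a placeholder for whatever your SDK/fetch call actually is:

```js
// Untested sketch: process downloads in small batches, collect the failures,
// then retry them once everything else has completed.
// downloadFile() is a placeholder for the real SDK/fetch call.
async function downloadAll(files, batchSize = 20) {
    const failed = [];
    for (let i = 0; i < files.length; i += batchSize) {
        const batch = files.slice(i, i + batchSize);
        const results = await Promise.allSettled(batch.map((f) => downloadFile(f)));
        results.forEach((r, idx) => {
            if (r.status === 'rejected') failed.push(batch[idx]);
        });
    }
    // Single retry pass at the end; anything still failing gets reported
    for (const f of failed) {
        try {
            await downloadFile(f);
        } catch (err) {
            console.error(`giving up on ${f}:`, err.message);
        }
    }
}
```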
Absolutely. You have full access to Node.js in the runtime code of your node - assuming it is a custom node.
If you mean a function node, you would need to load the 'node:worker_threads' module in the Setup tab of the function node. I think that will work but I've not tried it. Bear in mind that the code in a function node runs in a Node.js VM sandbox, so it is possible that not everything will work.
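If it does work, I'd expect it to look roughly like this - completely untested, and it assumes you've added 'node:worker_threads' in the Setup tab so it is available as worker_threads:

```js
// Untested sketch: assumes 'node:worker_threads' was added in the function
// node's Setup tab under the variable name worker_threads
const worker = new worker_threads.Worker(
    `const { parentPort } = require('node:worker_threads');
     parentPort.on('message', (msg) => {
         // ...do the heavy work off the main thread here...
         parentPort.postMessage(msg);
     });`,
    { eval: true } // pass the worker code as a string rather than a file
);
worker.on('message', (result) => node.send({ payload: result }));
context.set('worker', worker); // keep a reference for later messages/cleanup
```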
If that fails for function nodes, you could create your own Node.js module and require it into the function global context in settings.js (or you could possibly write your own plugin). Your module would then take care of the worker thread.
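For the settings.js route, the usual pattern is functionGlobalContext - the module name and path here are just examples:

```js
// In settings.js: expose your own module to every function node
functionGlobalContext: {
    // hypothetical module that wraps the worker thread handling
    diskWorker: require('/home/nodered/lib/disk-worker.js')
},
```

A function node can then pick it up with `const worker = global.get('diskWorker');`.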
Honestly though, I don't believe that this is the issue: if you are using fetch, it is already async and so does not block the main loop.
That is what I think. The requests are not decorated, so I figure it's triggering some kind of firewall. I am asking support via the Azure portal, so hopefully their network diagnostics will show what is happening.
As for breaking the messages down with a queue, or a delay node with a timeout: that makes complete sense.
Thanks for the clarification on workers. It is something I might research - not for fetch, more for write-to-disk operations.
For the most part, file I/O is also async, so you are mostly good there as well. Though that can occasionally catch you out with things happening out of order.
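A contrived illustration of the difference - files here is just a placeholder array of { name, data } objects:

```js
const fs = require('node:fs/promises');

// Fire-and-forget: all writes start at once and can complete in any order
async function saveAll(files) {
    for (const f of files) {
        fs.writeFile(f.name, f.data).catch((err) => console.error(err));
    }
}

// Awaiting each write guarantees ordering, at some cost in throughput
async function saveAllInOrder(files) {
    for (const f of files) {
        await fs.writeFile(f.name, f.data);
    }
}
```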
Node.js is remarkably good at not blocking its own main loop, which is why Node-RED can often handle a surprisingly heavy workload.
Ah cool, one thing I notice about write to disk is that it rapidly increases the amount of minor garbage collection. However, major GC seems to take a while to kick in. In more normal circumstances, an increase in minor GC seems to result in an immediate increase in major GC.
I have noticed the order switching; I guess this is to do with async.
GC is a black art for sure! The detail is mind-bending. I think that Node.js v22 contains some GC improvements, but trying to predict how it will work in practice seems to be rather more effort than it is worth.
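If you ever want to watch it directly, Node's perf_hooks can report individual GC passes - a quick sketch that just distinguishes minor from major collections:

```js
const { PerformanceObserver, constants } = require('node:perf_hooks');

// Logs each GC pass with its type and duration (Node 16+, where the GC
// info lives on entry.detail)
const obs = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
        const kind =
            entry.detail.kind === constants.NODE_PERFORMANCE_GC_MINOR ? 'minor'
            : entry.detail.kind === constants.NODE_PERFORMANCE_GC_MAJOR ? 'major'
            : 'other';
        console.log(`GC ${kind}: ${entry.duration.toFixed(2)}ms`);
    }
});
obs.observe({ entryTypes: ['gc'] });
```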