Hi Everyone,
My team is working on some flows for backing up files to AWS S3 buckets; the files will then be moved from S3 to Glacier. The files themselves come from multiple locations (a local HTTP server, Dropbox, and a LAN folder). Some of them are video files that are too large to be loaded into memory, and we'd like to avoid copying the files locally.
Is it possible to pipe a FileStream between nodes? I've looked at several nodes (the multipart and ffmpeg nodes, etc.), but none of them seemed to offer anything of this sort; the only thing I was able to find was the Big* nodes by Jacques44 on GitHub.
Thanks for any help!
Unfortunately, the ability to use streams needs to be baked into a node by its author, and that doesn't happen often. As you say, Jacques created some nodes that certainly do use streams, and I think some of the core nodes do as well.
In this instance, you might look at using Node-RED as an orchestrator for running external commands?
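If it helps, here's a minimal sketch of that idea in plain node.js, assuming the AWS CLI is installed: the CLI handles multipart streaming itself, so the file never has to fit in memory. The file path and bucket name below are made up, and in a flow you'd normally wire this up with the core exec node rather than writing code.

```javascript
// Sketch: hand the upload off to the AWS CLI, which streams the file itself.
// '/mnt/share/video.mp4' and 'my-backup-bucket' are hypothetical.
const { spawn } = require('child_process');

const upload = spawn('aws', [
  's3', 'cp', '/mnt/share/video.mp4', 's3://my-backup-bucket/video.mp4'
]);

upload.stderr.on('data', (d) => process.stderr.write(d));
upload.on('close', (code) => console.log(`aws s3 cp exited with code ${code}`));
```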
Of course, we would all welcome a new node that did use streams to copy files.
The File In and File Out nodes are two of the core nodes that do support streams.
You are talking about a different type of stream.
The question here is about FS streams in the node.js sense.
The File In and File Out nodes support streams of messages in the Node-RED sense: sequences of messages whose msg.parts property identifies each one's place in the sequence.
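To make that concrete, here's roughly the shape of a single message in such a sequence (illustrative values only; the exact fields and chunk size depend on the node and its configuration):

```javascript
const chunk = Buffer.alloc(65536); // stands in for one chunk read from the file

const msg = {
    payload: chunk,   // one piece of the file, as a Buffer
    parts: {
        id: "a1b2c3", // the same id on every message belonging to one file
        index: 3,     // this chunk's position in the sequence
        count: 42     // total chunks; when streaming, this may only be
                      // known once the end of the file is reached
    }
};
```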
No, they were rewritten to use input streams in order to handle video files as chunks.
Right, but that is subtly different to what you can do with streams in node.js.
Yes, you can pipe one stream to another as a sequence of messages between nodes, but you lose the ability to pause and apply backpressure to the stream object itself. If that isn't a concern, then yes, the file nodes are an example.
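For comparison, this is a minimal sketch of the node.js-level piping being described here, assuming the AWS SDK for JavaScript v2: s3.upload() accepts a Readable as its Body and manages multipart uploads and backpressure itself. The bucket and file path are made up.

```javascript
const fs = require('fs');
const AWS = require('aws-sdk');

const s3 = new AWS.S3();

// The SDK pulls from this stream as the upload proceeds, so the whole
// file is never held in memory and backpressure is applied for you.
const body = fs.createReadStream('/mnt/share/video.mp4');

s3.upload({ Bucket: 'my-backup-bucket', Key: 'backups/video.mp4', Body: body })
    .promise()
    .then((data) => console.log('Uploaded to', data.Location))
    .catch((err) => console.error('Upload failed:', err));
```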
Thanks for the info!
The process I'm working with can live with the restriction of having the file stream converted to a stream of buffer messages. I'll take a look at the file nodes and see how they do it.
Is a stream of buffers the way this should be handled going forward? Would it make sense to enhance the Amazon S3 nodes to work the way the file nodes do?