I have experimented with FFmpeg in Node-RED in the past, but now I'm wondering whether I was doing it all wrong. I would love to hear your opinions about it ...
I think currently there are (at least) 3 ways to integrate Ffmpeg into Node-RED, each with their own advantages and disadvantages. It all depends on how Node-RED communicates with FFmpeg.
File-based approach:
Most ffmpeg nodes that currently exist communicate with FFmpeg via files.
Some node (e.g. file-out node) writes a file in the physical filesystem.
A trigger is passed to an Exec/Daemon/Ffmpeg... node.
That node starts a separate Ffmpeg process, and passes command line parameters to it (e.g. containing which input and output files need to be used).
The Ffmpeg process reads the input file(s) from the physical filesystem.
The Ffmpeg process does all processing.
The Ffmpeg process writes the output file(s).
The next node in the flow is triggered.
That node (e.g. file-in node) reads the output file(s).
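To make this concrete, the steps above boil down to a single file-based command that the Exec node runs. A minimal sketch (the paths and the jpg-to-png conversion are illustrative examples, not taken from any particular node):

```javascript
// Sketch of the file-based approach (hypothetical paths; assumes ffmpeg is
// installed on the host). A file-out node has written /tmp/input.jpg, the
// Exec node runs the command below, and a file-in node reads /tmp/output.png.
function buildFileCommand(infile, outfile) {
  // ffmpeg infers the input/output formats from the file extensions,
  // so no "-f" parameters are needed here.
  return ["ffmpeg", "-y", "-i", infile, outfile].join(" ");
}

console.log(buildFileCommand("/tmp/input.jpg", "/tmp/output.png"));
// -> ffmpeg -y -i /tmp/input.jpg /tmp/output.png
```

Because the formats are inferred from the file extensions, FFmpeg commands found on the web work unchanged in this setup.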
Pipe-based approach:
Since the file-based method results in a lot of disk I/O (which might be slow, or wear out an SSD card), I tried hard last year in several discussions to implement a memory-only solution based on pipes. Although FFmpeg is normally file-based, it also supports input via the stdin pipe and output via the stdout pipe:
Some node sends a message (containing the ffmpeg input data) to an Exec or Daemon node.
The Exec/Daemon node starts a separate FFmpeg process and passes command line parameters to it. In the command you need to specify the input/output pipes! And you need to specify the formats explicitly via the "-f" parameter, because FFmpeg normally looks at the file extensions to determine the formats...
The Exec/Daemon node sends the input data (from the message) via the stdin pipe to the Ffmpeg process.
The Ffmpeg process does all processing.
The Ffmpeg process writes the output data to the stdout pipe back to the Daemon/Exec node.
A message (containing the output data) is sent to the next node in the flow.
While this works rather well, it also has some severe limitations:
Most Ffmpeg commands that you find on the web will use input/output files, which you will need to adapt very heavily to support pipes.
Not all FFmpeg functionality supports pipes. E.g. you cannot create an mp4 file, because the mp4 muxer needs to seek back and fill in header information when the file is closed.
There is a delay of one image.
...
Wasm based approach
The pipe-based solution had a lot of weaknesses, and it might be a challenge to get ffmpeg installed. Therefore I experimented last week with a WebAssembly (Wasm) version of FFmpeg: see node-red-contrib-ffmpeg-wasm. In this case FFmpeg has been converted first to JavaScript and then to WebAssembly (for performance), which means you can install it automatically as a node dependency.
This node uses an in-memory filesystem:
Some node sends a message (containing the ffmpeg input data) to the Ffmpeg-wasm node.
The Ffmpeg-wasm node writes a virtual input file in the in-memory filesystem.
The Ffmpeg-wasm node starts the Wasm version of FFmpeg in a separate Node.js worker.
The Ffmpeg process reads the input file(s) from the in-memory filesystem.
The Ffmpeg process does all processing.
The Ffmpeg process writes the output file(s).
The Ffmpeg-wasm node reads the virtual output file from the in-memory filesystem.
A message (containing the output data) is sent to the next node in the flow.
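The steps above can be sketched as follows. Note this is only an illustration: the `ffmpeg` object stands for a loaded Wasm instance with an API resembling the @ffmpeg/ffmpeg package (`FS`/`run`), not the actual internals of node-red-contrib-ffmpeg-wasm:

```javascript
// Sketch of the in-memory filesystem round trip (hypothetical API and file
// names). The command passed to run() looks exactly like an ordinary
// file-based ffmpeg command; only the "files" live in memory.
async function wasmConvert(ffmpeg, inputBuffer) {
  ffmpeg.FS("writeFile", "input.jpg", inputBuffer);   // virtual input file
  await ffmpeg.run("-i", "input.jpg", "output.png");  // normal file-style command
  return ffmpeg.FS("readFile", "output.png");         // virtual output file
}
```

This is why examples from the web work unchanged: the command line is file-based, even though no physical disk is touched.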
This has some advantages compared to pipes:
You can use normal Ffmpeg commands from the web, which work with input/output files.
There are no delays.
Ffmpeg is installed automatically.
But there are some disadvantages compared to the physical file-based approach:
It permanently uses 25% of the memory on my RPi 3 model B (to load the Wasm).
Since all files are also loaded in memory, you might end up with memory problems (e.g. when converting N images to an mp4 video).
So the Wasm-based approach seems better to me than the pipe-based approach. But it seems to me that the file-based approach offers more possibilities (since it uses much less RAM). On the other hand you need a decent physical filesystem, so the SSD card of a Raspberry wouldn't last long...
Did you also notice what cpu load it was for the various approaches?
It sounds to me the wasm would be the best. You say 25% on a RPi3 model B. If it's always 25% of what is available, I would say there is a problem, otherwise no problem, just require a new RPi4 with 4G for a video solution on a Pi
SSD card? You mean SD card I assume? Otherwise, if you would boot the Pi from USB using an SSD disk, I think the file based approach would be ok as well, the SSD disk will last, it's built for extensive read/write operations
I think it would depend on what you actually want to achieve. For example, if you want to use ffmpeg to convert a real file on disk from one format to another, then I think the file-based approach would be the one to use. I cannot see any disadvantages to it, other than needing to install ffmpeg, but most distributions have a working version of ffmpeg so I would not normally expect that to be a problem.
I think one cannot make a general decision on which solution is the best.
What use case in particular are you thinking of?
Personally I don't think I would do that with ffmpeg, and I wouldn't do it within Node-RED. I would set up VLC to restream the incoming rtsp stream (if that is what we are talking about) as a stream that can be interpreted directly by the browser, such as mpeg, and display that.
No I don't have a native ffmpeg running at the moment. Failed recently to install it, which was the main reason I switched to the wasm version ...
Yes, it requires some memory to load the Wasm version of ffmpeg. That memory stays occupied until you stop the Wasm worker.
Oops, yes I mean SD card.
Yes, that is perhaps something I should do in the future..
I have no particular use case. I would like to have a solution for all kinds of use cases: image conversion, audio conversion, creating mp4 videos from images, decoding rtsp streams, ...
Yes that is an alternative approach. But in this discussion I would only like to focus on ffmpeg, and find a setup that allows this kind of functionality also ...
The problem with the Wasm approach might be out-of-memory issues. E.g. when I want to create an mp4 file from a series of images (from my IP cam), both the images and the mp4 will be loaded into the in-memory filesystem. If you do that for multiple cameras, I think it would blow up your RAM. Therefore I'm now thinking that the file-based approach might be the best way to go. But indeed I should then have an SSD disk or something similar ...
Hello, in the end what I have used is the first option
I use ffmpeg in the exec node and I also run (to keep things simple) the file websocket-relay.js from the jsmpeg library: https://github.com/phoboslab/jsmpeg
However, as the live-streaming process did not stop, I ended up killing the PID of each task directly (using the status node to point to the exec node).
At the moment I cannot share the flow, for work reasons, but I can confirm that I have achieved a live broadcast via the web with Node-RED, and recorded these videos in 1080p using ffmpeg directly in the exec node (I also use the ping node to reconnect).
I only have two problems: I can only record for a predetermined time (otherwise it breaks the file), and I have pretty messy code too XD
@BartButenaers
I think if you would consider a small SSD disk like 128G to boot from, you could set up a huge swapfile; you would get a lot of extra memory for doing such stuff with acceptable performance. If I'm correctly informed, even the USB 2.0 on the RPi3 is faster than the SD card, so an SSD disk would not be a bad choice. Of course it would be very, very much faster on an RPi4.
However, on an RPi4 you still need an SD card for booting. People have long been waiting for an update so they can use just a USB device alone. With the RPi3, only a USB device is needed. I have a 32G USB stick in one of my RPi3's that it boots from; works great. Later I will upgrade to an SSD and an RPi4 when the above-mentioned issue is solved.
To record, I directly use the command: ffmpeg -i "rtsp://user:pass@mycamera/etc" -t 10 -c copy C:/Users/user/Desktop/1585572692116.mp4 -y
I specify a time: -t 10
If I don't specify this, I have to stop recording somehow.
In CMD I would do ctrl + c to stop recording, and the process would finish correctly
In node-red I have tried msg.kill with SIGINT, SIGKILL, SIGTERM ... but the file becomes impossible to open.
I believe that is because you are creating an mp4 file, which has information written at the start of the file that is only filled in when the file is closed. If you write it as a .ts file it should be ok, as that format is designed for streaming. Though depending on what you want to do with it, you might have to feed it through ffmpeg again to make the mp4 file. The command I use for this is effectively ffmpeg -rtsp_transport tcp -i rtsp://user:pass@etc -c copy filename.ts
VLC can play .ts files directly, some players can't
A straight conversion to mp4 with -c copy should use very little CPU; it doesn't have to interpret the pictures, just repackage them, at least I think that is the case. The bulk of the processing when you are reading rtsp is decompressing the stream.
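Putting this record-then-remux recipe together, a sketch of the two argument lists (based on the commands quoted above; file names are illustrative):

```javascript
// Sketch of the record-then-remux recipe (illustrative; assumes ffmpeg is
// installed). Step 1 records the rtsp stream to a .ts file, which survives
// being stopped mid-write; step 2 repackages it into mp4 without
// re-encoding, so it uses very little CPU.
function recordArgs(rtspUrl, tsFile) {
  return ["-rtsp_transport", "tcp", "-i", rtspUrl, "-c", "copy", tsFile];
}

function remuxArgs(tsFile, mp4File) {
  // "-c copy" copies the streams as-is; only the container changes.
  return ["-i", tsFile, "-c", "copy", mp4File];
}
```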
Thanks a lot guys for your valuable input !!!!!!!!!!
Just wondering. Suppose I have to do separate conversions (so I am not talking now about continuous streaming), for example converting audio chunks from mp3 to wav. This means I have to call ffmpeg (via an exec or daemon node) N times per minute. Do you need to spawn a new ffmpeg process for every chunk, or can you reuse the same process to handle all the chunks? Just asking because it seems like a lot of performance overhead to me, to spawn and kill those ffmpeg processes for every chunk ...
Yes I think you are right. Some questions about this:
Would you propose using an RPi 4 for this, or perhaps another device? Because you have discussed other hardware boards on this forum in the past ...
About the huge swapfile. Is that also required if I go for the file based approach?
Can you recommend a tutorial to set up such a USB SSD connection on an RPi?
FYI: that was for example one of the reasons why this doesn't work with the pipe-based approach. FFmpeg cannot fill in those fields afterwards, when that data has already passed through the stdout pipe ...
No, not in my experience. A 2GB Pi 4 should be more than adequate. Depending on what else you are doing of course.
Converting audio should be trivial compared to video so converting from mp3 to wav (why?) will be something a Pi 4 could do in the background whilst determining the Answer to the Question as its day job.
You could actually start with your RPi3 already and later upgrade to an RPi4 to get the huge I/O improvement due to USB 3.0. I would also select the 4G version to get more memory. It's true, I have tried a couple of other boards, but as long as more powerful GPUs are not needed, I do believe an RPi4 is a good choice, unless you can already see that the CPUs are overloaded when you run your tests with ffmpeg.
Maybe not, it depends how much memory you need for the processing of the various approaches, a swap file is basically an extension of your memory but on disk. So here a RPi4 with USB 3.0 and a SSD disk will make a huge improvement in performance since the I/O is so much faster than in the RPi3
I configured using sda1 and sda2; I did not use the UUIDs. I plan to do that later when/if I upgrade to an RPi with an SSD disk. Until then, I will let my RPi3 run as is with the USB stick; it will also be a good test to see how it works.
If you decide to try, I'm more than willing to assist
I read somewhere in the past that a USB stick cannot handle frequent writes, so I always thought that this wasn't an option. But it seems that information wasn't correct...