[beta testing] nodes for live streaming mp4

Yessss! Try this nice-quality Red Bull TV stream: http://rbmn-live.akamaized.net/hls/live/590964/BoRB-AT/master_1660.m3u8

And more here for English-language streams: https://www.astra2sat.com/streaming/iptv/free-iptv-english/
See the IPTV tab for more content.

Enjoy ! :wink:



Yes, and with unlimited cash in your pocket


You can put in any unique string (I used "norway" for the test; I imagine you could put in each camera's name, without any spaces I guess)

What does your complete ffmpeg command line look like for that?

I use ui_mp4frag like this: copy and paste the m3u8 URL into the text input node, and the streaming begins automatically.


As simple as that. I hadn't thought about it; great that it works.

This will be part of the mp4frag node. I have given it a little thought, but haven't quite figured out how to make the internally buffered mp4 fragments available in node-red. My lib has the api to get the buffered segments, but how to use that in a flow? I was thinking that after mp4frag initializes, it could pass a reference to the internal buffer to make it available to create mp4 videos whenever it is called upon. Or make it available via context and hand the duty to another node for writing/saving mp4 video?

The api that I have to wrap already has some ways of grabbing the buffered mp4 content. For example, calling mp4frag.buffer will return the initialization fragment and all of the buffered segments as a single buffer, which is a complete mp4 video. Since I keep track of the duration of each segment, I will be adding a way to grab x number of seconds of video. Let's suppose that you fill the buffered list with segments that are 2 seconds in duration each. With the proposed api, we should be able to specify the total duration wanted, and the api will return those chunks, which you can write to file as a complete mp4 video. There are more options, but until I figure out how to make it work in a node-red flow, very little progress can be made.
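The duration-based selection described above could be sketched roughly like this (illustrative JavaScript only; the function and field names here are made up, not the real mp4frag API):

```javascript
// Sketch: given a list of buffered segments with known durations,
// pick the most recent ones whose combined duration covers the
// requested number of seconds, then prepend the initialization
// fragment so the result is a playable mp4.
function getVideoByDuration(initFragment, segments, wantedSeconds) {
  const picked = [];
  let total = 0;
  // walk backwards from the newest segment
  for (let i = segments.length - 1; i >= 0 && total < wantedSeconds; i--) {
    picked.unshift(segments[i].buffer);
    total += segments[i].duration;
  }
  return Buffer.concat([initFragment, ...picked]);
}

// Example with fake 2-second segments
const init = Buffer.from('init');
const segs = [
  { buffer: Buffer.from('a'), duration: 2 },
  { buffer: Buffer.from('b'), duration: 2 },
  { buffer: Buffer.from('c'), duration: 2 },
];
const video = getVideoByDuration(init, segs, 4); // picks 'b' and 'c'
```

Asking for more seconds than are buffered would simply return everything that is held in memory.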


Is this for real??? Only 10-12% CPU load on an old RPi3, and such good quality!

Here is a link to a movie file demonstrating a small test with the dashboard:


Ok then we are on-topic :wink:

It is already VERY positive that you ask yourself: "how to do this the Node-RED way ...". Because it would be very simple to add a bit of functionality to your node, to write mp4 files on disc. But then it becomes difficult/slow to let other nodes process the output of your node, which is the real power of Node-RED...

When talking about passing references via messages to other nodes, you have to be aware of a couple of things:

  1. pluggable message routing has become a reality in the new Node-RED 1.2. This allows Nick a.o. to develop a distributed Node-RED core in the (near??) future. Which means that a 'group' of nodes in your flow might be running on another host. Those nodes cannot access that object via that reference!

  2. When the output of your node is wired to N other nodes, your original output message will be sent across the first wire, but on the N-1 other wires you will get a clone of the original message. So you will get N message instances all referring to the same mp4 fragment. If those nodes start manipulating the fragment, you will get some very odd results, because each node assumes that it is working on a clone of the original mp4 fragment...

So IMHO you should send the mp4 fragment itself inside the output message to the next nodes, instead of only a reference! Then your fragment can e.g. be processed by nodes running on a remote machine...

Although this approach can have some disadvantages:

  1. When the user starts using multiple wires on an output, the mp4 fragment will be cloned, which results in a performance drop. However, IMHO that is up to the user. If you explain it on the readme page, he can decide for himself whether he wants to use a long chain of nodes (with only 1-to-1 connections) or to start creating 1-to-N wirings... That is the reason I added outputs to some of my nodes, which pass the input message to the output (to allow chaining). For example my node-red-contrib-msg-speed node:


  2. I don't know whether it is a problem if Node-RED starts cloning your mp4 fragment. If it is just a buffer of bytes, then there is no problem. But I know that e.g. OpenCv objects have their own clone method. I don't know what the object looks like in your case? Currently custom cloning is not supported in Node-RED. I did a proposal some time ago, but haven't discussed it further...

Perhaps now I'm going really off-topic. But some time ago @hotNipi created a very nice ui widget (node-red-contrib-ui-state-trail), and he was kind enough to implement a series of my feature requests, because I wanted to use it for showing the time intervals for which recorded video footage was available. I see that somebody else has already implemented that on their dashboard: see here.


Don't know if your API allows that kind of thing too? I mean that you send an input message with some search criteria (e.g. from_time and to_time), and your node sends all the info of the recordings in the output message.
But perhaps I'm not interpreting your API correctly?


6 posts were split to a new topic: Adding persistence to ui-trail node

After a second large cup of coffee, I think I know how this can work.

First of all, I would like to avoid copying buffers if possible. Just imagine a little pi running 20 ip cams, each one holding its own buffer of segments, and then those segments getting copied for other processes. I can see it going bad very quickly. Because this is meant for live streaming, the buffered segments get garbage collected after my internal list is exceeded, allowing the oldest segment to be removed (currently max 15 segments).

After typing this, I realize I should not just limit things by the number of segments, but also offer the option to limit by total duration of segments or total buffer size, giving the user more control over how much memory gets used. This has to be built out in my other lib mp4frag that is used by node-red-contrib-mp4frag.
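A trimming routine along those lines might look roughly like this (a hypothetical sketch, not the actual mp4frag internals; the option names are invented for illustration):

```javascript
// Hypothetical sketch: drop the oldest buffered segments until the
// list satisfies all three limits (count, total duration, total bytes).
function trimSegments(segments, { maxCount, maxDuration, maxBytes }) {
  let duration = segments.reduce((sum, s) => sum + s.duration, 0);
  let bytes = segments.reduce((sum, s) => sum + s.buffer.length, 0);
  while (
    segments.length > 0 &&
    (segments.length > maxCount || duration > maxDuration || bytes > maxBytes)
  ) {
    const oldest = segments.shift(); // drop the oldest segment first
    duration -= oldest.duration;
    bytes -= oldest.buffer.length;
  }
  return segments;
}

// Five 2-second segments of 1 byte each, limited to 6 seconds total:
const trimmed = trimSegments(
  [1, 2, 3, 4, 5].map((n) => ({ duration: 2, buffer: Buffer.from([n]) })),
  { maxCount: 15, maxDuration: 6, maxBytes: 1000 }
);
// trimmed now holds only the newest 3 segments (6 seconds)
```

Enforcing all three limits at once means whichever one is hit first wins, which keeps worst-case memory use predictable even when segment durations vary.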

Once the buffer data leaves my node, if the user wants to make copies, feel free to do so. But I have to ask for clarification: are we 100% sure that buffer data gets copied? I thought I read somewhere that in node-red only primitives are copied, but objects are kept as references, at least when getting passed between nodes that run on the server side. Maybe I dreamed this, can't remember.

So, getting to the point of making mp4 videos based on some sort of triggering event, such as motion detection or pressing a panic button or some timer event to record at midnight to catch the ghosts haunting your house, I think I have a node-red way of making it work for node-red-contrib-mp4frag.

Currently, it is tightly coupled to the output of exec running ffmpeg. On input, I check for the payload being a buffer, then pass that to the internal lib mp4frag for processing. If the input payload has a code or signal, this will be an indication that the ffmpeg process has stopped and I use that to clear my internal cache of buffered segments, etc.

There will be 2 outputs, one for status data, such as where the http route is for the playlist, or where/how to connect to the socket server, etc. The 2nd output will be dedicated to output the buffered mp4 pieces.

I will add a 3rd thing to check on input, and that will be a command that instructs node-red-contrib-mp4frag to output an mp4 buffer (to the 2nd output), with various options. For example, you may give it a payload structured as such to its input:

command: 'output_video',
output_type: 'buffer', // or 'object' if you want to include extra data {buffer, duration, segment count, etc.}
past_duration: '5s', // include past segments that are held in memory
total_duration: '30s', // keep outputting the segments until total_duration is satisfied

The other thing to consider is that another node may send multiple command messages to this node before it can finish outputting from the previous command. So we will either have to cancel the previous command, or extend it by increasing the total duration due to continuous motion detection events. And of course, it will also be able to handle a cancel command, so that it can stop outputting video at the push of a button.
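The extend-or-cancel behaviour could be modelled something like this (purely illustrative; none of these names exist in node-red-contrib-mp4frag):

```javascript
// Illustrative sketch: track a recording end time so that repeated
// 'output_video' commands extend an in-progress recording instead of
// restarting it, and a 'cancel' command stops it immediately.
class RecordingState {
  constructor() {
    this.endTime = 0; // timestamp (seconds) when output should stop
  }
  handleCommand(cmd, now) {
    if (cmd.command === 'cancel') {
      this.endTime = 0;
      return 'cancelled';
    }
    const requestedEnd = now + cmd.totalDuration;
    if (now < this.endTime) {
      // already outputting: push the deadline out rather than restart
      this.endTime = Math.max(this.endTime, requestedEnd);
      return 'extended';
    }
    this.endTime = requestedEnd;
    return 'started';
  }
  isRecording(now) {
    return now < this.endTime;
  }
}

const state = new RecordingState();
state.handleCommand({ command: 'output_video', totalDuration: 30 }, 0);  // 'started', stops at t=30
state.handleCommand({ command: 'output_video', totalDuration: 30 }, 10); // 'extended', stops at t=40
```

With this shape, a stream of motion-detection events naturally keeps one continuous recording alive instead of producing many overlapping clips.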

For the actual writing of mp4 videos to disk and keeping track of it in some database, that will have to be done in some other node that can receive it from this node.

That all depends on whether you are playing remotely hosted HLS videos only using ui_mp4frag, or processing them on the server side with ffmpeg and mp4frag. I hope that load figure is for when using the server side stuff.

Yes, I think so, my flow is like this now, all working fantastic

You are processing 5 videos on the server side and things are running cool. That is great news.


1- I have a first comparison to make between the pipe2jpeg node and this new mp4frag:

At the top we see the rtsp stream with mp4frag; at the bottom, pipe2jpeg.

  • The positive side is that even in SD, mp4frag's quality is superior to pipe2jpeg's (unfortunately we can't see it here).
  • The negative side is that the streaming blocks quite often with mp4frag, while pipe2jpeg lets the video flow almost without interruption.

2- We see at the end of the video a delay for mp4frag which it cannot catch up on. It's even worse in FHD: I can be up to 20s behind. The thieves will be gone before I step outside :rofl:

An idea @kevinGodell ?



The problem we will always have when streaming fragmented mp4 is that at the very least, there will be a delay based on the duration of the previous media segment. This is because the mp4 container needs to have some specific info before ffmpeg can cut it and push it out. That will always be the nature when remuxing from rtsp to mp4.

The major delay you have is coming from the tens of thousands of lines of code in the hls.js library. It tends to fall far behind my custom-made video player that transfers segments via socket.io, at the cost of the video/audio not being as smooth.

A while back, I think @BartButenaers asked why we would have to make our own video player when there are countless libraries that already exist for that. Hls.js is like trying to kill a fly with a nuclear bomb instead of just using a swatter.

4 solutions come to mind, none of them perfect:

  1. If stream copying from your ip cam (-c:v copy), then program the camera to output smaller segments using its internal settings. Shorter duration segments = less delay.
  2. If you are unable to edit the cam settings to be less than 2 seconds (seems to be the standard), then you will have to re-encode the mp4 and change its structure to have a smaller segment duration, but this comes at the expense of a very high load on the server from ffmpeg.
  3. Tweak the hls.js config that is available in the settings of ui_mp4frag to make hls.js behave the way that works for you. There is a link to the hls.js docs in the help section of ui_mp4frag that show the many, many options to tweak.
  4. Build our own video player that sends the segments via socket.io to the browser with the tradeoff of not being as smooth as hls.js

As for idea 4, I am currently working on that, kinda. At the very least, I was able to set up a socket.io server and send the segment buffer data to the client (without using the builtin message system, so we don't overload it).


I have one cam that is so junky that its minimum segment duration is 10 seconds, 20 seconds when it is dark. Huge delay on that video. Another thing I have noticed on my cams is that when they are in night mode, they seem to automagically use a longer segment duration and I can't change it.

Try viewing your hls playlist in the browser and you can see the duration of the segments, etc.: yourserver/mp4frag/the_unique_name/hls.m3u8.txt. The .txt version is for debugging so that you can view it in the browser.
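For reference, a fragmented-mp4 HLS playlist generally looks something like this (the file names, sequence numbers, and durations below are made up for illustration):

```
#EXTM3U
#EXT-X-VERSION:7
#EXT-X-TARGETDURATION:2
#EXT-X-MEDIA-SEQUENCE:100
#EXT-X-MAP:URI="init-front_door.mp4"
#EXTINF:2.000000,
front_door100.m4s
#EXTINF:2.000000,
front_door101.m4s
#EXTINF:2.000000,
front_door102.m4s
```

The #EXTINF lines are the per-segment durations, so a quick glance tells you roughly the minimum delay the player has to live with.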


Dear Kevin,
this is mainly a British community, so you should call it "afternoon tea" :wink:

Definitely!! Couldn't agree more.
But of course we don't have to become paranoid.
As long as you keep one long chain of nodes, there is no duplication. Duplication starts when multiple wires are connected to a single output. The first wire transports the uncloned message:


Which of course needs to be mentioned on your readme page, so users can decide for themselves what to do with it ...

But of course then you won't have much control over when a buffer will be garbage collected. Although you might create some kind of termination node that removes the buffer somehow, which users are advised to put at the end of their node chain ...

Until now I would have said yes (I mean not for the first wire at an output, but for all other wires). But you make me doubt myself ...
Node-RED uses the cloneDeep function of Lodash. And when reading this Stackoverflow discussion, I'm not sure anymore...

So I did a little test:

[{"id":"d2411455.440f48","type":"inject","z":"2203d76d.b17558","name":"Inject buffer","props":[{"p":"payload"}],"repeat":"","crontab":"","once":false,"onceDelay":0.1,"topic":"","payload":"[1,2,3,4,5,6,7,8,9,10]","payloadType":"bin","x":1140,"y":160,"wires":[["8a4cf334.5e295","a7b006a0.b615b8"]]},{"id":"8a4cf334.5e295","type":"function","z":"2203d76d.b17558","name":"payload[5] = 88","func":"msg.payload[5] = 88;\nreturn msg;","outputs":1,"noerr":0,"initialize":"","finalize":"","x":1340,"y":160,"wires":[["31c21542.cce8ba"]]},{"id":"a7b006a0.b615b8","type":"function","z":"2203d76d.b17558","name":"payload[5] = 99","func":"msg.payload[6] = 99;\nreturn msg;","outputs":1,"noerr":0,"initialize":"","finalize":"","x":1340,"y":220,"wires":[["b21f19d5.3b95a8"]]},{"id":"31c21542.cce8ba","type":"debug","z":"2203d76d.b17558","name":"Uncloned msg","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"payload","targetType":"msg","statusVal":"","statusType":"auto","x":1550,"y":160,"wires":[]},{"id":"b21f19d5.3b95a8","type":"debug","z":"2203d76d.b17558","name":"Cloned msg","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"payload","targetType":"msg","statusVal":"","statusType":"auto","x":1539,"y":220,"wires":[]}]

One node changes the 5th position to 88, and the other changes the 6th position to 99. However, both nodes log the same message, with both fields changed. So the buffer is passed by reference.
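That result lines up with a Node.js quirk: Buffer.prototype.slice() returns a view over the same underlying memory rather than a copy (and, if I read the Lodash source correctly, its internal buffer cloning is built on slice(), which would explain the behaviour). A quick demonstration with plain Node, no Node-RED involved:

```javascript
// Node's Buffer.prototype.slice() returns a view over the SAME
// underlying memory, not a copy -- so writes through the "clone"
// are visible through the original buffer.
const original = Buffer.from([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]);

const view = original.slice(); // shares memory with `original`
view[5] = 88;                  // visible through both

const realCopy = Buffer.from(original); // allocates new memory
realCopy[6] = 99;                       // NOT visible through `original`

console.log(original[5]); // 88  (slice shared memory)
console.log(original[6]); // 7   (Buffer.from made a true copy)
```

So to get a genuinely independent copy, Buffer.from(buf) or buf.subarray() plus an explicit copy is needed; slice() alone is not enough.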

I'm VERY surprised, to say the least ...

Perhaps you can also put this kind of stuff on your node's config screen? Just to avoid having to pass it via an input message, which makes flows more complex. In fact you could support both ways: specify it via the input message or via the config screen.

So glad that you suggest it yourself. Indeed, you shouldn't integrate that kind of functionality into your node. This way users can decide for themselves what to do with the mp4 chunk: store it in a file, store it in the cloud, ...

LOL :+1:


Isn't there an option to disable the cloning with node.send(msg, false);?

It wouldn't apply for the pluggable message routing, but I'm not really sure if this set of nodes is something where it would make sense to limit implementation options because of it.

I thought that this indicates whether the messages on the first wire should be cloned or not. But not sure...

Anyway we want to use all kind of third-party nodes, which might not take this into account...

Ah yes it might very well be like that.

Edit: confirmed.
