[beta testing] nodes for live streaming mp4

Ok, cool. I just made sure that it is still here.

I have some issues & questions I just wanted to bring up. I might have misunderstood the intended functionality or failed to install some prerequisite. All are related to the video.mp4 stream, using your latest nodes:

  • when I connect a consuming process like ffmpeg or GStreamer to stream video.mp4 over the network (from one RPi to another), the "production" of sequences appears to halt. As in the picture below, it hangs at sequence 861 and does not stream. To make it continue, I also opened the stream in VLC (from my Mac over the network); sequences then started being generated again, but only to VLC

  • if I connect the consumer again while VLC is connected, the sequences and the presentation in VLC stop. If I restart playing the playlist in VLC, the sequences start again and the presentation in VLC continues

  • on one of the RPis I get the following error messages. I have no idea why they appear; I have the latest ffmpeg built & installed, but I do not have GStreamer installed on this RPi. I think the errors come from ffmpeg, but I am lost

[mov,mp4,m4a,3gp,3g2,mj2 @ 0x69100e00] stream 0, offset 0x8b4: partial file
[h264 @ 0x69108380] Invalid NAL unit size (-1337227152 > 53658).
[h264 @ 0x69108380] Error splitting the input into NAL units.
[h264 @ 0x6910c290] Invalid NAL unit size (906226046 > 18275).
[h264 @ 0x6910c290] Error splitting the input into NAL units.
[h264 @ 0x69157c90] Invalid NAL unit size (-1711926083 > 4610).
[h264 @ 0x69157c90] Error splitting the input into NAL units.
[h264 @ 0x691827d0] Invalid NAL unit size (-1287465898 > 6519).
[h264 @ 0x691827d0] Error splitting the input into NAL units.

[{"id":"828e8270.1fe98","type":"inject","z":"1248a99.76fd656","name":"Start stream","topic":"","payload":"true","payloadType":"bool","repeat":"","crontab":"","once":false,"onceDelay":"1","x":180,"y":120,"wires":[["a787d4bb.765908"]]},{"id":"73a55c20.c2d044","type":"inject","z":"1248a99.76fd656","name":"Stop stream","topic":"","payload":"false","payloadType":"bool","repeat":"","crontab":"","once":false,"onceDelay":0.1,"x":180,"y":166,"wires":[["efacfa3d.1f0cf8"]]},{"id":"efacfa3d.1f0cf8","type":"switch","z":"1248a99.76fd656","name":"","property":"payload","propertyType":"msg","rules":[{"t":"true"},{"t":"false"}],"checkall":"true","repair":false,"outputs":2,"x":550,"y":120,"wires":[["87c0c02b.c4153"],["7dc999c4.031a28"]]},{"id":"7dc999c4.031a28","type":"function","z":"1248a99.76fd656","name":"stop","func":"msg = {\n    kill:'SIGKILL',\n    payload : 'SIGKILL'  \n}\n\nreturn msg;","outputs":1,"noerr":0,"x":550,"y":170,"wires":[["87c0c02b.c4153"]]},{"id":"87c0c02b.c4153","type":"exec","z":"1248a99.76fd656","command":"ffmpeg -i http://f24hls-i.akamaihd.net/hls/live/221147/F24_EN_HI_HLS/master_2000.m3u8 -c:v copy -f mp4  -movflags +frag_keyframe+empty_moov+default_base_moof pipe:1","addpay":false,"append":"","useSpawn":"true","timer":"","oldrc":false,"name":"France 24","x":739,"y":140,"wires":[["e3eb5609.0cf228"],[],["e3eb5609.0cf228"]]},{"id":"a787d4bb.765908","type":"delay","z":"1248a99.76fd656","name":"","pauseType":"rate","timeout":"5","timeoutUnits":"seconds","rate":"1","nbRateUnits":"1","rateUnits":"second","randomFirst":"1","randomLast":"5","randomUnits":"seconds","drop":true,"x":360,"y":120,"wires":[["efacfa3d.1f0cf8"]]},{"id":"e3eb5609.0cf228","type":"mp4frag","z":"1248a99.76fd656","name":"","hlsPlaylistSize":4,"hlsPlaylistExtra":0,"basePath":"france24","x":970,"y":140,"wires":[["19735390.8268ec"]]},{"id":"19735390.8268ec","type":"ui_mp4frag","z":"1248a99.76fd656","name":"","group":"1faea6b.a534559","order":0,"width":5,"height":4,"readyPoster":"https://raw.githubusercontent.com/kevinGodell/node-red-contrib-ui-mp4frag/master/video_playback_ready.png","errorPoster":"https://raw.githubusercontent.com/kevinGodell/node-red-contrib-ui-mp4frag/master/video_playback_error.png","hlsJsConfig":"{\"liveDurationInfinity\":true,\"liveBackBufferLength\":0,\"maxBufferLength\":5,\"manifestLoadingTimeOut\":1000,\"manifestLoadingMaxRetry\":10,\"manifestLoadingRetryDelay\":500}","restart":"true","autoplay":"true","players":["socket.io","hls.js","hls","mp4"],"x":1200,"y":140,"wires":[[]]},{"id":"1faea6b.a534559","type":"ui_group","z":"","name":"MP4","tab":"562cdfc1.4448","order":1,"disp":false,"width":"6","collapse":false},{"id":"562cdfc1.4448","type":"ui_tab","z":"","name":"MP4","icon":"dashboard","disabled":false,"hidden":false}]

@krambriw

So far, I am unable to reproduce any issues reading the video.mp4. For testing, I found something pretty neat: I was able to use Firefox on my Amazon Fire Stick and point it to the video.mp4, and it has been playing on the TV for at least 4 hours now without issue. During that same time, I have had VLC on my Mac consuming another stream at video.mp4, also without trouble. One of those is on segment 3994 and the other on segment 6104. I thought one of them surely should have crashed by now, but it is running great.

One thing I noticed is that the ffmpeg command needs a little tweak to smooth out the video. Add -re in front of the -i so that it reads the input at realtime speed instead of going too fast.
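As a sketch, the exec node's command with that tweak applied might look like the following (built here as a Python argument list for clarity; the URL and the remaining flags are taken from the France24 flow above):

```python
# Sketch of the tweaked ffmpeg invocation, with -re placed before -i
# so ffmpeg reads the input at its native frame rate instead of as
# fast as possible. The URL is the France24 HLS source from the flow.
input_url = "http://f24hls-i.akamaihd.net/hls/live/221147/F24_EN_HI_HLS/master_2000.m3u8"

ffmpeg_cmd = [
    "ffmpeg",
    "-re",                # read input at realtime speed (must precede -i)
    "-i", input_url,
    "-c:v", "copy",       # no re-encoding, just repackage the h264 stream
    "-f", "mp4",
    "-movflags", "+frag_keyframe+empty_moov+default_base_moof",
    "pipe:1",             # write fragmented mp4 to stdout for mp4frag
]
print(" ".join(ffmpeg_cmd))
```

Note that -re is an input option, so its position before -i matters; placed after -i it would apply to the output instead.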

Latest updates:

Picture-in-picture mode now works well in Chrome and Edge on Windows, and in Chrome and Safari on Mac. For whatever reason, Firefox does not expose any value that I can access to check whether a video is in PiP mode, so Firefox is not supported for handling the unloading of video while in PiP mode.

In the settings, you can choose whether the video should automatically play after loading. The automatic unloading when a video player is not visible can be turned off if you want to keep the stream running. Most likely, you will keep both of these options set to true.

Great, I'll test it now

@krambriw I updated node-red-contrib-mp4frag to change the way it handles the long-running http connections for the video.mp4. I think there was a possibility of the internal pipe getting clogged if a connected http client did not consume the data in a timely manner. Please let me know if it happens again.
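The node itself is Node.js, but the idea behind the fix can be sketched conceptually: keep a bounded per-client buffer and evict the oldest segments when a client falls behind, instead of letting the internal pipe back up. A minimal Python model of that backpressure strategy (names and the buffer size are illustrative, not the actual implementation):

```python
from collections import deque

# Conceptual sketch of per-client backpressure: a bounded queue that
# drops the oldest segment when the consumer falls behind, so one slow
# http client cannot clog the producer for everyone else.
class ClientBuffer:
    def __init__(self, max_segments=4):
        self.queue = deque(maxlen=max_segments)  # deque evicts oldest on overflow
        self.dropped = 0

    def push(self, segment):
        if len(self.queue) == self.queue.maxlen:
            self.dropped += 1  # the oldest buffered segment is about to be evicted
        self.queue.append(segment)

    def pull(self):
        return self.queue.popleft() if self.queue else None

buf = ClientBuffer(max_segments=2)
for seq in range(5):  # producer emits 5 segments; consumer reads none
    buf.push(f"segment-{seq}")
print(buf.dropped, list(buf.queue))  # -> 3 ['segment-3', 'segment-4']
```

A client that wakes up late simply resumes from the most recent segments rather than stalling the stream.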

@kevinGodell Thank you, I see a nice improvement: the "creation" of sequences is now running consistently. Earlier I saw that it could stop and hang when GStreamer connected as a consumer. I can also stop/start the stream without any problems now. Earlier I sometimes had to make a small change in the mp4frag node and deploy to make it work again; I guess this reset the node

As I mentioned, I use OpenCV/GStreamer as the stream consumer, creating frames that are published to an MQTT broker (see the Python script below)
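Since the actual vtomqtt.py is attached rather than inlined, here is a rough sketch of the kind of GStreamer pipeline string such a consumer might hand to `cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)`. The endpoint URL and the element choices are my assumptions, not the real script:

```python
# Hypothetical GStreamer pipeline for consuming the mp4frag video.mp4
# endpoint with OpenCV. The URL is a placeholder built from the flow's
# basePath ("france24") and Node-RED's default port; adjust to taste.
stream_url = "http://localhost:1880/mp4frag/france24/video.mp4"

pipeline = (
    f"souphttpsrc location={stream_url} is-live=true "
    "! qtdemux "          # unpack the fragmented mp4 container
    "! h264parse "
    "! avdec_h264 "       # software decode (an RPi might use v4l2h264dec)
    "! videoconvert "     # convert to a raw format OpenCV can read
    "! appsink"           # hand decoded frames to cv2.VideoCapture
)
print(pipeline)
```

The capture loop would then read frames with `cap.read()` and publish them (e.g. JPEG-encoded) to the MQTT topic.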

The only remaining problem I face now I would like to describe as follows:

  1. I start the streaming, sequences are correctly being generated
  2. I connect OpenCV/GStreamer as consumer using the Python script
  3. After a short waiting time, GStreamer starts consuming the stream, showing the info below and then printing the actual fps while reading the stream
pi@raspberrypi:~ $ python3 vtomqtt.py
TrtThread: start running...waiting for images to analyze
Connected to broker: 0
Subscribed to topics: None 1 (0,)
[ WARN:0] global /home/pi/opencv/modules/videoio/src/cap_gstreamer.cpp (898) open OpenCV | GStreamer warning: unable to query duration of stream
[ WARN:0] global /home/pi/opencv/modules/videoio/src/cap_gstreamer.cpp (935) open OpenCV | GStreamer warning: Cannot query video position: status=1, value=8183, duration=-1
0.24713212874285384
0.7901653528141859
1.3141440563213804
1.8221518333022688
2.2937238728766243
2.6675074147049576
3.1046987059205415
3.528966821292124
3.920450647447993
4.281099055348626
4.643837616118495
4.9845879917273095
5.310136522487403
5.6189369901121085
5.828258424394456
6.110369067350295
6.3703615323792935
6.618304478935738
6.857794520723597
7.074587103923507
7.29448328073534
7.497591329973566
7.6960518768770045

  4. The script continues successfully and publishes frames to the MQTT broker topic. The frames are successfully received by other software using them for analysis

  5. I then stop the script and restart it. Now the script takes a very long time before it may start consuming the stream again. If I repeat this a few times, the script eventually never succeeds in consuming at all. If I first restart the stream, so that mp4frag gets reset and starts creating sequences again, the script always starts consuming just like the first time

I guess this is some behavior of GStreamer; maybe it needs a "fresh" pipe or something, so I can understand the difficulty of finding the issue. If I try the same with VLC, it works every time: I can stop/start VLC as many times as I like without having to restart the stream

If you find or suspect a reason for this "increased delay", or for why a restart of the stream "helps", that would be great. Below is my script for reference

Best regards, Walter

vtomqtt.txt (2.9 KB)

EDIT: If I instead read directly from the original broadcast stream (France24), http://f24hls-i.akamaihd.net/hls/live/221147/F24_EN_HI_HLS/master_2000.m3u8, I can stop and restart the script without any problem, and the capturing also starts much more quickly. So there must be some difference between the original stream and the stream provided by mp4frag that the script/GStreamer doesn't like or has problems handling

EDIT AGAIN: When streaming, stability feels rock-solid!!

Firefox devs were quick to dismiss the bug report. I will not be able to support the correct unloading of videos that are playing as PiP. The current fix is to use Chrome, Edge, or Safari.

For the ui-mp4frag, you can now change the threshold setting. This is the percentage of visibility of the video player required before triggering loading/unloading. For example, if set to 0.3 and the video player is scrolled partially out of view, it will have to be at least 30% visible before it will load. My wife did not like the hard-coded setting of 50%.
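Conceptually, the threshold works like this (the browser side presumably observes the player's visibility ratio; this little Python model of the decision is mine, not the node's code):

```python
def should_be_loaded(visible_ratio, threshold=0.3):
    """Load the player only once at least `threshold` of it is visible.

    visible_ratio: fraction of the player inside the viewport (0.0-1.0).
    threshold: the new configurable setting, replacing the old 0.5 default.
    """
    return visible_ratio >= threshold

print(should_be_loaded(0.2))  # 20% visible with threshold 0.3 -> stays unloaded
print(should_be_loaded(0.5))  # 50% visible -> loads
```

Unloading is the mirror image: once visibility drops below the threshold, the player can release the stream.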

For as long as I have been working on ways to make live mp4 video streamable, there has always been the issue of realtime delay. I just stumbled across some ffmpeg parameters that let me shorten the duration of mp4 media segments, which reduces the latency of the live video relayed from my IP cams.

Normally, we would have to re-encode the video to change media segment durations, which would cause a very high cpu load. Apparently, somebody patched ffmpeg 2 years ago, and I only found out last night. Of course, it is not documented on the website.

Before using these params, I must warn you that they will put a higher load on the node.js process, along with giving the socket.io or http connection more work to do, since the mp4 will be broken into smaller files, which means there will be more of them to process.

We are changing the movflags and adding min_frag_duration:
-movflags +frag_every_frame+empty_moov+default_base_moof -min_frag_duration 500000

-movflags +frag_every_frame causes ffmpeg to package each frame into its own media segment, which is helpful but, on its own, not practical for our streaming. -min_frag_duration 500000 tells ffmpeg to limit the fragment duration to no less than 0.5 seconds (500000 microseconds), which keeps the segments at a manageable size. So now, I have just shaved about 2 seconds of realtime delay off my IP cams at my house.
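Putting it together, the low-latency variant of the earlier exec command might look like this (a sketch, again as a Python argument list; the URL is the France24 source used above and the 500000 value is in microseconds):

```python
# Sketch of the lower-latency ffmpeg command: fragment on every frame,
# but never emit a fragment shorter than 0.5 s.
input_url = "http://f24hls-i.akamaihd.net/hls/live/221147/F24_EN_HI_HLS/master_2000.m3u8"

low_latency_cmd = [
    "ffmpeg",
    "-re",
    "-i", input_url,
    "-c:v", "copy",
    "-f", "mp4",
    # frag_every_frame replaces frag_keyframe from the original command
    "-movflags", "+frag_every_frame+empty_moov+default_base_moof",
    "-min_frag_duration", "500000",   # microseconds: floor of 0.5 s per fragment
    "pipe:1",
]

frag_floor_us = int(low_latency_cmd[low_latency_cmd.index("-min_frag_duration") + 1])
print(frag_floor_us / 1_000_000, "s minimum fragment duration")
```

With roughly 0.5 s segments instead of 1-2 s ones, each segment reaches the player sooner, which is where the latency saving comes from.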

In addition, I think it runs much more smoothly now too. I'm streaming a pre-recorded video and it runs really well

Also interesting to see is that the duration of sequences is now stable at exactly 1.000 second. Without the movflags changes, the duration alternated between 1 and 2 seconds when I streamed the same pre-recorded video. I do not see any change (increase) in cpu load; it is stable at 7%

Oops. That 1.0 was a hard limit I set in my other lib. I will have to push that update when I get home later so it can show the correct duration.

:heart_eyes:

Just pushed changes to the mp4frag lib, which is what node-red-contrib-mp4frag wraps. It should now show the correct segment duration if it is < 1 second. In the past, based on real-world examples, I never ran across segments of less than 1 second duration, so I set the hard limit, and it has been like that for years with no complaints.

There will be some tweaks I have to make, since with these artificially smaller segment durations, a segment may not contain an iframe. So the video in the browser seems to take an extra couple of seconds to start playing (I suspect the media source in the browser needs enough video, possibly including the iframe, before it knows what to do with it), but it then seems to keep playback close to realtime.

It is working now, with segment durations of 0.4-0.6 seconds. Smooth & nice viewing as before, both in Safari & Chrome on my Mac (with macOS Big Sur)

A gut feeling is that the cpu load has now increased a bit. When using hls, it is up some 4-5%: instead of 7%, it is closer to 11-12%. I feel socket.io is now also causing a cpu load increase of 1-2%. Compared with the previous version, the new version seems to add a few percentage points of cpu load. As I said, it is just a gut feeling; it might be just me checking briefly with two identical RPi3B+ units running the same setup, except that one has your previous version installed. Without measuring more exactly, it is hard to tell

In the video below, I stream the same pre-recorded video from my network using hls and Chrome. The view on the left side is using your latest nodes

I also noticed a higher cpu load when using smaller segments. Previously, my server side was processing about 1 segment every 2 seconds. Now, it is processing 2 segments every second, a 4x rate increase. For my 14 video streams, this increased the server load a little, although by an acceptable amount considering the benefit of seeing the live video from my IP cams much sooner. I still need to move beyond socket.io and implement regular websockets. At that point, we might have efficient code.

Yes, understood. It is working well; I just wanted to mention it

I hope I didn't sound defensive; I was simply trying to agree with you. I definitely need honest feedback in order to refine and improve the code. Thanks for all the testing. :crazy_face:

No problem at all! It's so great what you have made, I'm just happy to use it and to help with whatever I'm able to :+1: :+1: