[beta testing] nodes for live streaming mp4

Thanks for doing the research. I would love to see the source code where that option lives. I'll put that on my list of things to do.

That might actually apply only to function nodes. Not sure about custom nodes.

Function node source: https://github.com/node-red/node-red/blob/master/packages/node_modules/%40node-red/nodes/core/function/10-function.js

P.S. It's quite frustrating that you cannot use GitHub's source code search on Node-RED's code. My guess is that the search ignores code residing under node_modules, which would improve the results in most cases, but not in this one.

You kind of skipped the part about "countless libraries". There seems to be a nice overview of options here: 15 Javascript Libraries for Working with HTML5 Video – Bashooka

Maybe some of them would be suitable but not as bloated?

I am definitely not married to hls.js, although it does provide smooth video. If only I had time, I would try them all. If they can read from my HLS playlist (hls.m3u8), then one day we can set up a test with some of them side by side on the same basic html page outside of node-red and see how they perform.
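
For reference, such a bare-bones test page using hls.js could look something like this (an untested sketch; the playlist url is just an example, and a different library would only need the script tag and the hls.* lines swapped out):

<video id="player" controls muted></video>
<script src="https://cdn.jsdelivr.net/npm/hls.js@latest"></script>
<script>
  // minimal hls.js playback test; replace the url with your own hls.m3u8
  const video = document.getElementById('player');
  const url = 'http://192.168.1.85:1880/mp4frag/a/hls.m3u8'; // example address
  if (Hls.isSupported()) {
    const hls = new Hls();
    hls.loadSource(url);
    hls.attachMedia(video);
  } else if (video.canPlayType('application/vnd.apple.mpegurl')) {
    video.src = url; // Safari can play HLS natively
  }
</script>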

With luck, some fast-fingered/witted forum member will try them out. :slightly_smiling_face:

It won't be me though; I've not yet had time to even test your hugely popular demo flow. I'm just lurking here, as I'll most certainly be making use of this stuff later on. My main season for camera stuff is summer, when I'm "on site" with my cottage security camera setup.

I had to push some breaking changes to node-red-contrib-mp4frag: I renamed a setting internally so that the code makes more sense while I was stubbing out the socket.io integration.

node-red-contrib-ui-mp4frag was also updated so that it can still receive either a path to the hls playlist or an object that has hlsPlaylist as a property. This way the server can pass multiple video sources to the client, and the client side can choose which source to use, e.g. {hlsPlaylist: '/path/to/hls.m3u8', socketServer: '/path/to/socket.io', mp4File: '/path/to/video.mp4'}
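
In a function node feeding ui_mp4frag, that could look something like this (just a sketch; only hlsPlaylist is a confirmed property name, the other two are placeholders for what is coming):

// build a payload offering multiple sources; the client picks one
msg.payload = {
    hlsPlaylist: '/path/to/hls.m3u8',   // path to the hls playlist
    socketServer: '/path/to/socket.io', // placeholder: socket.io integration is still stubbed out
    mp4File: '/path/to/video.mp4'       // placeholder: plain mp4 fallback
};
return msg;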

After pushing the code, I ran npm update in .node-red to receive my own updates, and my node-red server survived! You will have to re-save any node-red-contrib-mp4frag nodes to fix the setting change.

Just a minor one, I hope...

I think sometimes/most of the time the video presentation can stop when the browser is minimized or when selecting another tab. If this happens, the play control does not work to recover the video; I have to refresh the browser.

At the same time, mp4frag keeps counting sequences, so that part seems to work, but ui_mp4frag shows frozen video (the last shown image).

Best regards, Walter

Am I looking in the right place? The duration of my segments is a few milliseconds. You say they must be 10s minimum (but I also have a cheap camera).

I believe Kevin meant that one of his cameras has a minimum segment duration of 10s, not that 10s is required on all cameras for things to work.

I meant that you can view the content of the hls playlists in the browser. For one of mine, the address is 192.168.1.85:1880/mp4frag/a/hls.m3u8.txt

OK, I got it:
In FHD:
#EXTM3U
#EXT-X-VERSION:7
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-MAP:URI="init-hls.mp4"
#EXTINF:5.423000,
hls0.m4s
#EXTINF:9.993000,
hls1.m4s
#EXTINF:10.031000,
hls2.m4s

In HD:
#EXTM3U
#EXT-X-VERSION:7
#EXT-X-TARGETDURATION:2
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-MAP:URI="init-hls.mp4"
#EXTINF:1.000000,
hls0.m4s
#EXTINF:1.000000,
hls1.m4s
#EXTINF:1.000000,
hls2.m4s
#EXTINF:2.049000,
hls3.m4s

Yeah, some browsers don't let processes run in the background or in another tab, which is sometimes a good thing. I have to wire in some events to listen for when the video player is not visible, such as when it is scrolled out of view or the tab is minimized, so we can save bandwidth, and then detect when it is visible again to turn back on. It can be done.
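
Roughly the idea (a sketch, not the actual node code; the Page Visibility API covers the tab case, and an IntersectionObserver could cover the scrolled-out-of-view case):

// pause the <video> while the page is hidden, resume when it comes back;
// fully stopping the segment downloads would also need
// hls.stopLoad()/hls.startLoad() on the hls.js instance
const video = document.querySelector('video'); // the ui_mp4frag player element
document.addEventListener('visibilitychange', () => {
  if (document.hidden) {
    video.pause();
  } else {
    video.play().catch(() => {}); // ignore autoplay rejections
  }
});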

Yes, that is right. If you want the video playback to be as close to real-time as possible, you will have to use the shortest segment duration that your camera will allow. Of course, there is a small hit in size from packaging more segments of shorter duration, but that is the price we pay. Also, hls.js has to make more http requests to the server when the segments are shorter.
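
To put rough numbers on it: the #EXTINF values in a playlist are the per-segment durations, and #EXT-X-TARGETDURATION is their upper bound. hls.js starts live playback a few segments behind the newest one (three by default, if I remember correctly), so 1-second segments put you roughly 3 seconds behind real time, while 10-second segments put you roughly 30 seconds behind.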

@kevinGodell

Hello,
I think I have faced the "famous" audio problem,,,

I have a USB camera that I have successfully managed to stream "as a netcam" using another great piece of software (mjpg-streamer). When grabbing the stream using ffmpeg, everything seems to work fine, but the video is not shown in ui_mp4frag. I do get the error message seen below from the mp4frag node. It counts the sequences very quickly, but I guess that's because no audio is provided in the stream (the camera does not have audio). I think it would be cool to be able to stream a usb camera like that and create an mp4 playlist. Below is my ffmpeg command in the exec node. Do you maybe see something I'm missing?

EDIT: Just a question regarding the playlists. A typical url looks like

/mp4frag/e1444359.d23bb/{resource}

Is that written/updated frequently to disk somewhere, or is anything else written to disk during processing? I mean, is there anything to worry about, since I use SD cards?

Best regards, Walter

ffmpeg -loglevel quiet -i http://192.168.0.237:8888/?action=stream -an -c:v copy -f mp4  -movflags +frag_keyframe+empty_moov+default_base_moof pipe:1

You will not be able to use -c:v copy unless the video input is already encoded as h264. Most likely, the webcam outputs some format that will have to be encoded to be compatible. If you have command line access to a system that has ffmpeg/ffprobe, use ffprobe on your video url to let it show you the details, and please post back.
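
Something like this should print the stream details (quoting the url so the shell does not interpret the ?):

ffprobe -hide_banner 'http://192.168.0.237:8888/?action=stream'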

No files are ever written with my video streaming stuff. It only uses memory.

Hi Kevin,

Did some more trial & error testing. It kind of works, but since, I believe, I have to use libx264, the cpu in the poor RPi3 gets very hot.

Running ffprobe on the stream (from mjpg-streamer that streams http from the usb cam) gives:

Input #0, mpjpeg, from 'http://192.168.0.236:8889/?action=stream':
Duration: N/A, bitrate: N/A
Stream #0:0: Video: mjpeg (Baseline), yuvj422p(pc, bt470bg/unknown/unknown), 640x480, 25 tbr, 25 tbn, 25 tbc

Running ffprobe directly on such a usb camera device gives this:

Input #0, video4linux2,v4l2, from '/dev/video0':
Duration: N/A, start: 1955.439460, bitrate: 199065 kb/s
Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 1920x1080, 199065 kb/s, 6 fps, 6 tbr, 1000k tbn, 1000k tbc

I guess those cameras are just too limited? It was a test just to see what is possible...

Using the following command "kind of works", but the result is not very useful (long delays in providing sequences make the video lag behind on a RPi3). If it runs for too long, the RPi3 quickly overheats... and stops responding.

ffmpeg -i http://192.168.0.236:8889/?action=stream -pix_fmt yuvj422p -c:v libx264 -f mp4 -movflags +frag_keyframe+empty_moov+default_base_moof pipe:1

Don't give up hope just yet. It is still possible that gpu accelerated encoding might be available on your system. Please go to the command line again and run:
ffmpeg -encoders | grep 264

On my pi, I get:

 V..... libx264              libx264 H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 (codec h264)
 V..... libx264rgb           libx264 H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 RGB (codec h264)
 V..... h264_omx             OpenMAX IL H.264 video encoder (codec h264)
 V..... h264_v4l2m2m         V4L2 mem2mem H.264 encoder wrapper (codec h264)
 V..... h264_vaapi           H.264/AVC (VAAPI) (codec h264)

Some things that would affect your cpu/gpu load are the frame rate and dimensions (w x h) of the input and output.

As for the segment duration: since you are encoding the video, you can use many settings to affect the segments, including their duration, which can give you smaller pieces and less delay.
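
For example (untested, numbers made up): since -movflags +frag_keyframe starts a new fragment at each keyframe, forcing a keyframe interval with -g sets the segment duration, while scaling down and lowering the frame rate eases the cpu load. At 10 fps, -g 10 should give roughly 1-second segments:

ffmpeg -i http://192.168.0.236:8889/?action=stream -vf scale=320:240,fps=10 -c:v libx264 -preset veryfast -g 10 -f mp4 -movflags +frag_keyframe+empty_moov+default_base_moof pipe:1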

Something that looks odd to me: is your mjpeg-streamer giving you a 25 fps video from a source that is only 6 fps? If so, that could be contributing to the problem.

Much to test.

Dear Kevin, thanks for your kind & encouraging words!

On my RPi3, I get the same:

V..... libx264 libx264 H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 (codec h264)
V..... libx264rgb libx264 H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 RGB (codec h264)
V..... h264_omx OpenMAX IL H.264 video encoder (codec h264)
V..... h264_v4l2m2m V4L2 mem2mem H.264 encoder wrapper (codec h264)
V..... h264_vaapi H.264/AVC (VAAPI) (codec h264)

As for the frame rate provided by mjpg-streamer, I think that camera was somehow configured earlier to provide the maximum fps; at least it looks good and fast when just viewing the stream in an html view, with fast & fine updates.

The camera where I got 6 fps was a new one of the same type; I just picked it out of the box, connected it, and ran ffprobe, since I wanted to verify what such a camera provides. Trying the actual camera used by mjpg-streamer now gives 30 fps:

Input #0, video4linux2,v4l2, from '/dev/video2':
Duration: N/A, start: 111750.169114, bitrate: 147456 kb/s
Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 640x480, 147456 kb/s, 30 fps, 30 tbr, 1000k tbn, 1000k tbc

mjpg-streamer is capable of delivering 25 fps from the same camera.

Maybe there is hope? The camera is said to provide both MJPEG and YUY2, see below.

Best regards, Walter

Time for an update. I managed to make it work using the GPU :champagne: :champagne:

  1. Maybe not necessary, but I decided to re-build ffmpeg from scratch to get the latest version and also to be sure it really was built with gpu support, following this guide:
    (RPi) Compile FFmpeg with the OpenMAX H.264 GPU acceleration · legotheboss/YouTube-files Wiki · GitHub

Except that I used these configure parameters (to link the atomic library which was needed in my case):

./configure --extra-ldflags="-latomic" --arch=armel --target-os=linux --enable-gpl --enable-omx --enable-omx-rpi --enable-nonfree

  2. Edit /boot/config.txt and set at least gpu_mem=128
    Note: maybe this alone would have solved my issue, so try this first before re-building the whole ffmpeg package

  3. Reboot

  4. Checking versions

pi@raspberrypi:~ $ ffmpeg -encoders | grep 264
ffmpeg version N-99578-gaf701196ec Copyright (c) 2000-2020 the FFmpeg developers
built with gcc 8 (Raspbian 8.3.0-6+rpi1)
configuration: --extra-ldflags=-latomic --arch=armel --target-os=linux --enable-gpl --enable-omx --enable-omx-rpi --enable-nonfree
libavutil 56. 60.100 / 56. 60.100
libavcodec 58.111.101 / 58.111.101
libavformat 58. 62.100 / 58. 62.100
libavdevice 58. 11.102 / 58. 11.102
libavfilter 7. 87.100 / 7. 87.100
libswscale 5. 8.100 / 5. 8.100
libswresample 3. 8.100 / 3. 8.100
libpostproc 55. 8.100 / 55. 8.100
V..... h264_omx OpenMAX IL H.264 video encoder (codec h264)
V..... h264_v4l2m2m V4L2 mem2mem H.264 encoder wrapper (codec h264)
V..... h264_vaapi H.264/AVC (VAAPI) (codec h264)

  5. Running test

The command line:

ffmpeg -i http://192.168.0.236:8889/?action=stream -pix_fmt yuvj422p -c:v h264_omx -f mp4 -movflags +frag_keyframe+empty_moov+default_base_moof pipe:1

And it works; it's now using the GPU!!! The total CPU load has gone down and is now just around 20% on average, and the CPU/GPU temperature seems to stabilize around 58 degrees Celsius.

The sequence durations are now a steady 1 second.

[h264_omx @ 0x1b96580] Using OMX.broadcom.video_encode

Output #0, mp4, to 'pipe:1':
Metadata:
encoder : Lavf58.62.100
Stream #0:0: Video: h264 (h264_omx) (avc1 / 0x31637661), yuv420p, 640x480, q=2-31, 200 kb/s, 25 fps, 12800 tbn, 25 tbc
Metadata:
encoder : Lavc58.111.101 h264_omx

frame= 59 fps= 14 q=-0.0 size= 48kB time=00:00:02.28 bitrate= 173.3kbits/s speed=0.543x

etc, etc

The picture quality is however not as good now. I don't know why, but this is how it looks; the picture appears distorted.

[screenshot: the distorted video]
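
One guess: the encoder output above shows 200 kb/s, which I believe is just ffmpeg's generic default bitrate, so maybe giving h264_omx an explicit bitrate would improve the quality, e.g. (untested):

ffmpeg -i http://192.168.0.236:8889/?action=stream -pix_fmt yuvj422p -c:v h264_omx -b:v 2M -f mp4 -movflags +frag_keyframe+empty_moov+default_base_moof pipe:1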

Does this mean that there is no longer a delay when watching the video compared to real life? (Or a little less than in the last update, where we arrived at 20s of lag?)

Great job once again!
