Hi, I just have to wait for some cars or people to pass by; it's still quite early in the morning
Still waiting for someone to pass by. But the quality is not so good, see the video below. Since the segment durations are just 1 second, I believe the lag is very low. When I tried the CPU solution (libx264) I had good quality but very long lag. If only I could improve the quality of the mp4....
EDIT: And for some reason, the video stops in the browser after a short while and I have to refresh; then it works again (the segments flowing to ui_mp4frag never stop)
A car finally passed by; I would say the lag is around a few seconds. Too early to say whether it is useful yet, given the quality, stability, lag, the browser stopping, etc. Maybe the RPi3 is not powerful enough. I think I need to wait for @kevinGodell's judgement
EDIT: Trying again to verify with libx264 (using the CPU instead of the GPU) does give very good picture quality, and the browser doesn't stop, but segment durations are around 18 seconds, which is what causes the lag
We are not far from the goal: sacrifice a little quality, but with the lowest possible delay
That reminds me. I had to change that when I was trying to get h264 gpu decoding to work on the Pi 4. I forgot what the error was, but it definitely was not allocated enough gpu memory by default. The maximum that worked for my Pi 4 with 4 GB RAM (while still allowing the Pi to boot) was gpu_mem=512, which let my 14 rtsp ip cams be decoded on the gpu so that I could encode jpegs.
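For reference, that setting lives in the Raspberry Pi's boot config (a sketch; pick a value appropriate for your model and RAM):

```ini
# /boot/config.txt on the Raspberry Pi
# gpu_mem sets the memory split reserved for the GPU, in MB.
# 512 was the largest value that still let the 4 GB Pi 4 above boot.
gpu_mem=512
```

A reboot is needed for the change to take effect.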
Thanks for doing all of the research. I have a vague memory of using the gpu to encode video and having poor results. There seem to be limitations that sacrifice quality for speed. As for the duration of segments when encoding with libx264 on the cpu, we can tweak that. There are many options that can be changed, but yours is using the defaults. To see what you can tweak, we have to run a few ffmpeg commands to show what's available. Specifically, since you are decoding jpegs and encoding h264 mp4, run these commands separately:
ffmpeg -demuxers | grep jpeg
ffmpeg -decoders | grep jpeg
ffmpeg -encoders | grep 264
ffmpeg -muxers | grep mp4
To see details about a specific demuxer, decoder, muxer, or encoder that you found in the list from running the previous commands, run a command like this:
ffmpeg -h demuxer=mjpeg
ffmpeg -h decoder=mjpeg
ffmpeg -h encoder=libx264
ffmpeg -h muxer=mp4
And to clarify a bit: the demuxer/muxer is the format selected with the -f param, and the decoder/encoder is selected with the -c param.
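As one concrete example of that kind of tweaking (a sketch only: the input/output pipes and values below are placeholders, not taken from the actual flow), libx264's GOP length is one of the main knobs, since a new mp4 fragment can only start at a keyframe:

```shell
# Hypothetical sketch: decode piped jpegs, encode h264 with a short GOP,
# and mux fragmented mp4. "-g 10" forces a keyframe every 10 frames,
# which caps how long each fragment (and therefore the delay) can be.
ffmpeg -f mjpeg -i pipe:0 \
       -c:v libx264 -preset veryfast -tune zerolatency -g 10 \
       -f mp4 -movflags +frag_keyframe+empty_moov pipe:1
```

The preset/tune values trade some quality and compression for lower latency.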
I made a little progress last night using socket.io to transfer video from server to client, with a very slim video player that deliberately has no extra features besides playing live video. I had a difficult time integrating the socket.io server and client with node-red. I ultimately had to use path options to avoid conflicting with the socket.io instance used by node-red-dashboard. And there was a strange issue where my socket.io client would not connect on the first page load without forceNew: true, yet it could connect if I navigated to a different page/tab on the node-red server and back to the video page. At least the results were promising, as the delay was respectable, based on the duration of the segments.
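For anyone trying to reproduce this, the two options mentioned above look roughly like the following (the path value is a placeholder, not the one actually used; `path` and `forceNew` are standard socket.io options):

```javascript
// Placeholder path: the real one just needs to differ from node-red-dashboard's.
const ioPath = '/video/socket.io';

// Server side (inside node-red): pass the custom path.
//   const io = require('socket.io')(httpServer, { path: ioPath });

// Client side: the same path, plus forceNew to work around the
// first-page-load connection issue described above.
const clientOptions = { path: ioPath, forceNew: true };
//   const socket = io(clientOptions);

console.log(clientOptions);
```

Both sides must use the identical path value, or the handshake never completes.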
And as a reminder, since this thread has become long and the details have been lost: the minimum real-time delay of video running in the browser will always be based on the segment duration. So, a segment duration of 10 seconds means the real-time playback will be delayed by at least that much, plus the time to transfer the video from server to client and start playing it.
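As a toy illustration of that arithmetic (the overhead numbers here are made up):

```javascript
// Minimum live delay = one segment duration + transfer time + startup time.
function minPlaybackDelay(segmentSeconds, transferSeconds, startupSeconds) {
  return segmentSeconds + transferSeconds + startupSeconds;
}

// A 10 s segment with 1 s of combined transfer and startup overhead:
console.log(minPlaybackDelay(10, 0.5, 0.5)); // 11
```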
Dear Kevin, to start, this is what I have, see below
Also, I'd just like to mention that streaming in the browser is very "sensitive" (maybe the wrong word) and tends to stop if focus is lost. I mean, if the browser gets minimized, or if you click anywhere else on the desktop so that the browser is not in the foreground. A browser refresh is needed to get the stream showing again
ffmpeg -demuxers | grep jpeg
D jpeg_pipe piped jpeg sequence
D jpegls_pipe piped jpegls sequence
D mjpeg raw MJPEG video
D mjpeg_2000 raw MJPEG 2000 video
D mpjpeg MIME multipart JPEG
D smjpeg Loki SDL MJPEG
ffmpeg -decoders | grep jpeg
VFS..D jpeg2000 JPEG 2000
V....D jpegls JPEG-LS
V....D mjpeg MJPEG (Motion JPEG)
V....D mjpegb Apple MJPEG-B
V..... smvjpeg SMV JPEG
A....D adpcm_ima_smjpeg ADPCM IMA Loki SDL MJPEG
ffmpeg -encoders | grep 264
V..... libx264 libx264 H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 (codec h264)
V..... libx264rgb libx264 H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 RGB (codec h264)
V..... h264_omx OpenMAX IL H.264 video encoder (codec h264)
V..... h264_v4l2m2m V4L2 mem2mem H.264 encoder wrapper (codec h264)
V..... h264_vaapi H.264/AVC (VAAPI) (codec h264)
ffmpeg -muxers | grep mp4
E mp4 MP4 (MPEG-4 Part 14)
I understood what you meant, but thanks for clarifying. Can I ask which browser is doing that? It is probably considered to be a feature. I can eventually detect that, but I am currently focused on the socket.io stuff right now. We will definitely handle those playback issues by listening to various browser states, etc.
No problem at all, I understand, no rush
I tried it with Chrome and Safari on Mac, and with Chrome and Edge on Windows, and I would say the behavior is the same in all of them on both platforms
If I have the browser open and prepared, I see the text "Video playback ready". When I then start the stream, it sometimes works, but sometimes it changes to "Video playback error" and I have to refresh the browser to make it work again; it then starts streaming correctly
If you see that, there should also be a console.error message in the browser stating the issue, I hope.
According to my readings on one of my cameras, I see segments of 1 to 2 seconds in HD and 5 to 10 s in FHD.
I cannot see where this segment-duration setting is located in the camera's parameters. Here is an example of the camera config:
@kevinGodell You said your cameras can't go below 10 s. Do you see any possibility of adjusting this on my cam?
I-frame interval. Sometimes it is called GOP. It seems to be set to the lowest value of 2. My lowest quality cam is also set to 2, but it never pushes out segments less than 10 s, while my other good cams honor the setting. It might be a hardware issue. Also, my lib for parsing fragmented mp4 does not support h265, and I am not sure if that can be played in the browser without re-encoding.
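To make the relationship above concrete, here is a rough rule of thumb (it ignores any extra keyframes a camera may insert on scene changes):

```javascript
// A fragment can only be cut at a keyframe, so the shortest possible
// segment is roughly the GOP length divided by the framerate.
function minSegmentSeconds(gopFrames, fps) {
  return gopFrames / fps;
}

console.log(minSegmentSeconds(50, 25)); // 2 -- a GOP of 50 frames at 25 fps
```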
I managed to lower the iframe interval to 1 by switching the "Smart Encode" setting from H265 to Close (off). I now have a 5 s offset from the "Exec + pipe2jpeg" node. It's already better than 20 s, but still not great for PTZ controls.
A 5 s gap between the two nodes: pipe2jpeg and mp4frag
It's a Chinese camera and I think the H.265x is fake, because the ICSEE app I am using reports H264, and your library is decoding the stream for me since I can see it.
5s delay is actually really good when using hls.js to play the video. That is near the best it could be. On your admin screen beneath the mp4frag node, what does it show as the segment duration?
Maybe this is just stating the obvious, but I wanted to share anyway
This works as well: sending the segments over the network via mqtt. I also tried receiving on another computer and it works fine! Using mqtt might be an alternative if you are looking to distribute camera views, i.e. if you have multiple clients that should share the same cameras. Furthermore, if you use a cloud service like CloudMQTT, you could have secure access to your cameras from outside your home network (if you do not want to set up a vpn server)
EDIT: It works with CloudMQTT using SSL!!! Sending segments this way, I can't see that mqtt adds any delay. With a simple 640x480 camera on a RPi3, I have a total delay of approx 3 seconds when viewing on the Jetson Nano or on the same RPi. The overhead of the transport over mqtt seems to be negligible
This is all that is needed on another machine
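Roughly, the wiring looks like this (node names as used earlier in this thread; the topic name is just a placeholder):

```
Sender machine:    [exec: ffmpeg] --> mp4 segments --> [mqtt out, topic "cams/front"]
Receiver machine:  [mqtt in, topic "cams/front"] --> [mp4frag] --> [ui_mp4frag]
```

Since mqtt runs over tcp, the segments arrive in order, which is what the player needs.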
Here is what's under the mp4frag node when streaming the FHD cam:
the iframe interval of 1 is respected
5s delay compared to VLC
That is pretty neat. I have not yet played with mqtt, but a little research told me it uses tcp, which is important so that the data arrives in the correct order. I will have to try that out some day if I set up a 2nd node-red server.
It seems that your VLC app is connected directly to the cam's rtsp url. Can you try connecting VLC to your hls.m3u8 playlist and see how that compares to the browser? That would be a more accurate comparison of hls.js in the browser vs the VLC app, since they would use the same source.
I am testing my socket_io implementation and the delay is about 2.5 seconds behind real-time. This is due to the minimum segment duration that my cam outputs is 2 seconds. I am trying to clean the code up enough to make it available soon for testing.
Kevin, perhaps you have thought about it already. The main problem with pushing images to the dashboard that I had in the past: it kept pushing images to my smartphone even when the tabsheet with the camera player wasn't visible. Not good for my walket
wallet I presume?
Yes. My fingertips are too big for small smartphone keyboard keys...
When pushing images, I think the dashboard client-side widget should let the server side know when it becomes (in)visible, so that the server can start/stop throttling...
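A minimal sketch of that idea, using the browser's Page Visibility API (the "visibilitychange" event and document.hidden are real browser features; the `emit` callback and the fake document below are stand-ins so the sketch can run outside a browser):

```javascript
// Report the page's visibility to the server whenever it changes.
// `emit` stands in for whatever sends the state upstream (e.g. socket.io).
function watchVisibility(doc, emit) {
  const report = () => emit('visibility', { hidden: doc.hidden });
  doc.addEventListener('visibilitychange', report);
  report(); // send the initial state right away
}

// Tiny stand-in for `document`, just for demonstration:
const listeners = [];
const fakeDoc = { hidden: false, addEventListener: (type, fn) => listeners.push(fn) };
const sent = [];
watchVisibility(fakeDoc, (event, data) => sent.push(data.hidden));

fakeDoc.hidden = true;            // simulate the tab being hidden
listeners.forEach((fn) => fn());  // fire the visibilitychange handlers
console.log(sent);                // [ false, true ]
```

The server can then pause pushing segments while `hidden` is true and resume when it flips back.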