I think I found something related to why the presentation in browsers suddenly stops
When looking at the network traffic you can see how the segments are received. Then suddenly a segment with zero content arrives (status cancelled). This makes the presentation in the browser stop. A random number of hls.m3u8 sequences then follows until a new good sequence is received and the presentation recovers. Then a new zero-content sequence arrives and it happens again...
Just a question: I tried to debug the messages coming out of the mp4frag node to get the correct link to test with VLC, but it seems nothing is sent? Actually it works as before even if I remove the wire between the mp4frag and ui_mp4frag nodes. How is this possible? And how do I get the link to use in VLC?
Good point. I think this is valid also for normal browsers running on other platforms; it makes no sense to send live data unless someone is watching. When I run my browsers with a number of live camera views on my Mac, I notice the fan speeds up due to the increased CPU load. (Maybe mp4 would be better from a CPU-load perspective than the current HTTP streams I use, but my USB cameras do not provide h.264. The conversion I tried with ffmpeg works, but does not really give me the same picture quality. So far, at least.)
To overcome this, I display my cameras in tabs (groups): when I deploy the tab, the flow is started. When I close the tab, the stream is killed. This is managed by the ui_control node, which detects when a group changes state.
I don't have any streaming running in the background.
That is indeed a nice solution. But if we are building a UI node, it would be nice IMHO to have this functionality in the node, because that keeps our flows simple. I mean a checkbox "don't push data if widget not visible". Then you can keep sending messages (containing mp4 fragments) to the UI node, but they won't be pushed to the dashboard when the widget is not visible at the moment.
Although I'm not sure (I'm not at my computer) whether it is currently possible - in the server-side beforeEmit function - to stop a message from being sent to the client. See the feedback from @dceejay in this discussion, where it is advised to send a message containing e.g. null in those cases. But perhaps that has become possible in the meantime...
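If suppression like that is possible, the server-side check could look something like this hypothetical sketch (the `widgetVisible` flag and the exact `beforeEmit` contract are assumptions for illustration, not the dashboard's actual API):

```javascript
// Hypothetical sketch: suppress a message server-side when the widget is
// hidden. `widgetVisible` is an assumed flag that real code would have to
// maintain from client-side visibility reports.
let widgetVisible = true;

function beforeEmit(msg) {
  // When hidden, emit null instead of the message so the (potentially
  // large) mp4 fragment is not pushed to the client, per the advice above.
  return { msg: widgetVisible ? msg : null };
}
```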
Anyway, perhaps I'm going too much off topic now. If my proposal makes sense (and needs further discussion), we'd better split this into a separate topic...
Sorry for not being clear, but I think there might be a little confusion about how the nodes talk to each other. The mp4frag node receives buffer data and turns it into playable fragmented mp4, then serves it to any player that can play it, such as a browser implementation of hls.js or an external app such as VLC. Video segments are NOT sent from mp4frag to ui_mp4frag using the built-in messaging system. The main communication is just telling it where the playlist will be and when it is available. It is then up to the player to play the video or not by consuming it from the http server route where the hls.m3u8 is served.
As for the closing of un-watched videos, that is on my list and I was playing with it today. It will involve a combination of IntersectionObserver and document.visibilityState. I had it working pretty well, shutting off video that scrolled out of position or when the tab was minimized, and turning video back on when it scrolled back into position and the tab was visible. The problem is that the 3 browsers I was testing on each seemed to want something different from me. I am trying to find a way to please them all. Had to give up for tonight.
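The combination described above could be sketched something like this (a minimal illustration, not the node's actual code; the 25% threshold and the helper names are assumptions):

```javascript
// Sketch: toggle playback from two signals, viewport intersection and tab
// visibility. The decision is pulled into a small helper for clarity.
function shouldPlay(isIntersecting, visibilityState) {
  // Play only when the element is on screen AND the tab is visible.
  return isIntersecting && visibilityState === 'visible';
}

function wireUp(video) {
  let intersecting = true;

  const apply = () => {
    if (shouldPlay(intersecting, document.visibilityState)) {
      video.play().catch(() => {}); // play() may reject under autoplay rules
    } else {
      video.pause();
    }
  };

  // Fires when the element scrolls in/out of the viewport.
  new IntersectionObserver(entries => {
    intersecting = entries[entries.length - 1].isIntersecting;
    apply();
  }, { threshold: 0.25 }).observe(video); // "visible" = at least 25% shown

  // Fires when the tab is minimized or switched.
  document.addEventListener('visibilitychange', apply);
}
```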
The green will definitely be from the cam video source itself. Since we are using -c:v copy, ffmpeg is copying the source and sending it out as-is. If compatible, do you have TCP set for the RTSP input, -rtsp_transport tcp?
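For reference, forcing TCP on the input might look like this (the camera URL and the output side are placeholders, not your actual command):

```shell
# Force RTSP over TCP; UDP transport can drop packets and produce green
# smearing/corruption in the copied stream. URL below is a placeholder.
# -rtsp_transport must appear before -i to apply to the input.
ffmpeg -rtsp_transport tcp -i "rtsp://user:pass@camera-ip:554/stream" \
  -c:v copy -f mp4 -movflags +frag_keyframe+empty_moov+default_base_moof pipe:1
```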
I've done quite a lot with MQTT and security cameras to feed an AI sub-system. It works fantastically on i5-or-better class systems, but multiple HD cameras choke IoT-class hardware. The issues seem to be in the networking layer, where large backlogs build up.
For security images I don't think you want to "sacrifice quality" at all. If you remember the Michael Brown riots in the US, looters were found "not guilty" because the VHS-quality security camera images were not deemed "beyond a reasonable doubt" in image quality for identification.
I was finally able to push an update using a custom video player that relies on socket.io instead of hls.js over http.
It was tricky to incorporate with the lifecycle of node-red. There seem to be some issues connecting with a custom path and namespace without using forceNew: true. The trick was to use the ws:// url with forceNew: false, so as not to make a bunch of extra socket connections.
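The connection options described above might be assembled something like this (a sketch; the path, namespace, and function name are placeholders, not what the node actually uses):

```javascript
// Sketch of socket.io-client options for a custom path + namespace without
// piling up connections. Names here are illustrative placeholders.
function buildSocketOptions(basePath) {
  return {
    path: `${basePath}/socket.io`, // custom engine.io path on the server
    forceNew: false,               // reuse the underlying manager instead of
                                   // opening an extra connection per namespace
    transports: ['websocket']      // go straight to ws://, skip polling
  };
}

// Usage (socket.io-client assumed):
//   const socket = io('ws://localhost:1880/mp4frag', buildSocketOptions('/mp4frag'));
```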
Added the feature that if document visibility changes, it will load or unload the video source.
Ran out of time to detect when a video scrolls out of view to toggle it via load/unload.
If you want to try the newer version,
```
cd .node-red
npm update node-red-contrib-ui-mp4frag
```
There is a new option for picking your preferred video playback. Right now it is a crude array that checks your preferences in the listed order against what is supported in the browser and what is available from the playlists given in the server payload. The following will try socket.io first, then hls.js, then native hls (for mobile Safari), then an mp4 file if available:

```json
["socket.io", "hls.js", "hls", "mp4"]
```

If you only wanted to use hls.js and fall back to native hls:

```json
["hls.js", "hls"]
```

If you only wanted to use socket.io with no fallback option:

```json
["socket.io"]
```
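The selection walk described above could be sketched like this (illustrative names, not the node's actual implementation):

```javascript
// Sketch: walk the ordered preference list and return the first playback
// type that the browser supports AND the server's playlists offer.
function pickPlayback(preferences, browserSupports, serverOffers) {
  for (const choice of preferences) {
    if (browserSupports.has(choice) && serverOffers.has(choice)) {
      return choice;
    }
  }
  return null; // nothing playable with these preferences
}

// e.g. a browser without socket.io support falls through to hls.js:
// pickPlayback(['socket.io', 'hls.js', 'hls', 'mp4'],
//              new Set(['hls.js', 'hls', 'mp4']),
//              new Set(['socket.io', 'hls.js']))  // → 'hls.js'
```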
I need to figure out how to make the option into a re-orderable list that can only add my selected options in the node-red settings panel.
node-red-contrib-mp4frag
The server side msg.payload passed to the ui player now includes the extra data for the front end to consume:
Amazing, exceptional picture quality, and virtually no load on the CPU (now down to 2.5%) on an RPi3B+. Is that possible!!? I don't have cameras with h.264 support, but it must be great to stream video from them with this solution!!
Next thing on my mind is to feed the stream into an AI analyzer for object detection. I assume the feed needs to be disassembled frame-by-frame somehow, but that should not be a huge problem.
Picture below is from Safari, but it works just as well with Chrome (tested on Mac and Windows).
Great job, I notice much less delay with the live stream.
The RPi 3B runs at 20% CPU with 3 IP cams in HD (640x360): approved!
On the other hand, I don't know if it's a bug, but when I switch to PiP mode, leave it open for several tens of seconds, and return to normal mode, the streaming is stopped, BUT the image changes about every 6 s. You must press the arrow to resume normal playback.
Playback has fallen behind and is not catching up.
Is there a possibility to catch up on the delay automatically, or with a command or a "forward" arrow?
I was trying out the PiP in 3 browsers on Mac: Chrome, Firefox, and Safari. I can only duplicate your issue if I press the X button instead of the return-to-non-PiP button. Are you pressing the X? It is your browser's behavior to trigger a pause when pressing the X, but to simply return to regular viewing when pressing the other button.
That sounds great, but I hate to get excited until I know whether you are passing the video through ffmpeg in an exec node and node-red-contrib-mp4frag, or just using the ui-mp4frag. In my personal experience running 14 cams, it seems that relaying the video over the socket.io server actually runs much worse than using regular http via hls.js, but it is true that I am deliberately beating up the pi to see what breaks by having several browsers open at the same time, live-streaming 3 x 14 cams. I have read that socket.io has bad performance, so I may also add a regular websockets feature to move away from socket.io.
I will definitely need your help with that. I only hope that the editable list can do what I want. I really need an ordered, editable list that will only use my available options. Depending on the types of video we will support, there have to be options for the user's preference of which ones to try and in what order, so ordering will be an important part of the list.
Right now, with the socket.io implementation, I keep track of the buffered segments by constantly feeding them to the MediaSource. I have to monitor its duration and remove old media buffers so that your browser does not consume too much memory. The tricky part is determining how much buffer to keep to allow for smooth playback. This will eventually be a setting that you can tweak. Right now, I use a minimum of 10 seconds or 3.3 x the duration of the last segment received (3 segments' duration plus 10%), whichever is greater. This seemed like a good balance for my videos not to freeze while waiting for the next segment. Also, I move the play head forward if it ends up being behind the currently buffered video after removing old pieces. That's why you see the video update and skip forward while paused.
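The keep-duration rule above can be sketched as follows (the numbers come from the post; the function names and the trim helper are illustrative, not the node's code):

```javascript
// Sketch of the buffer bookkeeping described above: keep whichever is
// greater, 10 seconds or 3.3 x the last segment's duration (three
// segments' worth plus 10%).
const MIN_KEEP_SECONDS = 10;

function keepDuration(lastSegmentDuration) {
  return Math.max(MIN_KEEP_SECONDS, 3.3 * lastSegmentDuration);
}

// Given the buffered end time, compute how much old media can be removed
// from the SourceBuffer, i.e. the range [0, trimPoint) is expendable.
function trimPoint(bufferedEnd, lastSegmentDuration) {
  return Math.max(0, bufferedEnd - keepDuration(lastSegmentDuration));
}
```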