Ui_mp4frag for DB2 - how to continue

Hi folks,

I have been implementing some pull requests for dashboard D2, but now I would finally like to continue migrating some of the old ui nodes.

I have some questions about another discussion:

This node has top priority for me, because I need it to migrate my own dashboard to D2. While digging through the code of the old ui node, some questions popped up in my head, so hopefully our video expert @kevinGodell can help me with that. Others are of course welcome to share how they use this node, which helps me understand how the node should behave.

The old ui-mp4frag node supported the following modes (see the sketch after the list for how the two HLS modes typically differ on the client side):

  • MP4 files
  • M3U8 playlist live streaming via hls.js
  • M3U8 playlist live streaming via native HLS
  • Pushing fragments via websocket (socket.io)
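
For context, here is a minimal sketch (not the old node's actual code) of how the hls.js and native HLS modes typically differ on the client side. It assumes hls.js is loaded via a script tag, and the element id and playlist URL are placeholders:

    // Hedged sketch: pick hls.js or native HLS for an m3u8 playlist.
    const video = document.getElementById('my-video');   // hypothetical element
    const playlistUrl = '/mp4frag/cam1/hls.m3u8';         // hypothetical URL

    if (window.Hls && Hls.isSupported()) {
        // Browsers without native HLS (e.g. Chrome, Firefox): hls.js feeds MediaSource
        const hls = new Hls();
        hls.loadSource(playlistUrl);
        hls.attachMedia(video);
    } else if (video.canPlayType('application/vnd.apple.mpegurl')) {
        // Safari / iOS: the browser can play the m3u8 playlist natively
        video.src = playlistUrl;
    }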

I think the best name for this node would be node-red-dashboard-2-ui-video, because it is a general video playing node. Moreover, some less-technical folks might have no idea what mp4-frag means.

At the time I was very much in a pushing-data mood: you simply inject messages with video fragments into your node, and the video magically appears in your dashboard. But I have no plans to implement pushing via socket.io in this new node, for a couple of reasons:

  • Although it seems the easiest way to implement with a simple flow, that is not the case. You don't want to push lots of data to all your smartphones, so you need to determine whether the clients actually want the data and only push it when required (see the sketch after this list). This makes everything more complex.
  • There is quite a lot of code required, which makes the node difficult to maintain.
  • I don't have enough free time to implement and keep maintaining such an implementation.
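
Just to illustrate that "only push when required" point, here is a hedged sketch of how it could look with socket.io rooms. This is not how the old node did it; the event names, room ids and port are made up:

    // Hedged sketch: only emit video segments when at least one client subscribed.
    const http = require('http');
    const { Server } = require('socket.io');

    const httpServer = http.createServer();
    const io = new Server(httpServer);

    io.on('connection', (socket) => {
        // a client explicitly asks for a camera feed before anything is pushed to it
        socket.on('subscribe', (cameraId) => socket.join(cameraId));
        socket.on('unsubscribe', (cameraId) => socket.leave(cameraId));
    });

    function pushSegment(cameraId, segmentBuffer) {
        // skip the emit entirely if nobody joined the room for this camera
        const room = io.sockets.adapter.rooms.get(cameraId);
        if (room && room.size > 0) {
            io.to(cameraId).emit('segment', segmentBuffer);
        }
    }

    httpServer.listen(3000);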

There are some features in the codebase of the old ui node that are not quite clear to me. While looking at its code, a couple of questions came up:

  1. Why is it necessary to (un)load only after a timeout instead of immediately?
  2. Why are intersection changes tracked?
  3. Why do you need a visibility change handler?
    EDIT: this is related to the "unload video source if player becomes hidden" and "threshold percentage used to determine if player is hidden" features of the old node.
  4. When unloading a video, why are the following statements needed?
    videoElement.removeAttribute('src');
    videoElement.load();
    
  5. When unloading a video, is it required to explicitly call hlsJsPlayer.destroy()? Because at first sight hls.js seems to be able to switch between 2 different streams without it.

And I'm not sure whether the "Picture in Picture" and "Full screen" modes are used much...

Thanks!!
Bart


I'll start by acknowledging that the code is horrible, which is one reason I abandoned it with the plan to take a different route (which never materialized). Much of the hackery was to deal with supporting various browsers and their slight differences.

Looking at the source, I can't say why it is needed, only that it must have been needed based on my testing. It may have been to catch edge cases where a previous video was still being loaded when a new msg arrived that would trigger a new video source. Or maybe it was to give a little lead time in case you scrolled the video out of view, triggering the intersection observer. Yes, that seems to jog my memory: when quickly scrolling a video out of view and then back in, there was no need to unload the video source and then re-load it. It is probably not necessary for what you are working on.
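
A hedged sketch of that delayed-unload idea; the helper names and the 5 second delay are placeholders, not the old node's actual values:

    // Delay the unload so that quickly scrolling a player out of view and back
    // does not force a full reload of the stream.
    let unloadTimer = null;

    function scheduleUnload(videoElement) {
        if (unloadTimer === null) {
            unloadTimer = setTimeout(() => {
                unloadTimer = null;
                unloadVideoSrc(videoElement);   // hypothetical helper (see the snippet further down)
            }, 5000);
        }
    }

    function cancelUnload() {
        // called when the player becomes visible again before the timer fires
        if (unloadTimer !== null) {
            clearTimeout(unloadTimer);
            unloadTimer = null;
        }
    }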

Somebody had requested that the videos stop loading when hidden. I took it one step further and had the video unload when scrolled out of view and reload when scrolled back into view.
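
That scroll-based behaviour can be sketched with an IntersectionObserver; the threshold value and the load/unload helpers are placeholders (the threshold presumably corresponds to the "threshold percentage" setting mentioned above, and in practice this would be combined with the timer idea from the previous sketch):

    const videoElement = document.getElementById('my-video');   // hypothetical element

    const observer = new IntersectionObserver((entries) => {
        entries.forEach((entry) => {
            if (entry.isIntersecting) {
                loadVideoSrc(videoElement);      // hypothetical: (re)attach the stream
            } else {
                unloadVideoSrc(videoElement);    // hypothetical: detach src / destroy hls.js
            }
        });
    }, { threshold: 0.1 });   // fires when roughly 10% of the player is visible

    observer.observe(videoElement);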

Similar answer to the previous question, but this handler triggers the video to unload when you switch to a different browser tab.
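
A minimal sketch of that tab-switch case using the Page Visibility API (the load/unload helpers are again hypothetical):

    document.addEventListener('visibilitychange', () => {
        if (document.hidden) {
            unloadVideoSrc(videoElement);    // stop loading while the tab is hidden
        } else {
            loadVideoSrc(videoElement);      // resume when the tab becomes visible again
        }
    });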

Well, the code was written quite a while ago. Making video handling work reliably across browsers was very painful back then. It must have been necessary at that time, but maybe not today. (Correction: it is still needed for reliably reloading sources.)

I will have to get back to you on that. I remember difficulties with hls.js' built-in cleanup code when loading new video sources. At the time, it was more reliable to just completely destroy everything and start a new video player. Oh, I just checked my newer source code, and it seems I was still calling .destroy() on the hls.js instance and removeAttribute('src') on the video element:

    #unloadVideoSrc() {
        // tear down the hls.js instance first, if one was created
        if (this.#hlsJsInstance?.destroy) {
            this.#hlsJsInstance.destroy();
            this.#hlsJsInstance = undefined;
        }
        // then detach the source from the video element and reset it
        if (this.#video?.src) {
            this.#video.removeAttribute('src');
            this.#video.load();
        }
    }

I remember some browser compatibility issues with that; in particular, Firefox refuses to give a certain piece of data when a video element goes full screen. What I was doing was unloading the other video feeds when one player went full screen, since the other video players would not be visible anyway.
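
Roughly, that behaviour could look like the sketch below; the selector and the load/unload helpers are placeholders, not the old node's code:

    document.addEventListener('fullscreenchange', () => {
        const fullscreenEl = document.fullscreenElement;   // null when leaving full screen
        document.querySelectorAll('video').forEach((video) => {
            if (fullscreenEl && video !== fullscreenEl) {
                unloadVideoSrc(video);   // unload the players hidden behind the full screen one
            } else {
                loadVideoSrc(video);     // reload everything when full screen is exited
            }
        });
    });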

In the recent past, I had been working on a video player to replace the old one, but ran into problems. One major problem is that Apple decided to make their own standard for video playback in their latest browser. I ditched the idea of making my own socket.io video implementation for that reason, but hls.js does support the latest iOS browser requirements for handling video chunks.

Another consideration for what I started recently and subsequently abandoned was supporting jpeg playback in the video player (using the video element's poster attribute and repeatedly fetching an updated jpeg, then loading it as a blob with window.URL.createObjectURL(blob)). I was trying to build the player so that you could give it 3 video feeds: one high-bitrate HLS stream, one low-bitrate HLS stream, and some type of jpeg source. The plan was that the video sources could be switched on the client side by pressing the applicable button (or via some other logic). The main difficulty I had with that was getting things sized properly in db2 to fit the various layouts. I kinda ran out of time and gave up.
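
The jpeg part of that idea could be sketched roughly as below; the element id, URL and 1 second interval are placeholders, and revoking the previous object URL avoids leaking blobs:

    const video = document.getElementById('my-video');              // hypothetical element
    let previousPosterUrl = null;

    setInterval(async () => {
        const response = await fetch('/mp4frag/cam1/still.jpeg');   // hypothetical jpeg source
        const blob = await response.blob();
        if (previousPosterUrl) {
            URL.revokeObjectURL(previousPosterUrl);                  // free the previous blob URL
        }
        previousPosterUrl = URL.createObjectURL(blob);
        video.poster = previousPosterUrl;                            // show the still in the player
    }, 1000);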


I think those are correct behaviours and they should be preserved.


@kevinGodell,
Thanks for sharing your knowledge and experience!
I think I know enough - for now - to get started.
Will keep it simple, and the node can always be extended later on to support edge cases.

If I am not mistaken, I was one of the people asking you at the time to support pushing segments via socket.io. So I am partly responsible for the complexity of your node :no_mouth:
And playing video is complex stuff after all anyway...

@krambriw,
Thanks for confirming Kevin's statements above.


Oh, the node does that by itself? And here I put a lot of effort into a workaround to check whether the view page is still active and to end the ffmpeg streaming processes when leaving. Anyway, I could use a DB2 port too.

There are a series of issues to be solved. The more people jump in to help solve those, the faster it will go. If I have to do it all on my own, it will take a long time, and then all my other stuff (like the svg node migration for D2) will stay on my long backlog. That is the way it is...


I could help with testing stuff, but I am not skilled enough to solve these video streaming things. The port to DB2 is still weeks of work for me too. I have never coded a node myself.