How to display CCTV camera in dashboard (RTSP)

The following links have a lot of information about different video-streaming topics, including WebRTC:

https://www.linux-projects.org/demos/
https://www.linux-projects.org/uv4l/tutorials/

May be useful to someone.

I’m certainly not averse to the idea of a core dashboard node to support this. Ideally it could be a “media” widget and support both stills and video.


Absolutely. The main potential issue I see is that generally the AI (or other downstream processing) can't keep up with the full rtsp frame rate unless only a single stream is being processed or it's a low-rate stream, so many frames need to be "dropped". OpenCV seems to do this internally if the read() method is not called often enough. My reading threads drop the frame if the queue that feeds the AI threads is full, and after a short delay (time.sleep(0.001)) to force a context switch, they resume.

You can't easily use rbe on binary buffers or you quickly run out of memory -- the issue that got me onto this forum initially and that only came to light recently (Memory leak, what am I doing wrong?).

OpenCV also has a setting to adjust the size of its internal queue (e.g. CAP_PROP_BUFFERSIZE), but whether it is honored seems to depend on the version and capture backend.

Maybe I missed something when I initially looked at uv4l and webRTC, but they seemed to be for streaming video, i.e. sourcing a stream of compressed frames, whereas the issue I have is reading streams to get uncompressed frames to feed other processing or display. I didn't dive into it very deeply because of this.

I do recall using what appears to be a very early version or predecessor to uv4l to implement a "loopback" v4l2 device to display images from Motion software for camera and motion detection region setup.

If I'm incorrect, and it offers a way to pull frames from rtsp and/or other streams it'd definitely be worth another look.

This could help with displaying images in the dashboard, but my understanding is that images meant for display on the dashboard are generally not streams but individual images sent in sequence -- like those produced by the ffmpeg command that "decodes" an rtsp stream.

Maybe the links below would be helpful:


@wb666greene: Yes, that is indeed what happens when the websocket is loaded too heavily. The dashboard won't be responsive anymore. That is why I wanted (in my node-red-contrib-ui-camera node) to have two modes:

  • Push the messages to the dashboard, since that is fairly easy for users (just send a message with an image to the node's input). But I will warn them that this is not advised for lots of data.
  • Pull the messages from the node-red flow (by using a 'src' attribute). A bit more work to set up, but much better performance (see the sketch below).
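
A minimal sketch of what that "pull" mode could look like on the flow side, assuming a (hypothetical) http-in node listening on GET /camera/latest, the Function node below, and an http-response node; some other part of the flow would store the most recent JPEG with flow.set("lastFrame", msg.payload):

// Function node between http-in (GET /camera/latest) and http-response.
// Serves whatever JPEG was last stored in flow context by the decoding part of the flow.
const frame = flow.get("lastFrame");
msg.payload = frame || "no frame available yet";
msg.statusCode = frame ? 200 : 503;
msg.headers = { "Content-Type": frame ? "image/jpeg" : "text/plain" };
return msg;

The dashboard side then just points an <img> (or the camera node's 'src' attribute) at that endpoint, so it only pulls a frame when it needs one instead of having every frame pushed over the websocket.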

@happytm: WebRTC is indeed also an option. But perhaps somebody can make a dedicated UI node for webrtc later on. And as @wb666greene responded to your post, I also 'think' we cannot use it here for decoding a stream (like rtsp) into individual images.

@dceejay: in that case are you aiming at the node-red-contrib-ui-media? In the first version of my node-red-contrib-ui-camera node, I also had provisions for showing images and video files. But I removed them from my node, since camera content has no finite length (unlike an image or video file). And I want to have PTZ controls, zones (with polygons), ... I'm not sure it is wise to put everything together in a single node just because they both show an image (sequence)? To me camera images belong in a dedicated node...

The word "absolutely" sounds as music in my ears. This means that all experiments above have at least resulted in something usable. :tada::balloon:

But indeed, like you say: by having solved the rtsp stream decoding, we will end up with a massive amount of images in Node-RED. New developments will be required to handle all this data. But one step at a time ... We have already done a lot of experiments with OpenCV matrices in Node-RED, but we had lots of issues when messages (containing references to OpenCV matrices) are cloned by Node-RED, and when NodeJS starts garbage collecting those messages. But I have no time at the moment to dig into that again ...

Something else for the mix: https://www.npmjs.com/package/beamcoder

There's a lot here, but as the original issue was viewing RTSP in node-red-dashboard, have you considered just restreaming the RTSP stream to something that you can decode natively in the browser, like HLS? Then you can put a video player in a dashboard template widget that loads the HLS stream directly. It also has the added benefit that this format can be cast to Chromecast if you want to view your cam stream on a TV (which is why I chose HLS).

I just run a docker image running nginx-rtmp. The setup and config are a little involved, but what isn't when it comes to video streaming? If you need RTSP specifically, I don't remember for sure whether nginx-rtmp will decode that directly, but I'm pretty sure you can pass it through ffmpeg and feed that into the module to feed the HLS stream, as I've seen a few projects doing that.

I just use HLS.js in a template widget to display my camera feeds after that, and I never really need to touch the restream. It's running on a NAS unit right now, and doesn't seem to use a lot of CPU / memory at all to do the restream. The only downside is that it loses the stream and the docker image needs to be restarted once in a while... but I just have a little monitoring script that tries to access the stream once a minute and restarts it if needed. That has made it rock solid for more than a year now.

Hi Michael (@i8beef),
Thanks for the information! I have considered that, but haven't had time yet to investigate it ... I suppose we could do something like that in Node-RED in two different ways:

  • Use a single Exec node running FFmpeg that restreams the rtsp directly to hls, which probably results in better performance (see the example command below).
  • Use two separate Exec nodes running FFmpeg: one that decodes the rtsp stream into images, and another that creates an hls stream from those images. Performance will be worse, but now you can do all kinds of processing on the images (e.g. face recognition, license plate recognition, ...).
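
For the first option, an FFmpeg invocation along these lines could go into the Exec node (just a sketch -- the rtsp URL, segment settings and output path are placeholders, and the resulting .m3u8 plus its segments still need to be served by an HTTP server, which is the part nginx-rtmp normally handles):

ffmpeg -rtsp_transport tcp -i "rtsp://user:pw@IPaddress:554/stream" -c copy -f hls -hls_time 2 -hls_list_size 5 -hls_flags delete_segments /path/to/webroot/cam1.m3u8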

Don't hesitate to share your flow with us!
I had a "very" quick look at hls.js, but from their readme page it isn't clear to me at first sight how it works:

It works by transmuxing MPEG-2 Transport Stream and AAC/MP3 streams into ISO BMFF (MP4) fragments. This transmuxing could be performed asynchronously using Web Worker if available in the browser.

Does this mean that the Node-RED flow needs to send mp4 fragments to the template node in the dashboard??

Well, I have one global template that just includes the HLS.js reference from CDN:

<script src="https://cdn.jsdelivr.net/npm/hls.js@latest"></script>

And then just a template node that sets up the video player:

<div style="width: 500px; height: 281px; margin: 0 auto;">
    <video id="camera1Video" width="500" height="281" controls muted>
    </video>
    <script>
        (function(scope) {
            scope.$watch('msg.payload', function(data) {
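                // poll until the HLS.js library (loaded from the CDN in the global template) is available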
                var timer = setInterval(function() {
                    if (!window.Hls) return;
                    clearInterval(timer);
                    
                    if (data !== undefined && data.endsWith(".m3u8")) {
                        var camera1Video = document.getElementById('camera1Video');
                        if(Hls.isSupported()) {
                            var hls = new Hls();
                            hls.loadSource(data);
                            hls.attachMedia(camera1Video);
                            hls.on(Hls.Events.MANIFEST_PARSED,function() {
                                camera1Video.play();
                            });
                        } else if (camera1Video.canPlayType('application/vnd.apple.mpegurl')) {
                            camera1Video.src = data;
                            camera1Video.addEventListener('canplay',function() {
                              camera1Video.play();
                            });
                        }
                    }
                }, 100)
            });
        })(scope);
    </script>
</div>

And I just send in a message at startup with a msg.payload of the URL to the m3u8. That's it for node-red. Everything else is making sure I have an HLS stream somewhere.

I personally wouldn't run that in node-red... Ideally the camera itself would provide an HLS stream, but Ubiquiti has refused to see the wisdom in that, despite it being one of the highest-rated feature requests by users for like two years....

I prefer having a separate process doing that because I don't want node-red affected at all by the huge overhead that video processing always entails. I mean, I guess you could probably do it somehow, and more power to ya, but video restreaming seems like such a unique processing job... I tend to prefer a system made up of many small standalone modules instead of shoving everything into a single process, though, so maybe I'm weird.

I DON'T think you can just restream from ffmpeg directly... it would get you the stream files, but you'd have to host and push them through an HTTP server of some sort, which still has to deal with the CORS and HTTP header concerns. But yeah, any standalone configurable restreaming processor would work, I guess. Like I said, I run nginx + the nginx-rtmp module in a docker image that handles that for me, and while it has its caveats, I haven't found anything even close to similar (have any ideas?). Honestly, if your NVR software will do the restream (BlueIris?), that'd be easier, but I'd think this whole thing would be moot if you found you could do that already, because the two nodes I have above would then handle it.

If you want to do actual image processing in node-red, which it sounds like you guys are kicking the tires on, that's a different animal obviously... a cool project if you could essentially build a standalone NVR out of a dedicated node-red instance though!


Dear Bart,

I'm new to node-red and ip cameras.
It is an Onvif-based camera. I installed the Onvif palette in node-red.
Using the Onvif discovery node gives me the xaddress in my debug window: http://192.xxx.xxx.xxx:8899/onvif/device_service. But when I use this address in the Onvif media node, it shows disconnected. I don't understand it. Has it something to do with permissions? Node-red runs on my Raspberry Pi 3. I open node-red in a browser from a Linux pc.
Thank you for your time reading.
Yvonne

Not sure if it's appropriate to wake up this old thread, but I'm trying to use ffmpeg in an exec node to pipe rtsp stream images to my AI system via MQTT messages.

I'm only getting about 1/4 of the image via MQTT. They should be ~900K jpegs but I'm only getting about 265K via the MQTT message.

Are there any node-red-imposed limits on the size of what is returned via stdout from an exec node, or on how big an MQTT output buffer can be?

The exec node is in "spawn" mode and the command is:
ffmpeg -f rtsp -i "rtsp://IPaddress/cam/realmonitor?channel=4&subtype=0" -f image2pipe pipe:1

This could explain why some earlier examples in this thread only worked with small images.

Hey @wb666greene,
I assume you have checked whether the output messages (of the Exec node) also contain incomplete images? I mean, is the buffer length at the output also 1/4 of what you expect? Just to rule out that it is somehow related to MQTT or some other middleware system ... And if the output messages are also incomplete, unfortunately I have no idea whether it is an OS issue (like this) or an ffmpeg setting.
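
A quick way to check is a Function node wired to the Exec node's stdout output, with something like this in it (just a debug sketch; node.warn prints to the debug sidebar):

// log the size and type of each chunk coming out of the Exec node
node.warn("chunk: " + msg.payload.length + " bytes, " + (Buffer.isBuffer(msg.payload) ? "Buffer" : typeof msg.payload));
return msg;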

I'm trying to figure out how to modify the ffmpeg command to produce file01.jpg, file02.jpg, ... but I've had other homeowner issues interfere.

Here are two images from the same rtsp stream as output by my AI detection, as close in time as I can find. You can see the top 1/4 of the ffmpeg image is fine; the bottom 3/4 is just old data from previous memory usage.

Cam2 is OpenCV reading the rtsp stream from the camera. Cam9 is the node-red ffmpeg exec node pushing the image as a node-red MQTT output buffer.

Seems that somewhere a maximum size is being set that is too small for this application.

Initially I used Walter Krambriw's python sample code to push saved jpeg files as MQTT buffers to the same MQTT broker as a test and the images were fine, so I'm suspecting something in the node-red chain is limiting the output from the exec node or node-red MQTT output.

My plan is to dedicate a box to reading all the rtsp camera streams and pushing them out as MQTT buffers for my AI experiments.

I may have run into a similar problem when streaming data from a software-defined radio application. Have you checked whether the exec node is breaking frames into multiple messages?

I've had a lot of life interruptions these past few days so not much progress, but yes I'm trying to isolate the problem.

But if what you are suggesting is the situation, it would be the node-red exec node that would be setting the size limit, so hopefully a node-red expert could answer how to increase it.

Au contraire - it is your OS just sending buffers as dictated by the application you are running. If it calls flush at any time you will get a chunk, or when some internal buffer limit is hit. You may have some control; for example, if you use Python, the -u option tells the app not to buffer.

So you are saying that it's ffmpeg that is sending the truncated images? This is the command I launch in an exec node with spawn mode:

ffmpeg -f rtsp -i "rtsp://user:pw3@IPaddr:554/cam/realmonitor?channel=4&subtype=0" -f image2pipe pipe:1

The command:
ffmpeg -f rtsp -i "rtsp://user:pw3@IPaddr:554/cam/realmonitor?channel=4&subtype=0" -f image2 out_img-%03d.jpeg

produces complete images, but I'm no ffmpeg expert.

(attached image: out_img-039)
This is the last image produced after I pressed Ctrl-C to stop it. All the previous images are fine (~5/second), although a few have small regions of "mosaic breakups", as some warnings get thrown when the stream opens:
[h264 @ 0xc30e60] concealing 5790 DC, 5790 AC, 5790 MV errors in I frame
Past duration 0.999992 too large
Past duration 0.884987 too large
[NULL @ 0x774d20] RTP: missed 159 packets
Past duration 0.999992 too large
[h264 @ 0xb5a720] error while decoding MB 37 21, bytestream -17
[h264 @ 0xb5a720] concealing 5652 DC, 5652 AC, 5652 MV errors in I frame
Past duration 0.929985 too large
Past duration 0.954994 too largeN/A time=00:00:03.80 bitrate=N/A dup=0 drop=5

I've no idea what to make of these, but I see various warnings when decoding rtsp streams with OpenCV too, and the images passed to the AI are "good enough" and all seems reliable.

Are you sure they are truncated? Is it not that they are split into sections, which if joined back together, as is done if you send them to a file, give the complete image?

Maybe - I'm not quite sure about what the MQTT input is doing with the messages from the exec node via stdout. If they were broken up into multiple pieces, wouldn't I get multiple MQTT messages?

If they are being broken up how would I put them together in node-red?

This is all derived/inspired from Bart's "Big Buck Bunny" example, which works for small images but seems to break with images larger than 320x240 when I try.
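
For what it's worth, one way to put them back together in node-red would be a Function node between the Exec node and the MQTT-out node that accumulates the stdout chunks and only forwards complete JPEGs, splitting on the JPEG start/end markers. This is just a sketch (it assumes the Exec node is delivering binary Buffers, and the naive marker search is usually good enough for ffmpeg's image2pipe output):

// Function node: reassemble complete JPEGs from the Exec node's stdout chunks.
// Partial data is kept in node context between messages.
let buf = context.get("buf") || Buffer.alloc(0);
buf = Buffer.concat([buf, msg.payload]);            // assumes msg.payload is a Buffer

const SOI = Buffer.from([0xFF, 0xD8]);              // JPEG start-of-image marker
const EOI = Buffer.from([0xFF, 0xD9]);              // JPEG end-of-image marker
const images = [];

let start = buf.indexOf(SOI);
while (start !== -1) {
    const end = buf.indexOf(EOI, start + 2);
    if (end === -1) break;                          // frame not complete yet, wait for more chunks
    images.push({ payload: buf.slice(start, end + 2) });
    buf = buf.slice(end + 2);
    start = buf.indexOf(SOI);
}

context.set("buf", buf);                            // keep any leftover partial frame
return [images];                                    // one output message per complete JPEG

Each message that comes out should then be one complete ~900K jpeg that can go straight to the MQTT-out node.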