How to display CCTV camera in dashboard (RTSP)

Well, I have one global template that just includes the HLS.js reference from a CDN:

<script src="https://cdn.jsdelivr.net/npm/hls.js@latest"></script>

And then just a template node that sets up the video player:

<div style="width: 500px; height: 281px; margin: 0 auto;">
    <video id="camera1Video" width="500" height="281" controls muted>
    </video>
    <script>
        (function(scope) {
            // Watch for incoming messages; msg.payload should be the URL of the .m3u8 playlist
            scope.$watch('msg.payload', function(data) {
                // Poll until the HLS.js library (loaded from the CDN above) is available
                var timer = setInterval(function() {
                    if (!window.Hls) return;
                    clearInterval(timer);

                    if (data !== undefined && data.endsWith(".m3u8")) {
                        var camera1Video = document.getElementById('camera1Video');
                        if (Hls.isSupported()) {
                            // Use HLS.js (MSE-based playback)
                            var hls = new Hls();
                            hls.loadSource(data);
                            hls.attachMedia(camera1Video);
                            hls.on(Hls.Events.MANIFEST_PARSED, function() {
                                camera1Video.play();
                            });
                        } else if (camera1Video.canPlayType('application/vnd.apple.mpegurl')) {
                            // Fall back to native HLS support (e.g. Safari)
                            camera1Video.src = data;
                            camera1Video.addEventListener('canplay', function() {
                                camera1Video.play();
                            });
                        }
                    }
                }, 100);
            });
        })(scope);
    </script>
</div>

And I just send in a message at startup with a msg.payload of the URL to the m3u8. That's it for node-red. Everything else is making sure I have an HLS stream somewhere.
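For example, an Inject node set to fire once at startup, wired through a Function node like this (a minimal sketch; the URL is just a placeholder for wherever your playlist is hosted), is enough to start playback:

// Hypothetical example: send the HLS playlist URL to the template at startup
msg.payload = "http://192.168.1.50:8080/hls/cam1.m3u8"; // placeholder URL
return msg;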

I personally wouldn't run that in node-red... Ideally the camera itself would provide an HLS stream, but Ubiquiti has refused to see the wisdom in that despite it being one of the highest rated feature requests by users for like two years....

I prefer having a separate process doing that because I don't want node-red affected at all by the huge overhead that video processing always entails. I mean, I guess you could probably do it somehow, and more power to ya, but video restreaming seems like such a unique processing job... I tend to prefer a system made up of many small standalone modules instead of shoving everything into a single process, though, so maybe I'm weird.

I DON'T think you can just restream from ffmpeg directly... it would get you the stream files, but you'd have to host and push them through an HTTP server of some sort, which still has to deal with the CORS and HTTP header concerns. But yeah, any standalone configurable restreaming processor would work, I guess. Like I said, I run nginx + the nginx-rtmp module in a docker image that handles that for me, and while it has its caveats, I haven't found anything even close to similar (have any ideas?). Honestly, if your NVR software will do the restream (BlueIris?) that'd be easier, but I'd think this whole thing would be moot if you found you could do that already, because the two nodes I have above would then handle it.
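For what it's worth, ffmpeg's own HLS muxer can at least produce the playlist and segments; you would still need a web server in front of the output folder to handle the CORS/header side. A rough, untested sketch (the camera URL and output path are placeholders):

ffmpeg -rtsp_transport tcp -i "rtsp://user:pw@IPaddr:554/cam/realmonitor?channel=1&subtype=0" -c copy -f hls -hls_time 2 -hls_list_size 5 -hls_flags delete_segments /var/www/hls/cam1.m3u8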

If you want to do actual image processing in node-red, which it sounds like you guys are kicking the tires on, that's a different animal obviously... a cool project if you could essentially build a standalone NVR out of a dedicated node-red instance though!


Dear Bart,

I'm new to node-red and IP cameras.
It is an ONVIF-based camera. I installed the ONVIF palette in node-red.
Using the ONVIF discovery node gives me the xaddress in my debug window: http://192.xxx.xxx.xxx:8899/onvif/device_service. But when I use this address in the ONVIF media node it shows disconnected. I don't understand it. Does it have something to do with permissions? Node-red runs on my Raspberry Pi 3. I open node-red in a browser from a Linux PC.
Thank you for your time reading.
Yvonne

Not sure if it's appropriate to wake up this old thread, but I'm trying to use ffmpeg in an exec node to pipe rtsp stream images to my AI system via MQTT messages.

I'm only getting about 1/4 of the image via MQTT. They should be ~900K jpegs but I'm only getting about 265K via the MQTT message.

Are there any node-red-imposed limits on the size of what is returned via stdout from an exec node, or on how large an MQTT output buffer can be?

The exec node is in "spawn" mode and the command is:
ffmpeg -f rtsp -i "rtsp://IPaddress/cam/realmonitor?channel=4&subtype=0" -f image2pipe pipe:1

This could explain why some earlier examples in this thread only worked with small images.

Hey @wb666greene,
I assume you have tested that the output messages (of the Exec node) also contain incomplete images? I mean, is the buffer length at the output also 1/4 of what you expect? Just to rule out that it is somehow related to MQTT or some other middleware ... And if the output messages are also incomplete, unfortunately I have no idea whether it is an OS issue (like this) or an ffmpeg setting.
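One quick way to check is a Function node on the Exec node's stdout output, before anything MQTT-related, that just reports what arrives (a rough sketch; node.warn output shows up in the debug sidebar):

// Rough check: report the type and size of each message coming out of the Exec node
if (Buffer.isBuffer(msg.payload)) {
    node.warn("buffer of " + msg.payload.length + " bytes");
} else {
    node.warn("payload is " + typeof msg.payload);
}
return msg;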

I'm trying to figure out how to modify the ffmpeg command to produce file01.jpg, file02.jpg ... I've had other homeowner issues interfere.

Here are two images from the same rtsp stream as output by my AI detection, as close in time as I could find. You can see the top 1/4 of the ffmpeg image is fine; the bottom 3/4 is just old data from previous memory usage.

Cam2 is OpenCV reading the rtsp stream from the camera. Cam9 is the node-red ffmpeg exec node pushing the image as a node-red MQTT output buffer.

Seems that somewhere a maximum size is being set that is too small for this application.

Initially I used Walter Krambriw's python sample code to push saved jpeg files as MQTT buffers to the same MQTT broker as a test and the images were fine, so I'm suspecting something in the node-red chain is limiting the output from the exec node or node-red MQTT output.

My plan is to dedicate a box to reading all the rtsp camera streams and pushing them out as MQTT buffers for my AI experiments.

I may have run into a similar problem when streaming data from a software-defined radio application. Have you checked whether the exec node is breaking frames into multiple messages?

I've had a lot of life interruptions these past few days so not much progress, but yes I'm trying to isolate the problem.

But if what you are suggesting is the situation, it would be the node-red exec node that would be setting the size limit, so hopefully a node-red expert could answer how to increase it.

Au contraire - it is your OS just sending buffers as dictated by the application you are running. If it calls flush at any time you will get a chunk, or when some internal buffer limit is hit. You may have some control; for example, if you use Python then the -u option tells the app not to buffer.

So you are saying that it's ffmpeg that is sending the short/truncated images? This is the command I launch in an exec node with spawn mode:

ffmpeg -f rtsp -i "rtsp://user:pw3@IPaddr:554/cam/realmonitor?channel=4&subtype=0" -f image2pipe pipe:1

The command:
ffmpeg -f rtsp -i "rtsp://user:pw3@IPaddr:554/cam/realmonitor?channel=4&subtype=0" -f image2 out_img-%03d.jpeg

produces complete images. I'm no ffmpeg expert.

The attached out_img-039 is the last image produced after I pressed Ctrl-C to stop it. All the previous images are fine (~5/second), although a few have small regions of "mosaic breakup", as some warnings get thrown when the stream opens:
[h264 @ 0xc30e60] concealing 5790 DC, 5790 AC, 5790 MV errors in I frame
Past duration 0.999992 too large
Past duration 0.884987 too large
[NULL @ 0x774d20] RTP: missed 159 packets
Past duration 0.999992 too large
[h264 @ 0xb5a720] error while decoding MB 37 21, bytestream -17
[h264 @ 0xb5a720] concealing 5652 DC, 5652 AC, 5652 MV errors in I frame
Past duration 0.929985 too large
Past duration 0.954994 too largeN/A time=00:00:03.80 bitrate=N/A dup=0 drop=5

I've no idea what to make of these, but I see various warnings when decoding rtsp streams with OpenCV too, and the images passed to the AI are "good enough" and all seems reliable.

Are you sure they are truncated? Is it not that they are split into sections, which if joined back together, as is done if you send them to a file, give the complete image?

Maybe. I'm not quite clear about what the MQTT input is doing with the messages from the exec node via stdout. If they were broken up into multiple pieces, wouldn't I get multiple MQTT messages?

If they are being broken up how would I put them together in node-red?

This is all derived from / inspired by Bart's "Big Buck Bunny" example, which works for small images but seems to break with images larger than 320x240 when I try.

What do you expect to get out of the ffmpeg command?

Hey @wb666greene,
I have found another RTSP test stream:

rtsp://b1.dnsdojo.com:1935/live/sys3.stream

Which has a higher resolution (1280x720) compared to the Big Buck Bunny stream.

It is the beach of Scheveningen in Holland. Not what you would call a Baywatch beach ( :wink: ), but at least we now have a stream that everybody in this discussion can try on their system (so we can compare) ...

When I add an image-output node at the output port of my exec node, then (after a few seconds) images will be previewed:

So no chunks here (in Ubuntu running on top of Windows 10 on my portable), but full images. And I have also added a debug node, which shows me that all buffer sizes are around 14K.


That 14K size is much smaller than a raw 1280x720 frame (921,600 pixels), but from the status of my image-info node you can see that ffmpeg gives me compressed images (jpg).

Perhaps you can do a similar test and compare the output data in your setup.

Thanks for this. This stream plays directly into the image preview, and if I send it as MQTT buffers the MQTT input plays fine as well.

So I now have a 720P image stream that works, but from the "blockyness" it seems to be highly compressed. You wouldn't happen to know the frame rate, would you? When it gets daylight over there I'll feed these images to my AI just for grins :slight_smile:

If I resize my 1080 images to 720 it's closer to working, but I still get a lot of mosaic breakups and the quality (detail in bushes and grass) is not very good.

My streams are 5 fps. The size is not as large as I mentioned initially; the sizes I gave earlier are what OpenCV writes on detections. Obviously I can compress these more; the frames ffmpeg writes to disk are ~150K, still 10X larger than your example stream.

I'm going to set this aside for now, and make a Python/OpenCV program to push the frames as MQTT buffers and see how that goes.

edit: If I add -s 1280x720 in front of the -f image2pipe part of the command, the images that sort of work run ~63K, but I note a few that are 65K and are followed by a smaller fragment of ~11-23K.

Making me think that somewhere coming out of the exec node there is a chunk size limit of 65K (65536 bytes). How do I increase it?

A jpeg image via stdout.

Aha, for that kind of stuff I have developed another node in the past :wink:
If I put a msg-speed node in between, you will see that I have an average of 25 frames (i.e. messages) per second.


Seems rather high, but I think it is correct when I look at the stream statistics in VLC.


I'm also not an ffmpeg expert ... And my time is up for today; tomorrow I 'have' to go count votes for the European elections (read: last time I got 19 euro for 11 hours of counting). So no Node-RED tomorrow for me ...

Do you mean that each time you run the exec node you expect a single image? If so then if you put a debug node on the output of the exec do you see a single message each time you run it?

So I kept digging.. and found this


(but that is trying to use a stdbuf command)
though there does seem to be something that node.js itself is doing ... there is mention of a 200k limit (albeit 4 years ago)...
to be continued


Ah right it's coming back to me now (from previous loop round with Bart.) - so I think we got to the understanding that ffmpeg never actually closes the stream. So if we changed the node to put chunks back together it would never emit anything (as there was no end signal), so it is better to send them as we get given them, and reassemble them in a next step.
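For that reassembly step, one possible approach (a rough, untested sketch, not something from this thread) is a Function node between the Exec node and the MQTT node that buffers the chunks and splits on the JPEG start/end markers. A naive marker scan like this can be fooled by embedded thumbnails, so treat it as a starting point only:

// Sketch: stitch Exec-node stdout chunks back into complete JPEG frames.
// Assumes msg.payload is a Buffer; leftover bytes are kept in flow context.
var buf = flow.get('jpegBuf') || Buffer.alloc(0);
buf = Buffer.concat([buf, msg.payload]);

var frames = [];
while (true) {
    var soi = buf.indexOf(Buffer.from([0xFF, 0xD8]));          // start-of-image marker
    if (soi < 0) { buf = Buffer.alloc(0); break; }
    var eoi = buf.indexOf(Buffer.from([0xFF, 0xD9]), soi + 2); // end-of-image marker
    if (eoi < 0) { buf = buf.slice(soi); break; }              // incomplete frame, wait for more data
    frames.push({ payload: buf.slice(soi, eoi + 2) });
    buf = buf.slice(eoi + 2);
}

flow.set('jpegBuf', buf);
return [frames]; // emits one message per complete frame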

Dave,
If I remember correctly we only had that flushing issue with stdin: we wanted to send an infinite stream of images into ffmpeg, but ffmpeg waited until the stream was closed. Here I have reported a workaround, but I still have a 1 image delay...

But now with stdout, it seems to me (with the Scheveningen Beach stream) that the output arrives without delay. But of course there is so much data involved that I cannot analyse delays ...

In this discussion I see that (for file output) you can specify to flush, and a blocksize... Perhaps ffmpeg has similar parameters for pipes.

@wb666greene it would be nice if you could add a debug node at the output, and share a screenshot of the debug panel. Another question: rtsp can transport all kinds of data formats (jpg, png...). Can you see somehow what format your camera is delivering? Because I thought I read that image2pipe doesn't like all formats, and a lot of users advise specifying explicitly which format you want on the output. When the output is a set of files (like in your good test), ffmpeg can determine the output format from the file extension.
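For example (untested, reusing the URL from earlier in the thread), forcing MJPEG explicitly on the pipe output would look something like:

ffmpeg -rtsp_transport tcp -i "rtsp://user:pw3@IPaddr:554/cam/realmonitor?channel=4&subtype=0" -f image2pipe -vcodec mjpeg -q:v 2 pipe:1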

My time is up. I'm going to count votes for the Belgian elections. Manual counting in 2019, unbelievable ...