How to display CCTV camera in dashboard (RTSP)

The speed looks impressive ...

It's even more so when you see it in person with all the other activity on the system. It's a second-from-the-top-of-the-line i7 computer, purchased when I retired in July 2013 and maxed out with 64GB of RAM, along with the Lorex DVR system.

I naively expected to get "motion images" from the DVR and "AND" them with PIR motion sensor outputs to reduce the false alarm rate. Not even close, so I started looking into OpenCV face detection (Haar cascades, HOG, etc.), all of which failed badly on security camera images and found false faces in the shadows and bushes.

Then I discovered AI. Darknet YOLO was great but took 18 seconds to process a frame on this i7 (I don't have CUDA installed, which it highly recommends). Eventually this led me to the Movidius NCS and MobileNet-SSD AI, which has worked very well.
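
For context, this kind of MobileNet-SSD person detection can be run through OpenCV's DNN module. A minimal sketch (not my production code; the model file names are placeholders, and targeting the NCS needs an OpenCV build with OpenVINO/Inference Engine support):

```python
import cv2

# Placeholder model files for the standard 20-class VOC MobileNet-SSD
net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")
# net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)  # for an NCS stick

frame = cv2.imread("snapshot.jpg")
h, w = frame.shape[:2]
# MobileNet-SSD expects a 300x300 input, hence the resize discussed below
blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                             0.007843, (300, 300), 127.5)
net.setInput(blob)
detections = net.forward()

PERSON_CLASS = 15  # "person" in the VOC class list
for i in range(detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    if detections[0, 0, i, 1] == PERSON_CLASS and confidence > 0.6:
        box = detections[0, 0, i, 3:7] * [w, h, w, h]
        print("person at", box.astype(int), "confidence", confidence)
```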

@wb666greene: I'm very interested to verify whether my above flow (based on FFmpeg) is a decent solution for RTSP streaming or, if not, why. It is not really clear to me why you say it is "a long way from being a general solution". Do you think it cannot handle your 15 1920x1080 rtsp streams at a similar speed (on the same hardware)? Because if OpenCV uses the same FFmpeg library under the hood, I would expect it to run at a similar speed. Or are there other 'specific' reasons why you don't like the above flow?

I'm a bit confused here. If you are talking about the flow in your image I quoted (the one using the Big Buck Bunny public link) with the ffmpeg command in an exec node, it worked fine, but that is a very small image. It seemed to work with my Lorex rtsp stream too, although the HD image choked the Image Output node.

My statement about not being a general solution is based on the errors/warnings thrown by the various rtsp stream reading methods.

I've come to the conclusion that this is an interaction between the stream sources (cameras, DVRs, etc.) and the stream URL reader. It seems that if you can view the stream for a few minutes in some version of VLC, the straight-from-China camera makers deem that "it works".

I will try two of my Onvif rtsp URLs in this flow; I'll put a debug node on the exec stderr, see what happens, and follow up. Both are HD (720p, 1080p) so I know the image output will "choke". I can try duplicating the flow and running both streams at once.

Why do you think I don't like the flow? It's nice to have options. I'm here mainly to get a good display from Node-RED in a web page (dashboard or whatever); learning about other input possibilities is welcome for potential future projects, but the OpenCV Python bindings are pretty compelling for me at the moment.

If Node-RED is running a single-threaded event loop I'd expect problems with multiple streams, but if each exec node spawns a separate thread/process, performance could be similar. Until we find a Node-RED image viewer with performance similar to OpenCV's highgui module it might be impossible to verify.

I didn't know about your Big Buck Bunny public stream; it's great for these discussions/experiments. I had no luck with Google, but do you know of a public URL with higher resolution, at least D1 (704x480)?

Since the AI input is resized to ~300x300 (depending on the model), I've found D1 or 720p to be about the optimum. The 1080p images miss people walking by in the upper third of the frame (not really a problem for my purposes), but 1080p also offers the option to crop the image, which can keep me off ladders adjusting the cameras. (This is why PTZ would rock if those cameras weren't 4X+ more expensive than what I have now; PT is of course acceptable for fixed-lens cameras.)
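
To illustrate the crop option (the coordinates here are hypothetical; the point is only that the crop happens before the 300x300 resize, so it acts like a digital zoom):

```python
import cv2

frame = cv2.imread("1080p_frame.jpg")       # a 1920x1080 frame from the camera
# Hypothetical crop: discard the unreliable upper third, effectively
# "zooming in" on the lower two-thirds without touching the camera.
crop = frame[360:1080, 0:1920]              # 1920x720 region of interest
ssd_input = cv2.resize(crop, (300, 300))    # what the AI actually sees
```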

Seems I didn't interpret your explanation correctly. Don't forget I'm not a native English speaker ...

That would be very nice, to get a better idea whether it is useful in practice!!!

Don't think we will ever be able to do something similar, i.e. create a browser viewer that is as fast as a C++ GUI. But you never know ...

Unfortunately I didn't find anything. On their higher-resolutions page I don't see any rtsp stream ... That would indeed have been nice, to compare different setups (opencv/ffmpeg/motion/...). But my Raspberry Pi Model 3 would certainly start melting with your experiments :wink:


I'm now playing a bit with your ffmpeg exec flow with multiple streams. The main problem is that my Lorex DVR sub-streams are CIF (352x240), only a bit bigger than your Big Buck Bunny public rtsp stream. The HD streams "choke" the Image Output node, resulting in mostly garbage being displayed.

I took a cell phone video of my i7 Desktop running 9 HD rtsp streams into my Python AI and my node-red "controller" dashboard that displays a single selected channel for testing and/or camera positioning.

Quality is not great, but watching the time-stamped file names update in the Node-RED UI, and the pool cleaner moving in the third window from the top (adjacent to the browser window), gives a good idea of how it runs. The AI was processing an aggregate of ~28.3 fps using 1 Movidius NCS thread + 1 CPU AI thread, plus 9 threads to process the rtsp streams (one each) and the main thread to draw the displays and send the MQTT messages to Node-RED.

Python AI and Node-red Dashboard Controller in action.
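
A rough sketch of that thread layout (hypothetical names and URLs, not my actual code): one reader thread per rtsp stream feeding a shared queue, two AI worker threads draining it, and the main thread doing the display and MQTT sends.

```python
import queue
import threading
import cv2

rtsp_urls = ["rtsp://admin:admin@192.168.1.10%d:554" % n for n in range(9)]
frame_q = queue.Queue(maxsize=10)     # reader threads -> AI worker threads
result_q = queue.Queue(maxsize=10)    # AI worker threads -> main thread

def rtsp_reader(url):
    cap = cv2.VideoCapture(url)
    while True:
        ok, frame = cap.read()
        if ok and not frame_q.full():
            frame_q.put((url, frame))

def ai_worker(device):
    while True:
        url, frame = frame_q.get()
        detections = []               # MobileNet-SSD inference on NCS or CPU
        result_q.put((url, frame, detections))

for url in rtsp_urls:                 # 9 camera reader threads
    threading.Thread(target=rtsp_reader, args=(url,), daemon=True).start()
for device in ("NCS", "CPU"):         # 1 NCS + 1 CPU AI thread
    threading.Thread(target=ai_worker, args=(device,), daemon=True).start()

while True:                           # main thread: draw and send MQTT
    url, frame, detections = result_q.get()
    cv2.imshow(url, frame)
    cv2.waitKey(1)
```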

I need to get one of those "tripod adapters" for my phone; the results, while quick and dirty, are useful but would be a lot better without the camera movement and the resulting focus jitter.

I opened a "ticket" with Lorex tech support asking why the D1 sub-stream settings revert to CIF shortly after applying. They've never bothered to reply to any of my questions before, so I don't expect any solution.

D1 video is perfectly fine for the AI; actually I expect CIF would work too, since the AI input is 300x300 pixels. But a CIF image is inadequate for viewing in an Email to make a decision about the AI notification (UPS guy or criminal at my door?); that's where 720p and 1080p really have the advantage.

I'll try to set up 8 of my CIF streams and your BBB stream and post another short video clip.

This morning I set up a fan-less, 12V-powered, dual-core i7 "NUC-like" computer; the AI code on it with 9 HD rtsp threads and the same 1 NCS + 1 CPU thread is giving ~18.4 fps. I've also got an i5 version of it I plan to try soon. The i3 version gave ~12.6 fps with 6 HD rtsp streams. My feeling is fps/Ncameras ~ 2 is about ideal, give or take a few cameras :slight_smile:

Next priority is moving the code to OpenVINO so I can compare NCS and NCS2 sticks.


Hi @nygma2004, thanks for linking me to your video. It was really good. I tried to run the above in the exec node, and I could see the image served up by Node-RED. However, unlike yours, my exec node didn't finish running. Also, when I trigger it again, it fails. When I delete the file manually, it re-creates it with a new image. Any ideas?

This reminded me that I never did post a short video clip of the 8 Lorex DVR CIF rtsp streams playing along with the "public" Big Buck Bunny rtsp stream.
9 node-red rtsp streams

Sorry it's night, but you can see from the passing headlights that it plays well. I think my CIF rtsp streams from the Lorex DVR are 5 fps.

Morning @wb666greene,
thanks for the nice example!

I was still thinking you had choking images in the output node, which seemed natural to me since all the data has to travel through that single poor websocket channel ...
But in your YouTube video it all 'seems' to be running smoothly. Did you do anything special to solve it?

And a second question: can I conclude from your test that rtsp via ffmpeg in Node-RED is 'usable'?

Very nice & smooth looking!
Just a thought: wouldn't it be great to have a dashboard node that could do the same as the "Image Output" node does, but as a dashboard component? That's the only thing missing to make it really simple to create a nice dashboard with multiple video streams.

Hey Walter (@krambriw), I'm developing something at the moment, but I have some layout issues that I need to figure out...

No, I just cloned your BigBuckBunny flow and changed the URL to my Lorex DVR sub-streams. The main "trick" is to have enough CPU, as each exec/spawn creates a new process (aka heavyweight thread). What "fixes" the Image Output node is most likely that the sub-streams are CIF (352x240) in size, barely larger than the BigBuckBunny stream.
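
For reference, this is roughly what that exec-node flow does, expressed as a Python sketch (the URL is a placeholder, and the JPEG splitting is deliberately naive): ffmpeg pulls the rtsp stream and writes one JPEG per frame to stdout.

```python
import subprocess

# ffmpeg decodes the rtsp stream and emits a JPEG per frame on stdout
cmd = ["ffmpeg", "-rtsp_transport", "tcp", "-i", "rtsp://dvr/substream",
       "-f", "image2pipe", "-vcodec", "mjpeg", "-q:v", "5", "-"]
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                        stderr=subprocess.DEVNULL)

buf = b""
while True:
    chunk = proc.stdout.read(4096)
    if not chunk:
        break                                   # ffmpeg exited
    buf += chunk
    start = buf.find(b"\xff\xd8")               # JPEG start-of-image marker
    if start < 0:
        continue
    end = buf.find(b"\xff\xd9", start + 2)      # JPEG end-of-image marker
    if end < 0:
        continue
    jpg, buf = buf[start:end + 2], buf[end + 2:]
    print("got a %d byte JPEG frame" % len(jpg))  # feed to Image Output, etc.
```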

I certainly agree with Walter (@krambriw) that having something like ImageOutput displaying on the dashboard would be useful.

I'm using this (derived from other samples/examples on this forum):

[{"id":"d5d6c456.ea1f48","type":"template","z":"17a1aaab.ebd6c5","name":"","field":"payload","fieldType":"msg","format":"handlebars","syntax":"mustache","template":"<img width=\"640px\" height=\"360px\" src=\"data:image/jpg;base64,{{{payload}}}\">","output":"str","x":865,"y":760,"wires":[["dcaef4d7.2e7b98"]]},{"id":"dcaef4d7.2e7b98","type":"ui_template","z":"17a1aaab.ebd6c5","group":"655d6147.d4f78","name":"Viewer","order":2,"width":"13","height":"7","format":"<div ng-bind-html=\"msg.payload\"></div>","storeOutMessages":false,"fwdInMessages":true,"templateScope":"local","x":1005,"y":760,"wires":[["329b55df.ab1b4a"]]},{"id":"d1fd275f.885ab8","type":"base64","z":"17a1aaab.ebd6c5","name":"","action":"str","property":"payload","x":725,"y":760,"wires":[["d5d6c456.ea1f48"]]},{"id":"52ff5600.979d7c","type":"change","z":"17a1aaab.ebd6c5","name":"","rules":[{"t":"move","p":"payload","pt":"msg","to":"msg.filename","tot":"msg"}],"action":"","property":"","from":"","to":"","reg":false,"x":425,"y":760,"wires":[["2bee7152.ac964e","38973a95.ddd966"]]},{"id":"2bee7152.ac964e","type":"file in","z":"17a1aaab.ebd6c5","name":"","filename":"","format":"","chunk":false,"sendError":false,"x":595,"y":760,"wires":[["d1fd275f.885ab8"]]},{"id":"f61b911e.1ef6d","type":"mqtt in","z":"17a1aaab.ebd6c5","name":"Idle Image","topic":"IdleImage","qos":"0","broker":"70fa8cd9.0a1224","x":100,"y":760,"wires":[["89cc3f0c.e7e75","52ff5600.979d7c"]]},{"id":"329b55df.ab1b4a","type":"file","z":"17a1aaab.ebd6c5","name":"","filename":"","appendNewline":true,"createDir":false,"overwriteFile":"delete","x":1135,"y":760,"wires":[[]]},{"id":"89cc3f0c.e7e75","type":"ui_button","z":"17a1aaab.ebd6c5","name":"filename","group":"655d6147.d4f78","order":1,"width":"12","height":"1","passthru":true,"label":"{{msg.payload}}","tooltip":"","color":"#101010","bgcolor":"#83ed7b","icon":"","payload":"","payloadType":"str","topic":"","x":260,"y":815,"wires":[[]]},{"id":"38973a95.ddd966","type":"debug","z":"17a1aaab.ebd6c5","name":"","active":false,"tosidebar":true,"console":false,"tostatus":false,"complete":"false","x":605,"y":700,"wires":[]},{"id":"a55ff06e.293d","type":"ui_dropdown","z":"17a1aaab.ebd6c5","name":"Set UI View","label":"","tooltip":"Enable/Disable Camera Live View","place":"Camera Viewing","group":"b63aa4a5.08af18","order":1,"width":0,"height":0,"passthru":true,"options":[{"label":"Enable","value":"1","type":"str"},{"label":"Disable ","value":"0","type":"str"}],"payload":"","topic":"Alarm/UImode","x":110,"y":300,"wires":[["52fbe3fe.50182c"]]},{"id":"52fbe3fe.50182c","type":"mqtt out","z":"17a1aaab.ebd6c5","name":"Set UI Mode","topic":"Alarm/UImode","qos":"2","retain":"true","broker":"70fa8cd9.0a1224","x":290,"y":300,"wires":[]},{"id":"416ad6ce.a927f8","type":"ui_dropdown","z":"17a1aaab.ebd6c5","name":"Select Camera","label":"","tooltip":"Enable/Disable Camera Live View","place":"Select Camera","group":"b63aa4a5.08af18","order":1,"width":0,"height":0,"passthru":true,"options":[{"label":"Camera 0","value":"0","type":"str"},{"label":"Camera 1","value":"1","type":"str"},{"label":"Camera 2","value":"2","type":"str"},{"label":"Camera 3","value":"3","type":"str"},{"label":"Camera 4","value":"4","type":"str"},{"label":"Camera 5","value":"5","type":"str"},{"label":"Camera 6","value":"6","type":"str"},{"label":"Camera7","value":"7","type":"str"}],"payload":"","topic":"ViewCamera","x":120,"y":375,"wires":[["9f8dc948.7ab3f8"]]},{"id":"9f8dc948.7ab3f8","type":"mqtt out","z":"17a1aaab.ebd6c5","name":"Set 
Camera","topic":"Alarm/ViewCamera","qos":"2","retain":"true","broker":"70fa8cd9.0a1224","x":290,"y":375,"wires":[]},{"id":"655d6147.d4f78","type":"ui_group","z":"","name":"Camera Viewer","tab":"6e5fd6b6.518e98","order":2,"disp":true,"width":"13","collapse":false},{"id":"70fa8cd9.0a1224","type":"mqtt-broker","z":null,"name":"localhost:1883","broker":"localhost","port":"1883","clientid":"","usetls":false,"compatmode":true,"keepalive":"60","cleansession":true,"birthTopic":"","birthQos":"0","birthRetain":"false","birthPayload":"","closeTopic":"","closePayload":"","willTopic":"","willQos":"0","willRetain":"false","willPayload":""},{"id":"b63aa4a5.08af18","type":"ui_group","z":"","name":" AI Mode","tab":"6e5fd6b6.518e98","order":1,"disp":true,"width":"4","collapse":false},{"id":"6e5fd6b6.518e98","type":"ui_tab","z":"","name":"AI Controller","icon":"dashboard","disabled":false,"hidden":false}]

to select one of 8 cameras to be displayed on the dashboard. It works well for limited usage, but if left displaying long enough the connection drops or the entire webpage becomes very sluggish. Initially I made a version that shows all 8 cameras (4x2), but this chokes very quickly, so I made it display only one selected camera.

I'm using files instead of buffers because my initial design runs Node-RED on the AI host with a localhost MQTT broker, using MQTT from another broker to set the Idle/audio/notify modes of operation.

Once again Walter (@krambriw) has given me the key bit of information, so I can pass buffers via MQTT from my Python AI instead of just filenames, allowing a more "distributed" system. I'm currently working on this.
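
The buffer-over-MQTT part is straightforward on the Python side. A minimal sketch with paho-mqtt (broker and topic names match my flow above; the rest is placeholder):

```python
import cv2
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("localhost", 1883, 60)

def publish_frame(frame):
    # compress the numpy image to JPEG and send the raw bytes as the payload
    ok, jpg = cv2.imencode(".jpg", frame)
    if ok:
        client.publish("IdleImage", jpg.tobytes(), qos=0)
```

On the Node-RED side the flow should stay much the same, except the file-in node drops out and the binary payload goes straight into the base64 node.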

The following links have a lot of information about different video-streaming topics, including WebRTC:

https://www.linux-projects.org/demos/
https://www.linux-projects.org/uv4l/tutorials/

Maybe useful to someone.

I’m certainly not averse to the idea of a core dashboard node to support this. Ideally it could be a “media” widget and support both stills and video.


Absolutely. The main potential issue I see is that generally the AI (or other downstream processing) can't keep up with the full rtsp frame rate unless only a single stream is being processed or it's a low-rate stream, so many frames need to be "dropped". OpenCV seems to do this internally if the read() method is not called often enough. My reading threads drop the frame if the queue that feeds the AI threads is full, then, after a short delay (time.sleep(0.001)) to force a context switch, resume reading.
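
In code, that drop-on-full policy is just a few lines (a sketch; the URL and queue size are placeholders):

```python
import queue
import time
import cv2

frame_q = queue.Queue(maxsize=10)               # feeds the AI threads
cap = cv2.VideoCapture("rtsp://camera/stream")  # placeholder URL

while True:
    ok, frame = cap.read()        # keep reading at the full stream rate
    if not ok:
        continue
    try:
        frame_q.put_nowait(frame)
    except queue.Full:
        time.sleep(0.001)         # drop this frame and force a context switch
```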

You can't easily use rbe on binary buffers or you quickly run out of memory; that's the issue that got me on this forum initially and that only came to light recently (Memory leak, what am I doing wrong?).

OpenCV also has a setting to adjust the size of its internal queue, but whether it is honored seems to be version-dependent.
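
For what it's worth, the setting is CAP_PROP_BUFFERSIZE; whether your backend/version honors it is exactly the open question:

```python
import cv2

cap = cv2.VideoCapture("rtsp://camera/stream")  # placeholder URL
# Ask OpenCV to keep at most one frame in its internal queue; some
# backends/versions silently ignore this property.
cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)
```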

Maybe I missed something when I initially looked at uv4l and WebRTC, but they seemed to be for streaming video, i.e. sourcing a stream of compressed frames, whereas my issue is reading streams to get uncompressed frames to feed other processing or display. I didn't dive into it very deeply because of this.

I do recall using what appears to be a very early version of, or predecessor to, uv4l to implement a "loopback" v4l2 device to display images from the Motion software for camera and motion-detection-region setup.

If I'm incorrect and it offers a way to pull frames from rtsp and/or other streams, it'd definitely be worth another look.

This could be a help for displaying images in the dashboard, but my understanding is that images meant for display on the dashboard are generally not streams but individual images in sequence, like those produced by the ffmpeg command to "decode" an rtsp stream.

Maybe the links below would be helpful:


@wb666greene: Yes, that is indeed what happens when the websocket is loaded too heavily; the dashboard won't be responsive anymore. That is why I wanted (in my node-red-contrib-ui-camera node) to have two modes:

  • Push the messages to the dashboard, since that is fairly easy for users (just send a message with an image to the node's input). But I will warn them that this is not advised for lots of data.
  • Pull the messages from the Node-RED flow (by using a 'src' attribute). A bit more work to set up, but much better performance; see the sketch just below this list.
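
To make the pull mode concrete, here is a minimal sketch of the idea (plain Python http.server; the port is hypothetical and this is not the ui-camera node's actual API): the dashboard's <img> tag polls an HTTP endpoint that always returns the latest JPEG, so nothing large ever travels over the websocket.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

latest_jpeg = b""   # updated elsewhere, e.g. by an ffmpeg/MQTT reader

class FrameHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # always answer with the most recent frame
        self.send_response(200)
        self.send_header("Content-Type", "image/jpeg")
        self.send_header("Content-Length", str(len(latest_jpeg)))
        self.end_headers()
        self.wfile.write(latest_jpeg)

HTTPServer(("0.0.0.0", 8081), FrameHandler).serve_forever()
```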

@happytm: WebRTC is indeed also an option. But perhaps somebody can make a dedicated UI node for WebRTC later on. And as @wb666greene responded to your post, I also 'think' we cannot use it here for decoding a stream (like rtsp) into individual images.

@dceejay: in that case, are you aiming at node-red-contrib-ui-media? In the first version of my node-red-contrib-ui-camera node I also provided support for showing images and video files, but I removed it, since camera content has no finite length (unlike an image or video file). And I want to have PTZ controls, zones (with polygons), ... I'm not sure it is wise to put them all together in a single node just because they both show an image (sequence). To me camera images belong in a dedicated node...

The word "absolutely" sounds like music to my ears. This means that all the experiments above have at least resulted in something usable. :tada::balloon:

But indeed, like you say: by having solved the rtsp stream decoding, we will end up with a massive number of images in Node-RED. New developments will be required to handle all this data. But one step at a time ... We have already done a lot of experiments with OpenCV matrices in Node-RED, but we had lots of issues when messages (containing references to OpenCV matrices) were cloned by Node-RED, and when NodeJS started garbage-collecting those messages. But I have no time at the moment to dig into that again ...

Something else for the mix https://www.npmjs.com/package/beamcoder

There's a lot here, but as the original issue was viewing RTSP in node-red-dashboard, have you considered just restreaming the RTSP stream to something you can decode natively in the browser, like HLS? Then you can just put a video player in a dashboard template widget that loads the HLS stream directly. It also has the added benefit of being castable to Chromecast in this format if you want to view your cam stream on a TV (which is why I chose HLS).

I just run a docker image running nginx-rtmp. The setup and config is a little involved, but what isn't when it comes to video streaming? If you need RTSP specifically, I don't remember for sure whether nginx-rtmp will decode it directly, but I'm pretty sure you can pass it through ffmpeg and feed that into the module to produce the HLS stream, as I've seen a few projects doing that.
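
That pass-through might look something like this (a sketch; all URLs are placeholders for your own camera and nginx-rtmp ingest): ffmpeg remuxes the RTSP stream without re-encoding and pushes it to nginx-rtmp, which serves it back out as HLS.

```python
import subprocess

subprocess.run([
    "ffmpeg", "-rtsp_transport", "tcp",
    "-i", "rtsp://camera/stream",          # the camera's RTSP URL
    "-c", "copy",                          # remux only, no re-encode
    "-f", "flv", "rtmp://nas/live/cam1",   # nginx-rtmp ingest point
])
```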

I just use HLS.js in a template widget to display my camera feeds after that, and I never really need to touch the restream. It's running on a NAS unit right now and doesn't seem to use a lot of CPU/memory at all to do the restream. The only downside is it loses the stream and the docker image needs to be restarted once in a while... but I just have a little monitoring script that tries to access the stream once a minute and restarts it if needed. That has made it rock solid for more than a year now.
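
The watchdog can be as simple as this sketch (the playlist URL and container name are placeholders): poll the HLS playlist once a minute and bounce the container if it stops answering.

```python
import subprocess
import time
import urllib.request

while True:
    try:   # if the HLS playlist stops answering, restart the container
        urllib.request.urlopen("http://nas:8080/hls/cam1.m3u8", timeout=10)
    except Exception:
        subprocess.run(["docker", "restart", "nginx-rtmp"])
    time.sleep(60)
```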