How to display CCTV camera in dashboard (RTSP)

So far I've been able to use VLC on my Windows laptop to decode and stream an RTSP stream directly from my HikVision cameras. The command to run VLC to do this is

vlc.exe -R rtsp://admin:password@10.4.5.24:554/Streaming/Channels/101/ --sout "#transcode{vcodec=mjpg,vb=2500,scale=1.0,fps=10,acodec=none}:standard{access=http{mime=multipart/x-mixed-replace; boundary=7b3cc56e5f51db803f790dad720ed50a},mux=mpjpeg,dst=:8888/videostream.cgi}"

This generates an MJPG which can be displayed with a simple Dashboard template node:

<img src="http://10.4.5.184:8888/videostream.cgi" />
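For anyone wondering what that `mime=multipart/x-mixed-replace; boundary=...` part of the VLC command actually produces: an MJPEG-over-HTTP stream is just a sequence of JPEG images, each wrapped with a boundary line and a small header block, and the browser replaces the `<img>` contents each time a new part arrives. A minimal illustrative sketch in Python (the boundary string is the one from the VLC command above; `wrap_frame` is a made-up helper, not part of any library):

```python
# Sketch of how one frame of a multipart/x-mixed-replace (MJPEG) body is framed.
BOUNDARY = "7b3cc56e5f51db803f790dad720ed50a"  # same boundary as in the VLC command

def wrap_frame(jpeg_bytes: bytes) -> bytes:
    """Wrap one JPEG image as a single multipart part."""
    header = (
        "--%s\r\n"
        "Content-Type: image/jpeg\r\n"
        "Content-Length: %d\r\n\r\n" % (BOUNDARY, len(jpeg_bytes))
    ).encode("ascii")
    return header + jpeg_bytes + b"\r\n"

# A fake two-byte-marker "JPEG" just to show the framing
part = wrap_frame(b"\xff\xd8...fake jpeg data...\xff\xd9")
print(part[:40])
```

Whatever tool does the restreaming (VLC, ffmpeg, Motion), this is the wire format the dashboard's `<img>` tag ends up consuming.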

I'd like to remove the dependency on my laptop - or any external software - and have Node-RED do the conversion between RTSP and MJPG. Basically I'd like Node-RED to display the RTSP directly.

I thought this might be a common request, but apparently not :confused:

I've searched high and low, and my Google searches keep bringing up node-red-contrib-multipart-stream-decoder. I've read that page quite a few times but it doesn't seem to do what I need.

Can this be done only in Node-RED, or will I need some other solution to do this conversion?

If there is nothing suitable in the Node-RED library (which I believe is the case), then a possible path is to find a solution based on ffmpeg. If you manage to make your solution work using ffmpeg (instead of VLC), then you can test one of the existing ffmpeg wrappers for Node.js.


I'm still going up the learning curve here, but VLC seems to be the only RTSP client on Linux that the security camera/system vendors support.

Some versions of openCV can read rtsp streams:

For example, on my Ubuntu 16.04 with openCV 3.3.0 and python3.5

This simple code will grab frames from two channels of my Lorex security DVR and display them:

```python
import cv2

# Open two RTSP channels on the Lorex DVR (credentials and IP redacted)
cap = cv2.VideoCapture("rtsp://admin:passwd@192.168.2.xxx:554/cam/realmonitor?channel=10&subtype=0")
cap1 = cv2.VideoCapture("rtsp://admin:passwd@192.168.2.xxx:554/cam/realmonitor?channel=9&subtype=0")

while True:
    ret, frame = cap.read()
    ret1, frame1 = cap1.read()
    # Only display a frame if the read succeeded, otherwise resize() would crash on None
    if ret:
        cv2.imshow('RTSP Video', cv2.resize(frame, (480, 270)))
    if ret1:
        cv2.imshow('RTSP 1', cv2.resize(frame1, (480, 270)))
    cv2.waitKey(1)
```

My cams are 1920x1080 so I resized them to something more suitable for a cellphone display.

I doubt Node.js can do it, since the Chrome browser requires a "VLC" plugin to display rtsp streams, as far as I can tell.

If I'm wrong here, I'm looking for solutions too.

I can only share my experience on this: I have used one of the latest versions of Motion (which, like VLC and many others, also uses ffmpeg). This software has what you need (and much more) and will be able to take the RTSP stream from your ip-cameras and provide you with an HTTP stream that can easily be viewed in the dashboard. I have done this myself.

The question is how much work are you willing to put into your project? Using Motion as the video engine is by far the simplest I think.

For sure you will still be dependent on external software - but what software is not? - node-red depends on node.js and node.js depends on...etc etc

Check out Motion here:
https://motion-project.github.io/index.html
https://motion-project.github.io/motion_build.html

This example is my dashboard, currently showing our cameras around the house, we just got a white christmas it seems


Since browsers do not currently support rtsp directly, and the plugins seem unreliable or unsupported, I don't think you have much choice other than to use something like VLC, ffmpeg or Motion to restream it in a format that browsers can show. This does not need to be on your laptop of course; it could be any other machine on your network.

Thanks for that. I'm happy to put the work in myself! The reason I want to reduce dependencies is that yes, NR depends on other software, but I need to future proof things as much as possible, and make it potentially rebuildable by others.

Can you provide more detail about exactly what you did to get that working?

Thanks Colin - sometimes it's just useful for someone to confirm things like that! After all, it saves me further hours of head-scratching! I'll probably opt for running VLC as a scheduled task on my Windows virtual machine. It's naff, but if it does the job...

If you decide to use Motion as the video engine, I can guide you further if you get stuck, but in that case you'll have to ask more detailed questions about the problems you encounter. I think the links I gave you should be enough as a starter to install & configure Motion properly. From there the next step would be to get the streams into the dashboard; that is the easy part.

If you decide on anything else, I might not be able to guide you, but other users might jump in.

Furthermore, more advanced discussion around the topic (including adding AI) is covered in this really interesting thread:

I'm interested in some of the details of how you are doing this. I have a commercial (Lorex) 16 channel "security DVR" and its documentation is terrible. After a lot of Googling and trial and error, I recently discovered I can get rtsp streams (they look good in VLC) or jpeg snapshots from "magic" URLs. Their tech support never bothered to answer my inquiries.

I'm currently feeding my AI system with "snapshot" jpegs from the system via its "ftp motion detection" feature. The main problem is the latency: it can take as long as 4-6 seconds for a snapshot to come in after a "motion trigger" event. I don't need a high frame rate, but using the snapshot URLs it looks like snapshots are only updated about once every two seconds, so while I can get a snapshot quickly, it could be two seconds old.

Now I'm playing around with using the rtsp URLs and the openCV code I'd posted as a starting point. I've used motion/motioneye in the past. My concern is that 16 rtsp streams might be a bit heavyweight for grabbing one frame about every 0.7 seconds from each stream. Obviously I'm interested in whether motion could be a nice time-saving shortcut for me.

I configured an ip-camera with rtsp in Motion, worked fine.

netcam_url = rtsp://192.168.10.108
netcam_highres = (not defined)
netcam_userpass = admin:1234
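For completeness, the relevant fragment of motion.conf might look roughly like this. This is a sketch only: the `stream_*` option names are my assumption based on Motion 4.x (older versions used `webcam_port` instead of `stream_port`), so verify them against the documentation of your installed version:

```
# Hypothetical motion.conf fragment - verify option names for your Motion version
netcam_url rtsp://192.168.10.108
netcam_userpass admin:1234

# Expose the decoded stream as MJPEG over HTTP for the dashboard
stream_port 8081
# Allow hosts other than localhost (e.g. the Node-RED dashboard) to connect
stream_localhost off
```

The dashboard side is then the same `<img>` trick as earlier in the thread, pointing at `http://<motion-host>:8081/`.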

Regarding latency, I realized that the camera configuration had a huge impact. At first, I thought I would want a massive number of frames, just to get early warning of an event. The effect was the opposite: the higher the frame rate you configure in the camera, the more the latency grows (since all the frames are buffered, it takes a while until you see the actual event). The same applies if you define a high resolution; this also increases the latency -> more data to push through the network before you see the interesting parts.

Hi Mat,

Indeed, as you have seen, my multipart decoder is designed to convert an MJPEG stream to separate JPEG images. However that has nothing to do with rtsp streams, since that is a completely different protocol.

Like Walter proposes, using external software (e.g. Motion) will be an easy and powerful solution. However I really like to have my stuff a bit more integrated inside Node-RED, but that is just my personal opinion...

The Node.js RTSP (JavaScript) libraries are rather limited, so FFmpeg is required at a minimum to get the job done.
I did a quick test this evening with a custom RTSP decoding node, based on FFMPEG:

(screenshot of the RTSP test flow)

Seems to be working fine. I think such a node might be useful for other users, so I will try to find some spare time to make it more user-friendly ... As soon as I have something ready, I will discuss it on this forum (before publishing it on NPM).

Bart


Hi Bart

That would be superb. I'm going to be really strict with myself about trying not to make external customisations to my automation setup, because the more of this you do, the less permanent it becomes.

So thank you - if there's anything you can put together that I could take a look at, just to get up and running, I'd be extremely grateful! Seems to me a really major "gap in the market" in Node-RED!

Hello Mat,

Instead of writing yet-another-ffmpeg-node, I have compared the existing Node-RED contributions:

  1. node-red-ffmpeg has some disadvantages:

    • It keeps track of only 1 (spawned) process. So if you start a second one in parallel, you cannot close the first one decently.
    • The readme page shows that the functionality is limited.
    • The ffmpeg_process.stdout.on handler appends data (to the previous data) until the response is complete. However I 'think' this might be a problem when you have an infinite RTSP stream (where we want an output message for each image).
  2. node-red-contrib-dynamorse-ffmpeg contains rather specific nodes (e.g. AACencoder …), which I 'think' are a bit too specific for our purpose.

  3. node-red-contrib-princip-ffmpeg is experimental.

  4. node-red-contrib-media-utils contains, among others, ffmpeg nodes. The disadvantages:

    • The nodes have rather specific functionality (e.g. only for specific audio formats)
    • I 'think' these nodes create an input file --> let ffmpeg process that input file --> read the output file that ffmpeg has created. If I'm right, a lot of disk I/O is involved. That is not what we want, since we only want to pass data via memory...
  5. node-red-contrib-viseo-ffmpeg seems to be the right choice for our purpose, since it has a lot of advantages:

    • It keeps track of all (spawned) processes, so they can all be handled correctly (e.g. close all the streams automatically when the flow is redeployed).
    • As soon as data has arrived, it will send an output message (which is exactly what we need for every image in an infinite RTSP stream).
    • All data between ffmpeg and Node-RED is passed via memory (so no disk I/O). Indeed the ffmpeg process passes data to Node-RED via the stdout stream/pipe, and errors via the stderr stream/pipe.

    But this node also has a disadvantage: two other nodes (that this node depends on) have to be installed manually. I have logged an issue about this, and hopefully we get a solution. In the issue you can find a temporary workaround...

Another disadvantage (in all 5 cases) is that ffmpeg needs to be installed manually, which is a rather slow process. I found that ffmpeg-binaries could be used to install pre-built binaries of ffmpeg on a series of known platforms. However it failed on my Raspberry Pi 3, which I 'think' might be related to this issue. Summarized: currently you will have to install ffmpeg manually...

Here is my test flow:
(screenshot of the test flow)

[{"id":"86ab3e95.61eb5","type":"ffmpeg-command","z":"18e47039.88483","name":"RTSP via FFMPEG","output":"payload","outputType":"msg","cmd":"-i \"rtsp://184.72.239.149/vod/mp4:BigBuckBunny_175k.mov\" -f image2pipe -hls_time 3 -hls_wrap 10 pipe:1","cmdType":"str","spawn":"true","x":1390,"y":200,"wires":[["377fe881.b68d18"]]},{"id":"f89dff57.9f288","type":"inject","z":"18e47039.88483","name":"","topic":"","payload":"","payloadType":"date","repeat":"","crontab":"","once":false,"onceDelay":0.1,"x":1200,"y":200,"wires":[["86ab3e95.61eb5"]]},{"id":"377fe881.b68d18","type":"function","z":"18e47039.88483","name":"Move output","func":"if (msg.payload.stdout) {\n    msg.payload = msg.payload.stdout;\n    return msg;\n}\n","outputs":1,"noerr":0,"x":1590,"y":200,"wires":[["5186e7cf.8be3e8"]]},{"id":"5186e7cf.8be3e8","type":"image","z":"18e47039.88483","name":"","width":200,"x":1771,"y":200,"wires":[]}]

The core of the solution is the list of arguments that I pass to the ffmpeg executable:

-i "rtsp://184.72.239.149/vod/mp4:BigBuckBunny_175k.mov" -f image2pipe -hls_time 3 -hls_wrap 10 pipe:1

Some explanation of that argument list:

  • The input '-i' will be the rtsp link
  • Since we have no output file, ffmpeg cannot 'guess' the output format from a file extension (e.g. jpeg). Therefore we have to specify that the format '-f' is an image to a pipe (image2pipe).
  • Then there are some HLS-related parameters (hls_time, hls_wrap).
  • At the end we have to specify where the output (i.e. the images) needs to go. In this case the output goes to the first pipe (pipe:1), which means stdout.

The argument list is very complex to create, but it is very powerful. We could create a dedicated node-red-contrib-rtsp node that hides this complex command line from the user. However there is an enormous number of different argument combinations possible (to do all kinds of great audio/video conversions), so - in my humble opinion - it becomes unmaintainable to create a dedicated node for every purpose. Therefore I think such a generic reusable ffmpeg node is a better solution.
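To make the pipe-based approach concrete, here is a rough Python sketch of what the node does conceptually: spawn ffmpeg with an argument list like the one above, read its stdout, and split the image2pipe byte stream on the JPEG start/end markers (0xFFD8 / 0xFFD9) so that each complete image becomes one output message. The splitting logic is shown as a standalone function (a naive marker search, which works for typical MJPEG output but is not a full JPEG parser); the ffmpeg invocation itself is left commented out since it needs ffmpeg installed and a reachable RTSP URL:

```python
import subprocess  # only needed for the commented-out spawn example below

JPEG_SOI = b"\xff\xd8"  # start-of-image marker
JPEG_EOI = b"\xff\xd9"  # end-of-image marker

def split_jpegs(buffer: bytes):
    """Extract complete JPEG images from a byte buffer.

    Returns (list_of_complete_jpegs, remaining_incomplete_bytes)."""
    frames = []
    while True:
        start = buffer.find(JPEG_SOI)
        if start < 0:
            return frames, b""
        end = buffer.find(JPEG_EOI, start + 2)
        if end < 0:
            # Incomplete frame: keep the tail for the next chunk
            return frames, buffer[start:]
        frames.append(buffer[start:end + 2])
        buffer = buffer[end + 2:]

# Spawning ffmpeg itself (requires ffmpeg on the PATH and a live RTSP URL):
# proc = subprocess.Popen(
#     ["ffmpeg", "-i", "rtsp://user:pass@camera/stream",
#      "-f", "image2pipe", "pipe:1"],
#     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# leftover = b""
# while True:
#     chunk = proc.stdout.read(4096)
#     if not chunk:
#         break
#     frames, leftover = split_jpegs(leftover + chunk)
#     for jpeg in frames:
#         pass  # send one output message per image
```

Note that everything stays in memory (stdout pipe plus a small carry-over buffer), which is exactly the "no disk I/O" property described above.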

It would be nice if we had some kind of central place where people could share their argument lists for the ffmpeg node. Don't know if this forum is the best place??

Have fun with it!!
Bart


Hey Mat,

In case of infinite streams from cameras, it might be useful to have the following RTSP functionality available in Node-RED:

  • pause: temporarily halt one or all media streams
  • teardown: stop one or all media streams
  • play: resume one or all paused media streams

I will need to create a pull request to have this implemented, however I need to figure out some things before I can do that:

  • Haven't found any functionality in ffmpeg to accomplish this. The only thing I found is this Stack Overflow discussion. So I think (???) it needs to be implemented like this:
    • Pausing (suspending) a stream:
      process.kill(process.pid, 'SIGSTOP');
      
    • Play (resuming) a paused stream:
      process.kill(process.pid, 'SIGCONT');
      
    • Teardown (stopping) a stream:
      process.kill(process.pid, 'SIGKILL');
      
    Not really at RTSP-protocol level, but I don't see another way ...
  • Not sure how our input message needs to look, to give the ffmpeg node those instructions. We need to specify somewhere in the input message both a command (pause/resume/stop) and a process id.
  • Perhaps we can ask to resume/pause/teardown all streams when no process id is specified in the input message.
  • The ffmpeg node should send the process id somewhere in the output messages. That way we know which PID we need to specify in our input messages, to trigger commands...
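The signal-based approach above can be sketched as follows (in Python terms, though the Node-RED node would do the same via Node's child_process; a `sleep` process stands in for ffmpeg here, and SIGSTOP/SIGCONT are POSIX-only, so this won't work on Windows):

```python
import os
import signal
import subprocess

# Spawn a long-running child process standing in for the ffmpeg decoder
proc = subprocess.Popen(["sleep", "30"])

# "pause": suspend the process (it stops consuming CPU; this is process-level
# suspension, not an RTSP PAUSE at protocol level)
os.kill(proc.pid, signal.SIGSTOP)

# "play": resume the suspended process
os.kill(proc.pid, signal.SIGCONT)

# "teardown": kill it outright (SIGTERM would be the politer alternative)
os.kill(proc.pid, signal.SIGKILL)
proc.wait()
print(proc.returncode)  # negative signal number, e.g. -9 for SIGKILL
```

One caveat worth noting: while a stream is SIGSTOPped, the camera keeps sending packets, so socket buffers can fill up and frames may be lost or stale on resume.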

As usual all 'constructive' feedback is very welcome !!
Bart

Hej Bart,
Like most video software, ffmpeg is used and is a dependency. As long as you rely on it and on external calls to executables, I do not think it is "more integrated inside Node-RED" than any other external software that you can easily communicate with.

Hi Walter,

That is absolutely true! But I meant this (correct me if anything is incorrect!!):

  • Motion can support rtsp, since it is itself based on ffmpeg. In that case I prefer to call ffmpeg directly, unless Motion offers some extra rtsp functionality (that we really need) which is not offered by ffmpeg itself ...
  • At the time of writing, I hoped that ffmpeg-binaries would release our users from having to install ffmpeg manually. When that node works correctly again, I can add ffmpeg itself as a dependency (so users can easily install ffmpeg via the Node-RED flow editor).

When I implement motion detection in Node-RED in the (near) future, I will most certainly investigate integrating the Motion project into Node-RED! Unless they do their motion detection via OpenCV, in which case I will call OpenCV directly :rofl::joy: But to end on a happy note: the Motion project has implemented its own motion detection algorithms ...

Bart

apologies for butting in late... - does this offer any help ? https://www.npmjs.com/package/easy-ffmpeg

Hey Dave,
You are always welcome! The party ain't over yet ...

I had also tried easy-ffmpeg, but it did not give me the desired result:

But perhaps I'm using it incorrectly. I see that in the package.json file they call install.js (as a post-install step), and there they test whether the installation is correct:

(screenshot of the install.js check)

So I assume it is installed and available via fluent-ffmpeg, but I don't know why the ffmpeg command is not recognized (not on the system path?). This is the only thing I can find on my Raspberry, and it is old stuff (from 2016):
(screenshot)

[EDIT] I also tried this node, but same result:

This latter node doesn't support the Raspberry Pi's ARM processor, because this code snippet:

    const ffmpeg = require('@ffmpeg-installer/ffmpeg');
    console.log(ffmpeg.path, ffmpeg.version);

Results in " Unsupported platform/architecture: linux-arm"...

That's a bit hopeless, isn't it! Though maybe worth an ask/issue on that project, as they are updating it and claiming it's easy... so... :slight_smile:

Hey Dave,
I have created an issue for the latter project, since it is more actively maintained and has much more downloads. Have added some extra information in the issue, so hopefully I get a (positive) answer soon...

P.S. If you (or anybody else) have any advice on my questions above, please be my guest!!!! Then I can prepare a pull request...