How to display CCTV camera in dashboard (RTSP)

I'm interested in some of the details of how you are doing this. I have a commercial (Lorex) 16-channel "security DVR" whose documentation is terrible. After a lot of Googling and trial and error, I recently discovered I can get RTSP streams (they look good in VLC) or JPEG snapshots from "magic" URLs. Their tech support never bothered to answer my inquiries.

I'm currently feeding my AI system with snapshot JPEGs via the DVR's "FTP motion detection" feature. The main problem is the latency: it can take as long as 4-6 seconds for a snapshot to come in after a motion-trigger event. I don't need a high frame rate, but the snapshot URLs appear to be updated only about once every two seconds, so while I can fetch a snapshot quickly, it could be two seconds old.

Now I'm playing around with the RTSP URLs, using the OpenCV code I'd posted as a starting point. I've used Motion/MotionEye in the past. My concern is that 16 RTSP streams might be a bit heavyweight just for grabbing one frame about every 0.7 seconds from each stream. Obviously I'm interested in whether Motion could be a nice time-saving shortcut for me.
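
For reference, this is the sort of polling I mean, sketched here in Node.js to match the rest of this thread (my actual test code uses OpenCV; the camera URL and the 0.7 s interval below are just placeholders, and each poll spawns a short-lived ffmpeg that grabs a single frame):

    const { execFile } = require('child_process');

    // Grab exactly one frame from an RTSP stream as a JPEG buffer.
    function grabFrame(rtspUrl, callback) {
        execFile('ffmpeg', [
            '-rtsp_transport', 'tcp',     // more robust than UDP on a busy LAN
            '-i', rtspUrl,
            '-frames:v', '1',             // stop after one decoded frame
            '-f', 'image2pipe', 'pipe:1'  // emit it as an image on stdout
        ], { encoding: 'buffer', maxBuffer: 10 * 1024 * 1024 },
        (err, stdout) => callback(err, stdout));  // stdout holds one JPEG
    }

    setInterval(() => grabFrame('rtsp://192.168.1.100/ch01', (err, jpeg) => {
        if (!err) { console.log('frame:', jpeg.length, 'bytes'); }
    }), 700);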

I configured an IP camera with RTSP in Motion; it worked fine.

netcam_url = rtsp://192.168.10.108
netcam_highres = (not defined)
netcam_userpass = admin:1234

Regarding latency, I realized that the camera configuration had a huge impact. At first I thought I would want a massive number of frames just to get early warning of an event. The effect was the opposite: the higher the frame rate you configure in the camera, the more the latency will grow, since all the frames are buffered and it takes a while until you see the actual event. The same goes for a high resolution: this increases the latency too -> more data to push through the network before you see the interesting parts.
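
To illustrate (a sketch only; the resolution itself is set in the camera's own web interface, and the values here are just examples):

netcam_url = rtsp://192.168.10.108
framerate = 2 (keep this low: the more frames get buffered, the higher the latency)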

Hi Mat,

Indeed, as you have seen, my multipart decoder is designed to convert an MJPEG stream into separate JPEG images. However, that has nothing to do with RTSP streams, since that is a completely different protocol.

As Walter proposes, using external software (e.g. Motion) is an easy and powerful solution. However, I really like to have my stuff a bit more integrated inside Node-RED, but that is just my personal opinion...

The Node.js RTSP (JavaScript) libraries are rather limited, so at minimum FFmpeg is required to get the job done.
I did a quick test this evening with a custom RTSP decoding node, based on FFmpeg.


Seems to be working fine. I think such a node might be useful for other users, so I will try to find some spare time to make it more user-friendly ... As soon as I have something ready, I will discuss it on this forum (before publishing it on NPM).

Bart


Hi Bart

That would be superb. I'm going to be really strict with myself about trying not to make external customisations to my automation setup, because the more of this you do, the less permanent it becomes.

So thank you - if there's anything you can put together that I could take a look at, just to get up and running, I'd be extremely grateful! Seems to me a really major "gap in the market" in Node-RED!

Hello Mat,

Instead of writing yet-another-ffmpeg-node, I have compared the existing Node-RED contributions:

  1. node-red-ffmpeg has some disadvantages:

    • It only keeps track of one (spawned) process, so if you start a second one in parallel, you cannot close the first one cleanly.
    • The readme page shows that the functionality is limited.
    • Its ffmpeg_process.stdout.on handler appends data (to the previous data) until the response is complete. However, I 'think' this might be a problem when you have an infinite RTSP stream, where we want an output message for each image (see the sketch further down for how per-chunk output could work).
  2. node-red-contrib-dynamorse-ffmpeg offers rather specific nodes (e.g. AACencoder …), which I 'think' are a bit too specific for our purpose.

  3. node-red-contrib-princip-ffmpeg is experimental.

  4. node-red-contrib-media-utils contains, among other things, ffmpeg nodes. The disadvantages:

    • The nodes have rather specific functionality (e.g. only for specific audio formats)
    • I 'think' these nodes create an input file --> let ffmpeg process that input file --> read the output file that ffmpeg has created. If I'm right, a lot of disk I/O is involved. That is not what we want, since we only want to pass data via memory...
  5. node-red-contrib-viseo-ffmpeg seems to be the right choice for our purpose, since it has a lot of advantages:

    • It keeps track of all (spawned) processes, so they can all be handled correctly (e.g. close all the streams automatically when the flow is redeployed).
    • As soon as data has arrived, it will send an output message (which is exactly what we need for every image in an infinite RTSP stream).
    • All data between ffmpeg and Node-RED is passed via memory (so no disk I/O). Indeed, the ffmpeg process passes data to Node-RED via the stdout stream/pipe, and errors via the stderr stream/pipe.

    But this node also has a disadvantage: two other nodes (that it depends on) have to be installed manually. I have logged an issue about this, and hopefully we will get a solution. In the issue you can find a temporary workaround...
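
To make the difference concrete, here is a minimal sketch (not the actual code of any of these nodes) of how a spawned ffmpeg process can produce one output message per image instead of appending everything. The camera URL is a placeholder, and the splitting on JPEG end-of-image markers is my own assumption about how the stdout chunks need to be reassembled:

    // Spawn ffmpeg and emit one Buffer per complete JPEG image.
    const { spawn } = require('child_process');

    const ffmpeg = spawn('ffmpeg', [
        '-i', 'rtsp://192.168.10.108',  // placeholder camera URL
        '-f', 'image2pipe',             // write raw JPEGs to a pipe
        'pipe:1'                        // pipe:1 = stdout
    ]);

    let buffered = Buffer.alloc(0);
    const EOI = Buffer.from([0xFF, 0xD9]);  // JPEG end-of-image marker

    ffmpeg.stdout.on('data', (chunk) => {
        // One 'data' event is NOT guaranteed to contain exactly one image,
        // so accumulate the chunks and split on the end-of-image marker.
        buffered = Buffer.concat([buffered, chunk]);
        let eoi;
        while ((eoi = buffered.indexOf(EOI)) !== -1) {
            const image = buffered.slice(0, eoi + 2);
            buffered = buffered.slice(eoi + 2);
            // In a Node-RED node this would become: node.send({payload: image});
            console.log('got JPEG of', image.length, 'bytes');
        }
    });

    ffmpeg.stderr.on('data', (data) => console.error(data.toString()));
    ffmpeg.on('close', (code) => console.log('ffmpeg exited with code', code));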

Another disadvantage (in all 5 cases) is that ffmpeg needs to be installed manually, which is a rather slow process. I found that ffmpeg-binaries could be used to install pre-built binaries of ffmpeg on a series of known platforms. However, it failed on my Raspberry Pi 3, which I 'think' might be related to this issue. Summarized: currently you will have to install ffmpeg manually...

Here is my test flow:

[{"id":"86ab3e95.61eb5","type":"ffmpeg-command","z":"18e47039.88483","name":"RTSP via FFMPEG","output":"payload","outputType":"msg","cmd":"-i \"rtsp://184.72.239.149/vod/mp4:BigBuckBunny_175k.mov\" -f image2pipe -hls_time 3 -hls_wrap 10 pipe:1","cmdType":"str","spawn":"true","x":1390,"y":200,"wires":[["377fe881.b68d18"]]},{"id":"f89dff57.9f288","type":"inject","z":"18e47039.88483","name":"","topic":"","payload":"","payloadType":"date","repeat":"","crontab":"","once":false,"onceDelay":0.1,"x":1200,"y":200,"wires":[["86ab3e95.61eb5"]]},{"id":"377fe881.b68d18","type":"function","z":"18e47039.88483","name":"Move output","func":"if (msg.payload.stdout) {\n    msg.payload = msg.payload.stdout;\n    return msg;\n}\n","outputs":1,"noerr":0,"x":1590,"y":200,"wires":[["5186e7cf.8be3e8"]]},{"id":"5186e7cf.8be3e8","type":"image","z":"18e47039.88483","name":"","width":200,"x":1771,"y":200,"wires":[]}]

The core of the solution is the list of arguments that I pass to the ffmpeg executable:

-i "rtsp://184.72.239.149/vod/mp4:BigBuckBunny_175k.mov" -f image2pipe -hls_time 3 -hls_wrap 10 pipe:1

Some explanation of that argument list:

  • The input '-i' will be the rtsp link
  • Since we have no output file, ffmpeg cannot 'guess' the output format from the file extension (e.g. jpeg). Therefore we have to specify via '-f' that the output format is images written to a pipe (image2pipe).
  • Then there are two parameters (hls_time, hls_wrap) that I copied from a tutorial (they appear to be HLS-related options; see the discussion further down).
  • At the end we have to specify where the output (i.e. the images) needs to go. In this case it goes to the first pipe (pipe:1), which means stdout.
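
(For anyone who wants to spawn ffmpeg themselves: the same command line, split into the argument array that child_process.spawn() expects — a sketch, not how the viseo node parses it internally:)

    const args = [
        '-i', 'rtsp://184.72.239.149/vod/mp4:BigBuckBunny_175k.mov',
        '-f', 'image2pipe',   // output format: images written to a pipe
        '-hls_time', '3',     // copied from a tutorial (see further down)
        '-hls_wrap', '10',
        'pipe:1'              // pipe:1 = stdout
    ];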

The argument list is very complex to create, but it is very powerful. We could create a dedicated node-red-contrib-rtsp node that hides this complexity from the user. However, an enormous number of different argument combinations is possible (to do all kinds of great audio/video conversions), so - in my humble opinion - it becomes unmaintainable to create a dedicated node for every purpose. Therefore I think such a generic reusable ffmpeg node is a better solution.

Though it would be nice if we had some kind of central place where people could share their argument lists for the ffmpeg node? Don't know if this forum is the best place??

Have fun with it!!
Bart


Hey Mat,

In the case of infinite streams from cameras, it might be useful to have the following RTSP functionality available in Node-RED:

  • pause: temporarily halt one or all media streams
  • teardown: stop one or all media streams
  • play: resume one or all paused media streams

I will need to create a pull request to have this implemented, however I need to figure out some things before I can do that:

  • I haven't found any functionality in ffmpeg to accomplish this. The only thing I found is this Stack Overflow discussion. So I think (???) it needs to be implemented like this:
    • Pausing (suspending) a stream:
      process.kill(process.pid, 'SIGSTOP');

    • Play (resuming) a paused stream:
      process.kill(process.pid, 'SIGCONT');

    • Teardown (stopping) a stream:
      process.kill(process.pid, 'SIGKILL');
      
    Not really at RTSP-protocol level, but I don't see another way ...
  • Not sure what our input message needs to look like, to give the ffmpeg node those instructions. We need to specify somewhere in the input message both a command (pause/resume/stop) and a process id (see the sketch below this list).
  • Perhaps we can ask to resume/pause/teardown all streams when no process id is specified in the input message.
  • The ffmpeg node should send the process id somewhere in its output messages. That way we know which PID to specify in our input messages to trigger commands...
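
To make this concrete, here is a hypothetical sketch of how the node could map such input messages to signals (msg.command, msg.pid and the list of tracked pids are my assumptions, not an existing API):

    // Map the proposed commands onto the signals listed above:
    const SIGNALS = {
        pause:    'SIGSTOP',  // suspend the ffmpeg process
        play:     'SIGCONT',  // resume a suspended process
        teardown: 'SIGKILL'   // stop the process for good
    };

    function handleControlMessage(msg, trackedPids) {
        const signal = SIGNALS[msg.command];
        if (!signal) { return; }
        // When no pid is specified, apply the command to all tracked streams.
        const targets = msg.pid ? [msg.pid] : trackedPids;
        for (const pid of targets) {
            process.kill(pid, signal);
        }
    }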

As usual all 'constructive' feedback is very welcome !!
Bart

Hej Bart,
Like most video software, it uses ffmpeg as a dependency. As long as you rely on that and on external calls to executables, I do not think it is "more integrated inside Node-RED" than any other external software that you can easily communicate with.

Hi Walter,

What you say is absolutely true! But I meant this (correct me if anything is incorrect!!):

  • Motion supports RTSP because it is itself based on ffmpeg. In that case I prefer to call ffmpeg directly, unless Motion offers some extra RTSP functionality (that we really need) which is not offered by ffmpeg itself...
  • At the time of writing, I had hoped that ffmpeg-binaries would free our users from manually installing ffmpeg. When that package works correctly again, I can add ffmpeg itself as a dependency (so users can easily install it via the Node-RED flow editor).

When I implement motion detection in Node-RED in the (near) future, I will most certainly investigate integrating the Motion project into Node-RED! Unless they do their motion detection via OpenCV, in which case I would call OpenCV directly :rofl::joy: But to finish with a happy ending: the Motion project has implemented its own motion-detection algorithms...

Bart

apologies for butting in late... - does this offer any help ? https://www.npmjs.com/package/easy-ffmpeg

Hey Dave,
You are always welcome! The party ain't over yet ...

I had also tried easy-ffmpeg, but it did not give me the desired result.

But perhaps I'm using it incorrectly. I see that in the package.json file they call install.js (as a post-install step), and there they test whether the installation is correct.

So I assume it is installed and available via fluent-ffmpeg, but I don't know why the ffmpeg command is not recognized (not on the system path?). The only thing I can find on my Raspberry is old stuff (from 2016).

[EDIT] I also tried this node, but with the same result.

The latter node doesn't support the Raspberry Pi's ARM processor, because this code snippet:

    const ffmpeg = require('@ffmpeg-installer/ffmpeg');
    console.log(ffmpeg.path, ffmpeg.version);

results in "Unsupported platform/architecture: linux-arm"...

That's a bit hopeless, isn't it! Though it's maybe worth an ask/issue on that project, as they are updating it and claiming it's easy... so... :slight_smile:

Hey Dave,
I have created an issue for the latter project, since it is more actively maintained and has many more downloads. I have added some extra information in the issue, so hopefully I get a (positive) answer soon...

P.S. If you (or anybody else) have any advice on my questions above, please be my guest!!!! Then I can prepare a pull request...

sorry - another diversion... how about this? https://www.npmjs.com/package/@ffmpeg-installer/ffmpeg

def has the actual binary this time (not tried on Pi though)

Huh, that is weird. That is the same repository where I created my issue.
But indeed, when I look at where he gets his binaries, the 3 types of ARM binaries are also available.
I already got feedback from the author: his script doesn't currently detect ARM processors, but he is not against adding that to his node...

I do not want to convince anyone of what they should use; the only thing I wanted to share is my experience, and that Motion fulfills the needs I have.

To simplify installation, there are also binaries available for the most-used platforms and Debian versions (including Stretch), which makes it a no-brainer:
https://motion-project.github.io/motion_build.html

You communicate with Motion using HTTP. In my case I wanted some more control (checking that the Motion process is running, a watchdog and other stuff), so I have a continuously running Python script as a bridge between Motion and NR:

Motion <-- http --> My Script <-- MQTT --> NR

This solution is really working great
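
(My bridge is a Python script; purely as an illustration, a minimal Node.js equivalent of the watchdog part could look like the sketch below — the Motion webcontrol URL/port and the MQTT topic are assumptions, adapt them to your setup:)

    // Poll Motion's webcontrol interface and publish the result over MQTT,
    // so NR can raise an alarm when the Motion process stops answering.
    const http = require('http');
    const mqtt = require('mqtt');   // npm package 'mqtt'

    const client = mqtt.connect('mqtt://localhost:1883');

    setInterval(function () {
        http.get('http://localhost:8080/0/detection/status', function (res) {
            let body = '';
            res.on('data', function (chunk) { body += chunk; });
            res.on('end', function () {
                client.publish('motion/status', body.trim());
            });
        }).on('error', function () {
            client.publish('motion/status', 'DOWN');
        });
    }, 5000);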

Motion is able to detect motion and has built-in algorithms for it. You can configure a number of parameters to reduce false alarms, and this helps to a great extent. But Motion is NOT able to do object classification and/or identification. For that purpose I have a DNN analyzer (already discussed in another thread). So when Motion simply detects motion, pictures are sent on for analysis. The outcome of the analysis decides whether an event is sent to my phone or not:

Motion --> picture with motion --> DNN Analyzer --> Wanted object detected (e.g. human) --> event w picture --> Telegram

But as I said, everyone needs to do what feels best. For me, this was just a perfect match


MotionEyeOS is basically a system with "just enough Linux to run Motion" and a web-based UI for setup and control. It works quite well. I don't use it anymore, but I'd like to point out that there is work by some MotionEyeOS users to add AI to Motion:

The motioneyeos thread where the ideas came together:

The GitHub repo for the Motion mods:

I'd also like to point out that Intel has just released version 5 of their OpenVINO toolkit, shortly after the release of the Movidius NCS2 stick. My tests with OpenVINO v4 on an i3 CPU showed about a 4X speedup for the NCS2 over the original stick, for about 30% more money. OpenVINO v4 didn't support the Raspberry Pi (or ARM), but v5 is supposed to. I just downloaded it, and it will become a priority for me after Xmas.

My AI just alerted me to the front door, where Amazon dropped off a package. It's kind of annoying that the driver doesn't even bother to ring the doorbell, but I had the package before he was back in the truck :slight_smile:


Thanks to everyone for the input here!

Sorry, I didn't quite follow your points to date; what's the simplest way you found to get an RTSP stream showing?

Summarized:

  • The simplest way I found to stream RTSP in Node-RED is the flow above which is based on node-red-contrib-viseo-ffmpeg.
  • But I would also like ffmpeg to be installed automatically on my Raspberry, so a pull request for ffmpeg-installer is required (to support ARM processors).
  • And I would like to be able to pause/stop/resume RTSP streams, so I need to create a pull request for node-red-contrib-viseo-ffmpeg.

But Walter has a nice solution based on Motion, so that is another alternative if you like that more...

@BartButenaers Ahh, now I understand more. Thanks. I checked out the issue you raised on GitHub and followed the instructions for what you did. I managed to pull the stream up using the image node; however, the picture is flashing. I guess I need to change the values for -hls_wrap and -hls_time; are these stream controls? How did you determine them?

My VideoLAN command doesn't refer to wrap or time parameters, so I can't get it working.

Secondly, any idea how to pipe that output to the dashboard?

Thanks for your help!

The only time it flashed in my case was because I had injected two input messages (by accidentally pressing the inject button twice). Then two streams were started and their images were displayed intermixed. But I assume that won't be your case...

I just copied them from one or another tutorial. I haven't had time yet to look at the settings in more detail. So be my guest if you have some spare time, and please share your results here...

I would advise not pushing the images to the dashboard via the websocket channel, unless you want to run into problems. In the following link you can find some ways to do it.