How to display a CCTV camera in the dashboard (RTSP)

Sorry - another diversion... how about https://www.npmjs.com/package/@ffmpeg-installer/ffmpeg ?

That one definitely has the actual binary this time (not tried on a Pi though).

Heuh, that is weird. That is the same repository where I created my issue.
But indeed, when I look at where he gets his binaries, the three types of ARM binaries are also available.
I already got feedback from the author: his script doesn't currently detect ARM processors, but he is not against adding that to his node...

I do not want to convince anyone of what they should use; the only thing I wanted to share is my experience, and that Motion fulfils the needs I have.

To simplify installation, there are also binaries available for the most-used platforms and Debian versions (including Stretch), which makes it a no-brainer:
https://motion-project.github.io/motion_build.html

You communicate with Motion using HTTP. In my case I wanted some more control, like checking that the Motion process is running, a watchdog and other stuff, so I have a continuously running Python script as a bridge between Motion and NR:

Motion <-- http --> My Script <-- MQTT --> NR
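
To give a rough idea of the bridging part (this is only a sketch, not my actual script, which is written in Python and also does the watchdog/restart handling): you can poll Motion's webcontrol HTTP interface and publish the result over MQTT, where NR subscribes to it. Assuming Motion's webcontrol on port 8080, camera id 0, a local Mosquitto broker and made-up topic names:

#!/bin/bash
# Sketch only: poll Motion's webcontrol API and forward the answer to MQTT.
# Assumptions: webcontrol_port 8080, camera id 0, a local Mosquitto broker,
# and a topic name invented for this example.
while true; do
  if status=$(curl -sf http://localhost:8080/0/detection/status); then
    mosquitto_pub -t cameras/motion/status -m "$status"
  else
    mosquitto_pub -t cameras/motion/status -m "motion process not responding"
  fi
  sleep 10
done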

This solution is really working great

Motion is able to detect motion and has built-in algorithms for it. You can configure a number of parameters to reduce false alarms, and this helps to a great extent. But Motion is NOT able to do object classification and/or identification. For that purpose I have a DNN analyzer (discussed already in another thread). So when Motion simply detects motion, the pictures are sent on for analysis. The outcome of the analysis decides whether an event is sent to my phone or not:

Motion --> picture with motion --> DNN Analyzer --> Wanted object detected (e.g. human) --> event w picture --> Telegram

But as I said, everyone needs to do what feels best. For me, this was just a perfect match


MotionEyeOS is basically a system with "just enough Linux to run Motion" and a web-based UI for setup and control. It works quite well; I don't use it anymore, but I'd like to point out that there is work by some MotionEyeOS users to add AI to Motion:

The MotionEyeOS thread where the ideas came together:

The GitHub repo for the Motion mods:

I'd also like to point out that Intel has just released version 5 of their OpenVINO toolkit, shortly after the release of the Movidius NCS2 stick. My tests with OpenVINO v4 on an i3 CPU showed about a 4X speedup of the NCS2 over the original stick for about 30% more money. OpenVINO v4 didn't support the Raspberry Pi (or ARM), but v5 is supposed to. I just downloaded it and it will become a priority for me after Xmas.

My AI just alerted me to the front door, where Amazon dropped off a package. It's kind of annoying that the driver doesn't even bother to ring the doorbell, but I had the package before he was back in the truck :slight_smile:


Thanks to everyone for the input here!

Sorry, I didn't quite follow your points to date. What's the simplest way you have found to get an RTSP stream showing?

Summarized:

  • The simplest way I found to stream RTSP in Node-RED is the flow above, which is based on node-red-contrib-viseo-ffmpeg.
  • But I would also like ffmpeg to be installed automatically on my Raspberry Pi, so a pull request for ffmpeg-installer is required (to add support for ARM processors).
  • And I would like to be able to pause/stop/resume the RTSP stream, so I need to create a pull request for node-red-contrib-viseo-ffmpeg.

But Walter has a nice solution based on Motion, so that is another alternative if you like that better...

@BartButenaers Ahh, now I understand more. Thanks. I checked out the issue you raised on GitHub and followed the instructions for what you did. I managed to pull the stream up using the image node; however, the picture is flashing. I guess I need to change the values for -hls_wrap and -hls_time. Are these stream controls? How did you determine them?

My VideoLAN command doesn't refer to any wrap or time parameters, so I can't work out suitable values from it.

Secondly, any idea how to pipe that output to the dashboard?

Thanks for your help!

The only time it was flashing in my case was because I had injected two input messages (by accidentally pressing the inject button twice). Then two streams are started and their images are displayed mixed together. But I assume that won't be your case ...

I just copied them from one or another tutorial. I haven't had time yet to look at the settings in more detail. So be my guest if you have some spare time, and please share your results here ...

I would advise not to push the images to the dashboard via the websocket channel, unless you want to run into problems. In the following link you can find some ways to do it.

Just reporting back on this. So far I've had no luck actually displaying CCTV on my Node-RED dashboard.

My preferred course of action will be to run VLC as a system service in Windows using something like nssm (a service manager), to serve up the RTSP stream as an MJPEG one from Windows. I'll run an instance for each camera. This is a bit of a faff, but honestly seems quicker / more dependable than the other options. As was said earlier, each person has to use the option they feel comfortable with.
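
Roughly what I have in mind is a command along these lines (untested; the sout options are taken from the VLC streaming documentation, and the camera URL and port are made up), one instance per camera, wrapped as a service with nssm:

# untested sketch: restream one RTSP camera as MJPEG over HTTP on port 8081
vlc -I dummy "rtsp://user:pass@192.168.1.10:554/Streaming/Channels/101" --sout "#transcode{vcodec=MJPG}:standard{access=http,mux=mpjpeg,dst=:8081/}"

The dashboard should then be able to show it with a simple template node containing an img tag pointing at http://<windows-host>:8081/.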

But here's what I tried:

  • I installed viseo-ffmpeg, and as per @BartButenaers's post on the GitHub page for this package, I also had to install two dependencies manually, but I appreciate that Bart has raised this issue and hopefully it will be possible to automate this. Although I was able to show some moving images (in the Node-RED admin panel using his flow posted above), the picture from my HikVision camera was distorted. I then passed it through the Encode node (multipart encoder) to show the output on the Dashboard, which did the same thing - shown in the screenshot below. In addition to this, the picture was flashing.

  • I installed Motion as suggested by @krambriw. It seems like good software, but like most open-source Linux projects (Node-RED not included in this statement), the documentation suffers heavily from assuming that the person using it is very proficient with Linux and/or wishes to invest a lot of time in getting it working. I installed using apt-get install motion, which was indeed the easy part, but then proceeded to the configuration step in the install instructions and basically gave up. That section has so many paragraphs that start with "if you have xyz" ... (most of which I can't answer, because I know nothing about the software) - basically way too much to learn just to launch an application. And when that's done and the server reboots itself after a power cut in 40 days, it won't work until I restart the software manually, so I then have to remember where the config files are stored just to launch it ... etc. etc.

So I'm no closer to being able to view CCTV from the dashboard :frowning:

Here's the pic from ffmpeg. When I load the feed in VideoLAN on Windows, it shows with no problems, so I'm not sure what's going on here:

Honestly, I think you do not need to change that much in the configuration. The defaults should be fine for most settings, except that you will have to configure the URLs of your IP cams and, following the guide, create a separate config file for each of your cameras.
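
As a rough example only (the camera URL, credentials and port are of course made up; the option names are the ones from the Motion documentation), a per-camera file could look like this, and is then referenced from motion.conf with a line like camera /etc/motion/camera1.conf:

# /etc/motion/camera1.conf - one file like this per camera (all values are examples)
netcam_url rtsp://user:pass@192.168.1.10:554/Streaming/Channels/101
framerate 5
stream_port 8081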

And regarding autostart, there are many ways in Linux. I have a line in my crontab pointing to the correct configuration file:

@reboot sleep 30 && sudo motion -nm -c /etc/motion/motion.conf

And yes, there are a lot of "new" things to learn with Linux, as there once were with DOS & Windows. In addition you will need know-how about HTML, JavaScript, possibly Python and Bash, Linux commands, and so on.

I tell you, I have spent some time, sweat & tears myself, and it feels like I am still at the beginning.

Can you please post a screenshot of the distorted image in the flow editor (on the image-output node, not on the dashboard)? Probably the RTSP command parameters from my test flow are not good. As I said before, it is just a test URL that I found somewhere random on the web to test my developments... Did you enter exactly the same URL in VLC to get a good result? In that case I would expect the same result here, but I'm not an RTSP expert...

To get a more recent version you should install according to this (it was in the link I posted earlier):

Installing with a release deb package

Motion can also be installed from the release deb files, which may provide a more recent version than what is available via apt. Determine the deb file name that is appropriate for the distribution and platform from the Releases page, then open up a terminal window and type:

wget https://github.com/Motion-Project/motion/releases/{deb package name}

Next, install the retrieved deb package. Below is a sample method to install that uses the gdebi tool:

sudo apt-get install gdebi-core
sudo gdebi {deb package name}

It seems this kind of distortion is called image smearing. The default UDP buffer size that ffmpeg uses is not large enough to hold an entire HD image frame. As a result, the received image will be smeared downwards from some point vertically (on some frames). And if you have network issues (non-ffmpeg related), UDP will highlight them via these distortions (as this guy with Hikvision cameras explains).

Some related articles about it (1, 2), and a document with some solutions.

Based on these articles, the problem could be solved by explicitly specifying that TCP should be used (instead of UDP):

-rtsp_transport tcp -i "rtsp://184.72.239.149/vod/mp4:BigBuckBunny_175k.mov" -f image2pipe -hls_time 3 -hls_wrap 10 pipe:1

It would be nice if you could test this, even if you are going to use VLC or Motion in the end (so other users can benefit from our discussion afterwards) ...
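
If it is easier, you could also test it with a standalone ffmpeg command outside of Node-RED (assuming ffmpeg is installed and on your path); it just dumps a handful of frames to disk, so you can check them for smearing:

# grab 10 frames over TCP and write them as JPEGs (test command, not part of the flow)
ffmpeg -rtsp_transport tcp -i "rtsp://184.72.239.149/vod/mp4:BigBuckBunny_175k.mov" -vframes 10 -q:v 2 frame_%03d.jpg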


I have just reinstalled the relevant nodes to test this. I'm still getting the same problem. Earlier you asked for a screenshot of the image in the flow, not the dashboard. I wasn't able to post one because the image was flashing too much; I tried for a long time to press the screenshot button at exactly the same moment the image flashed on. Anyway, in the end I took a video.

This video shows with the new tcp settings: https://photos.app.goo.gl/KYkjXkVZtxgsdGwx6

Hey Bart,
I tried it with just a simple USB cam (as well as with your movie sample). I think the CPU load becomes too high running this on an RPi3. If I compare with Motion (same camera, same fps), the CPU load is just one third of what ffmpeg requires. Anyway, nice try.

Have a Nice Christmas

Hey Walter, that is very interesting info!!!!! So you guys have two solutions that work fine:

  • VLC media player: it uses ffmpeg under the hood, but I think they use the LiveMedia library for RTSP (instead of ffmpeg). So LiveMedia 'could' indeed be faster than ffmpeg...
  • Motion project: from their code I understand that they really use ffmpeg under the hood for RTSP. Therefore I don't understand how Motion can be 3x faster compared to ffmpeg :woozy_face: I assume you cannot see somewhere in Motion which ffmpeg command is being used?

Pffff ....

For you also a Very Nice Christmas!!!!!!!!!!!!!!!

That "smeared" image does look like very high resolution... maybe cutting back the resolution somewhat would speed up the processing and allow it to complete within one frame.


Indeed Dave, that was also something I was thinking about. The Motion software is able to get the stream on the SAME Raspberry with the SAME resolution via ffmpeg, but 3 times faster! This can only mean there is a 'major' difference somewhere.

  • Perhaps ffmpeg is built another way
  • Or ffmpeg is called with other parameters
  • ...

As an experiment, I increased the fps to 25 and it got even worse.

Running Motion gives approx. 17% CPU load (shown in top). The viewer is a Chrome browser, with the video stream served via HTTP. The picture shown is still useful, i.e. it is updated fast enough; movements are delayed about 1-2 seconds until they are visible in the browser.

With the NR and ffmpeg sample project, the CPU load is around 99% and the result is really not usable for live viewing, unless you are fine with waiting 15-20 seconds for movements to become visible.
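
If someone wants to reproduce the comparison: I simply read the %CPU column in top. A one-shot snapshot like the following works as well (assuming those are the process names on your system):

top -b -n 1 | grep -E 'motion|ffmpeg|node-red'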

Now the thing is, for the moment I just have an ordinary USB web cam. You might get better results with an IP camera able to provide an MPEG-4 stream. But I remember from when I had such an IP cam available some weeks ago that the time required for updates to "pass through" the whole way to your viewer/browser was heavily dependent on resolution and fps.

Anyway, soon it will be time to relax.

I wish to take the opportunity to say thanks to all the great people in this fantastic forum.
Wish you all a Merry Christmas and a Happy New Year!


Ah, perhaps something else is using lots of CPU?? Could it be that things get better if you remove the image-output node? I'm afraid it won't help, though ...

Hey Walter, this kind of stuff is my way to relax :wink:

I would hate having to stop my ffmpeg developments at this point ...
Last night I prepared a pull request to add pause/resume/stop to the node-red-contrib-viseo-ffmpeg node, and it seems to be working fine:

So if you don't mind, I'm going to continue spamming a bit more here, before I give up :woozy_face:
