How to display CCTV camera in dashboard (RTSP)


#17

apologies for butting in late - does this offer any help? https://www.npmjs.com/package/easy-ffmpeg


#18

Hey Dave,
You are always welcome! The party ain't over yet ...

I had also tried easy-ffmpeg, but that didn't give me the desired result.

But perhaps I'm using it incorrectly. I see that in the package.json file they call install.js (as a post-install step), and there they test whether the installation is correct:

[screenshot]

So I assume it is installed and available via fluent-ffmpeg, but I don't know why the ffmpeg command is not recognized (not on system path ?). This is the only thing I can find on my Raspberry, and it is old stuff (from 2016):
[screenshot]

[EDIT] I also tried this node, but with the same result.

This latter node doesn't support Raspberry's ARM processor, because this code snippet:

    const ffmpeg = require('@ffmpeg-installer/ffmpeg');
    console.log(ffmpeg.path, ffmpeg.version);

results in "Unsupported platform/architecture: linux-arm"...
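In the meantime, a possible workaround (just a rough sketch of the idea - the fallback logic is mine, not something either package offers) could be to catch that error and fall back to a system-wide ffmpeg binary, if one is installed:

    const { execSync } = require('child_process');

    let ffmpegPath;
    try {
        // Throws "Unsupported platform/architecture: linux-arm" on the Raspberry today
        ffmpegPath = require('@ffmpeg-installer/ffmpeg').path;
    } catch (err) {
        console.warn('ffmpeg-installer failed (' + err.message + '), trying system ffmpeg');
        // 'which' prints the full path if an ffmpeg binary is on the system PATH
        ffmpegPath = execSync('which ffmpeg').toString().trim();
    }
    console.log('Using ffmpeg at', ffmpegPath);

That of course still means ffmpeg has to be installed manually (e.g. via apt-get), which is exactly what I would like to avoid...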


#19

That's a bit hopeless, isn't it! Though maybe worth raising an issue on that project, as they are updating it and claiming it's easy... so... :slight_smile:


#20

Hey Dave,
I have created an issue on the latter project, since it is more actively maintained and has many more downloads. I have added some extra information in the issue, so hopefully I will get a (positive) answer soon...

P.S. If you (or anybody else) have any advice on my questions above, please be my guest!!! Then I can prepare a pull request...


#21

sorry - another diversion... how about this? https://www.npmjs.com/package/@ffmpeg-installer/ffmpeg

def has the actual binary this time (not tried on Pi though)


#22

Huh, that is weird. That is the same repository where I have created my issue.
But indeed, when I look at where he gets his binaries from, the 3 types of ARM binaries are also available there.
I already got feedback from the author: his script doesn't currently detect ARM processors, but he is not against adding it to his node...


#23

I do not want to convince anyone of what they should use; the only thing I wanted to share is my experience, and that Motion fulfils the needs I have.

To simplify installation, there are also binaries available for the most-used platforms and Debian versions (including Stretch), which makes it a no-brainer:
https://motion-project.github.io/motion_build.html

You communicate with Motion using http. In my case I wanted some more control (checking that the Motion process is running, a watchdog and other stuff), so I have a continuously running Python script as a bridge between Motion and NR:

Motion <-- http --> My Script <-- MQTT --> NR

This solution is really working great
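Roughly, the bridge does something like the sketch below (written in Node.js here purely as an illustration - my real script is Python, and the webcontrol port/URL and MQTT topic are just example values; Motion's webcontrol interface must be enabled for this to work):

    const http = require('http');
    const mqtt = require('mqtt');   // npm install mqtt

    const client = mqtt.connect('mqtt://localhost:1883');

    // Every 30 seconds: ask Motion (via its http webcontrol interface) whether
    // detection is running, and publish the answer so that NR can raise an
    // alarm when the Motion process has died.
    setInterval(() => {
        http.get('http://localhost:8080/0/detection/status', (res) => {
            let body = '';
            res.on('data', (chunk) => body += chunk);
            res.on('end', () => client.publish('cctv/motion/status', body.trim()));
        }).on('error', () => client.publish('cctv/motion/status', 'DOWN'));
    }, 30000);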

Motion is able to detect motion and has built-in algorithms for it. You can configure a number of params to reduce false alarms, and this helps to a great extent. But Motion is NOT able to do object classification and/or identification. For that purpose I have a DNN analyzer (discussed already in another thread). So when Motion simply detects motion, the pictures are sent on for analysis. The outcome of the analysis decides whether an event is sent to my phone or not:

Motion --> picture with motion --> DNN Analyzer --> Wanted object detected (e.g. human) --> event w picture --> Telegram
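Just to illustrate the filtering step (this is NOT my analyzer - that one is discussed in the other thread - but, for example, the coco-ssd model from TensorFlow.js can do a similar "is there a person in this picture" check):

    const tf = require('@tensorflow/tfjs-node');
    const cocoSsd = require('@tensorflow-models/coco-ssd');

    // Returns true when the JPEG picture that Motion saved contains a person
    // with reasonable confidence. Loading the model on every call is wasteful;
    // a real script would load it once and keep it in memory.
    async function containsPerson(jpegBuffer) {
        const model = await cocoSsd.load();
        const img = tf.node.decodeImage(jpegBuffer);
        const predictions = await model.detect(img);
        img.dispose();
        return predictions.some(p => p.class === 'person' && p.score > 0.6);
    }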

But as I said, everyone needs to do what feels best. For me, this was just a perfect match


#24

MotionEyeOS is basically a system with "just enough Linux to run Motion" and a web-based UI for setup and control. It works quite well. I don't use it anymore, but I'd like to point out that there is work by some MotionEyeOS users to add AI to Motion:

The motioneyeos thread where the ideas came together:

The github for the motion mods:

I'd also like to point out that Intel has just released version 5 of their OpenVINO toolkit, shortly after the release of the Movidius NCS2 stick. My tests with OpenVINO v4 on an i3 CPU showed about a 4X speedup of the NCS2 over the original stick, for about 30% more $. OpenVINO v4 didn't support the Raspberry Pi (or ARM), but v5 is supposed to. I just downloaded it, and it will become a priority for me after Xmas.

My AI just alerted me to the front door, where Amazon dropped off a package, kind of annoying that the driver doesn't even bother to ring the doorbell, but I had the package before he was back in the truck :slight_smile:


#25

Thanks to everyone for the input here!

Sorry, I didn't quite follow your points to date - what's the simplest way you found to get an RTSP stream showing?


#26

Summarized:

  • The simplest way I found to stream RTSP in Node-RED is the flow above which is based on node-red-contrib-viseo-ffmpeg.
  • But I would also like ffmpeg to be installed automatically on my Raspberry, so a pull request for ffmpeg-installer is required (to support ARM processors).
  • And I would like to be able to pause/stop/resume RTSP stream, so I need to create a pull request for node-red-contrib-viseo-ffmpeg.

But Walter has a nice solution based on Motion, so that is another alternative if you like that more...


#27

@BartButenaers Ahh, now I understand more. Thanks. I checked out the issue you raised on GitHub and followed the instructions for what you did. I managed to pull the stream up using the image node; however, the picture is flashing. I guess I need to change the values for -hls_wrap and -hls_time - are these stream controls? How did you determine these?

My videolan command doesn't refer to wrap or time parameters, so I can't get it working.

Secondly, any idea how to pipe that output to the dashboard?

Thanks for your help!


#28

The only time it was flashing in my case was because I had injected two input messages (by accidentally pressing the inject button twice). Then two streams were started and the images were displayed intermixed. But I assume that won't be your case ...

I just copied them from one tutorial or another. I haven't had time yet to look at the settings in more detail. So be my guest if you have some spare time, and please share your results here ...

I would advise not pushing the images to the dashboard via the websocket channel, unless you want to run into problems. In the following link you can find some ways to do it.
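One of the ways (roughly sketched - the flow-context key and the endpoint name below are only examples) is to store the latest JPEG frame in flow context and serve it via an http-in node (GET /cam) wired through a Function node to an http-response node; a dashboard template can then load /cam in an <img> tag and refresh it periodically. The Function node would contain something like:

    // Assumes the ffmpeg flow stores the most recent JPEG frame in flow context
    // under the (made-up) key 'lastFrame'.
    msg.payload = flow.get('lastFrame') || Buffer.alloc(0);
    msg.headers = { 'Content-Type': 'image/jpeg' };
    return msg;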


#29

Just reporting back on this. So far I've had no luck actually displaying CCTV on my Node-RED dashboard.

My preferred course of action will be to run VLC as a system service in Windows using something like nssm (a service manager), to serve up the RTSP stream as an MJPEG one from Windows. I'll run an instance for each camera. This is a bit of a faff, but honestly seems quicker / more dependable than the other options. As Burt says, each person has to use the option they feel comfortable with.

But here's what I tried:

  • I installed viseo-ffmpeg, and as per @BartButenaers's post on the GitHub page for this package, I also had to install two dependencies manually, but I appreciate that Bart has raised this issue and hopefully this step can be automated. Although I was able to show some moving images (in the Node-RED admin panel, using his flow posted above), the picture from my HikVision camera was distorted. I then passed it through the Encode node (multipart encoder) to show the output on the Dashboard, which did the same thing - shown in the screenshot below. On top of this, the picture was flashing.

  • I installed Motion as suggested by @krambriw. It seems like good software, but like most open-source Linux projects (Node-RED not included in this statement), the documentation suffers heavily from assuming that the person using it is very proficient with Linux, and/or wishes to invest a lot of time in getting it working. I installed using apt-get install motion - this was indeed the easy part - but then proceeded to the configuration step in the install instructions, and basically gave up. This section has so many paragraphs that start with "if you have xyz" ... (most of which I can't answer, because I know nothing about the software) - basically way too much to learn just to launch an application. And when that's done and the server reboots itself after a power cut in 40 days, it won't work until I restart the software manually, so I then have to remember where the config files are stored just to launch it ... etc. etc.

So I'm no closer to being able to view CCTV from the dashboard :frowning:

Here's the pic from ffmpeg. When I load the feed in VideoLAN on Windows, it shows with no problems, so I'm not sure what's going on here:


#30

Honestly, I think you do not need to change that much in the configuration; the defaults should be fine for most, except that you will have to configure the URLs of your IP cams and, following the guide, create a separate config file for each of your cameras.

And regarding autostart, in Linux there are many ways. I have a line in my crontab pointing to the correct configuration file:

    @reboot sleep 30 && sudo motion -nm -c /etc/motion/motion.conf

And yes, there are a lot of "new" things to learn with Linux, as it once was with DOS & Windows. In addition you will need know-how about HTML, JavaScript, perhaps Python and bash, Linux commands, and so on.

I tell you, I did spend some time, sweat & tears myself, and it feels like I am still at the beginning.


#31

Can you please post a screenshot of the distorted image in the flow editor (on the image-output node, not on the dashboard)? Probably the RTSP command parameters from my test flow are not good. As I said before, it is just a test URL that I found somewhere random on the web to test my developments... Did you enter exactly the same URL in VLC to get a good result? In that case I would expect the same result here, but I'm not an RTSP expert...


#32

To get a more recent version you should install according to this (was in the link I posted earlier):

Installing with a release deb package

Motion can also be installed from the release deb files, which may provide a more recent version than what is available via apt. Determine the deb file name that is appropriate for the distribution and platform from the Releases page, open up a terminal window and type:

    wget https://github.com/Motion-Project/motion/releases/{deb package name}

Next, install the retrieved deb package. Below is a sample method to install that uses the gdebi tool:

    sudo apt-get install gdebi-core
    sudo gdebi {deb package name}


#33

It seems this kind of distortion is called image smearing. The default UDP buffer size that ffmpeg uses is not large enough to hold an entire HD image frame. As a result, the received image will be smeared down from some point vertically (on some frames). If you have (non-ffmpeg-related) network issues, UDP will highlight them via these distortions (as this guy with Hikvision cameras explains).

Some related articles about it (1, 2), and a document with some solutions.

Based on these articles, the problem could be solved by explicitly specifying that TCP should be used (instead of UDP):

    -rtsp_transport tcp -i "rtsp://184.72.239.149/vod/mp4:BigBuckBunny_175k.mov" -f image2pipe -hls_time 3 -hls_wrap 10 pipe:1

It would be nice if you could test this, even if you are going to use VLC or Motion (so other users can benefit from our discussion afterwards) ...


#34

I have just reinstalled the relevant nodes to test this. I'm still getting the same problem. Earlier you asked for a screenshot of the image in the flow, not the dashboard. I wasn't able to post this because the image was flashing too much; I tried for a long time to press the screenshot button at exactly the same moment the image flashed up, so in the end I took a video instead.

This video shows with the new tcp settings: https://photos.app.goo.gl/KYkjXkVZtxgsdGwx6


#35

Hey Bart,
I tried it with just a simple USB cam (as well as with your movie sample). I think the CPU load becomes too high running this on an RPi3. If I compare with Motion (same camera, same fps), the CPU load is just one third of what ffmpeg requires. Anyway, nice try.

Have a Nice Christmas


#36

Hey Walter, that is very interesting info!!! So you guys have two solutions that work fine:

  • VLC media player: it uses ffmpeg under the hood, but I think they use the LiveMedia library for RTSP (instead of ffmpeg). So LiveMedia 'could' indeed be faster than ffmpeg...
  • Motion project: from their code I understand that they really use ffmpeg under the hood for RTSP. Therefore I don't understand how Motion can be 3x faster compared to ffmpeg :woozy_face: I assume you cannot see somewhere in Motion which ffmpeg command is being used?

Pffff ....

For you also a Very Nice Christmas!!!!!!!!!!!!!!!