How to display CCTV camera in dashboard (RTSP)

Open each stream in VLC and check that they are of the same type: frame rate, image size, codec, bit rate etc. I suspect that you will find they are not the same, and that ffmpeg is doing more work in the NR case.

Hey Colin,

If I run the test stream (with the rabbit) I see the following in VLC:

[screenshot]

And my Raspberry is running smoothly.

But of course the resolution is very low ...

@krambriw: Would it be possible to test e.g. this public rtsp stream with Motion? Then we would have a common reference to compare. In VLC you can see that this stream is much heavier than my original stream with the rabbit:

[screenshot]

Now my Raspberry Pi 3 is getting loaded more heavily, but it is not exploding yet:

The red arrows mark where the stream starts ...
I have no clue whether this is acceptable for this kind of hardware...

Got a tip from my audio/video partner @btsimonh that I could check the GPU acceleration. So I built ffmpeg with hardware acceleration enabled, and changed the command to force ffmpeg to use hardware decoding (in the GPU) via h264_mmal:

-f rtsp -vcodec h264_mmal -i "rtsp://freja.hiof.no:1935/rtplive/_definst_/hessdalen03.stream" -f image2pipe -hls_time 3 -hls_wrap 10 pipe:1

However, I then get an error about the hardware decoding:
[screenshot]

The cause is (probably) that hardware MPEG2 decoding is not enabled on my Raspberry:

pi@raspberry-testboard:~ $ vcgencmd codec_enabled MPG2
MPG2=disabled
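
For reference, the same firmware query works for the other codec names too, so you can list in one go what the GPU will and won't decode (a quick sketch):

# Ask the VideoCore firmware about each codec of interest
for codec in H264 MPG2 WVC1 MPG4 MJPG; do
    vcgencmd codec_enabled $codec
done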

From this nice article it seems that you need to buy an MPEG2 license key (from the Raspberry Pi Store), which is uniquely linked to the serial number of your Raspberry Pi. I really don't like this: I have bought a Raspberry that includes a chip I'm not allowed to use ... :rage:

Will come back here after Christmas. See you guys !!

Hi Mat,

I am a bit late responding to this thread, but I did this some time ago. I am using avconv, which is like ffmpeg on Linux (as far as I understand).
I am using a simple exec command like this:
avconv -i rtsp://user:pass@192.168.1.71:554/ch01.264 -frames 1 -qscale 1 -f image2 /home/pi/image.jpg

Works like a charm every time.

If you want to capture a video, that is also very similar:
avconv -i rtsp://user:pass@192.168.1.71:554/ch01.264 -r 10 -t 30 -y -vcodec copy -an /home/pi/grab.mp4

This generates a 30 second clip from the RTSP stream.
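
Since avconv and ffmpeg share the same option syntax for these basics, the equivalent ffmpeg commands should be a drop-in replacement on distros without avconv (a sketch, not tested on my side):

# Single snapshot, same options as the avconv command above
ffmpeg -i rtsp://user:pass@192.168.1.71:554/ch01.264 -frames 1 -qscale 1 -f image2 /home/pi/image.jpg
# 30 second clip, copying the video stream without re-encoding, audio dropped
ffmpeg -i rtsp://user:pass@192.168.1.71:554/ch01.264 -r 10 -t 30 -y -vcodec copy -an /home/pi/grab.mp4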

I did a fairly long video on the subject, explaining in detail what I did: https://www.youtube.com/watch?v=ihZWrJmbGFY&t=297s

Regards,
Csongor


Hi Csongor(@nygma2004),

Nice that you've joined the discussion. All ideas and proposals are welcome...

  • Until now I considered the Exec node a bit too generic for doing specific ffmpeg stuff, so above I only compared the existing Ffmpeg nodes. But I must admit that the code of node-red-contrib-visio-ffmpeg looks very similar to the Exec node, so I assume it was derived from it some time ago ...

    The ffmpeg node checks whether ffmpeg is installed and automatically adds 'ffmpeg' as a prefix to the command, but I think that's about it. And the Exec node can already be controlled via msg.kill, by sending it the different signals:

    exec_rtsp

    [{"id":"46999a1f.6657d4","type":"exec","z":"18e47039.88483","command":"ffmpeg -f rtsp -i \"rtsp://184.72.239.149/vod/mp4:BigBuckBunny_175k.mov\" -f image2pipe pipe:1","addpay":false,"append":"","useSpawn":"true","timer":"","oldrc":false,"name":"Decode RTSP stream","x":1580,"y":1120,"wires":[["47938011.f1a51"],[],[]]},{"id":"996a1c4.7436de","type":"inject","z":"18e47039.88483","name":"Start stream","topic":"","payload":"","payloadType":"date","repeat":"","crontab":"","once":false,"onceDelay":0.1,"x":1090,"y":1120,"wires":[["46999a1f.6657d4"]]},{"id":"47938011.f1a51","type":"image","z":"18e47039.88483","name":"","width":"200","x":1810,"y":1100,"wires":[]},{"id":"4cf9fc06.acfb84","type":"inject","z":"18e47039.88483","name":"Pause all streams","topic":"","payload":"SIGSTOP","payloadType":"str","repeat":"","crontab":"","once":false,"onceDelay":0.1,"x":1110,"y":1160,"wires":[["8e3a27b3.abf538"]]},{"id":"cf50c10.31f664","type":"inject","z":"18e47039.88483","name":"Resume all streams","topic":"","payload":"SIGCONT","payloadType":"str","repeat":"","crontab":"","once":false,"onceDelay":0.1,"x":1110,"y":1200,"wires":[["8e3a27b3.abf538"]]},{"id":"8e3a27b3.abf538","type":"change","z":"18e47039.88483","name":"","rules":[{"t":"set","p":"kill","pt":"msg","to":"payload","tot":"msg"}],"action":"","property":"","from":"","to":"","reg":false,"x":1330,"y":1160,"wires":[["46999a1f.6657d4"]]},{"id":"a359a2e3.a4f51","type":"inject","z":"18e47039.88483","name":"Stop all streams","topic":"","payload":"SIGTERM","payloadType":"str","repeat":"","crontab":"","once":false,"onceDelay":0.1,"x":1100,"y":1240,"wires":[["8e3a27b3.abf538"]]}]
    

    So I can put my pull request for the Ffmpeg node into the garbage bin. Thank you for that :wink:

  • Had never heard about avconv before. It seems that avconv was started in 2011 as a fork of ffmpeg, due to internal conflicts in the team. But some years later it was replaced by ffmpeg again in Debian. Anyway, I have tried both programs on the same rtsp stream, and in my test avconv was slower (20% more CPU usage).

    But perhaps my test is not fair, because I should add some extra rtsp stream command arguments (see the sketch after this list)?

    P.S. The spikes are not related to this test, since they already existed before the red arrow (where the stream was started): they are produced by other nodes in my Node-RED flow ...

  • When I use the two streaming parameters from your YouTube video, I only get 3 messages with buffers. And then the stream stops:

    [screenshot]

  • And I paid the Raspberry Pi Foundation for an MPEG hardware license, and now the hardware MPEG2 decoding is enabled:

    vcgencmd codec_enabled MPG2
    MPG2=enabled
    

    But I still get the same error in my output :worried:
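
P.S. Regarding the avconv comparison in the second bullet above: the extra rtsp arguments I had in mind would be something like this (a sketch, untested; no idea yet whether they change the numbers):

# Force TCP transport (to avoid UDP packet loss artifacts) and skip audio decoding
avconv -rtsp_transport tcp -i "rtsp://freja.hiof.no:1935/rtplive/_definst_/hessdalen03.stream" -an -f image2pipe pipe:1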

Bart


Hello Bart, so nice to be back, even if Christmas isn't really over yet. It is more relaxed now, relatives have left etc etc. And the dishwashing machine broke down, so we ordered a new one. Are they factory-programmed with a final destiny that strikes when they are needed the most???

Anyway, yes, those streams also work fine in Motion. The CPU load increases a lot, depending partly on the fps configured in Motion.

(I'm always running my browsers on my MacBook, never on the Pi)

Top on the RPi3 shows:

  • 100 fps (well covering what the camera stream can deliver): CPU load varies in the range 100-300% out of 400%
  • 5 fps (still good enough to detect movements): CPU load 100-200%

Reducing the resolution to very low did not really reduce the CPU load significantly: still around 100-200%, even at 352x288. It seems to be a pretty heavy task to decode/convert rtsp streams.

Even if the GPU in the RPi3 is ready for mpeg-4 decoding, I do not think Motion is utilizing it. When I view the video output (from Motion via http) in VLC, it shows that the video is converted from mpeg-4 to motion JPEG Video (MJPG). This explains the high CPU load.
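
For reference, the relevant knobs in my motion.conf are roughly the following (a sketch; option names as in the Motion documentation, values as in the tests above):

# Fragment of motion.conf used for the tests above
netcam_url rtsp://freja.hiof.no:1935/rtplive/_definst_/hessdalen03.stream
# "Maximum number of frames to be captured per second"
framerate 5
# reducing the resolution did not reduce the CPU load much
width 352
height 288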


Hey Walter (@krambriw), I'm also glad you are back :wink:

Do you mean that setting 100 fps is equal to allowing the maximum fps that the rtsp source can deliver?

When I add the fabulous (:sunglasses:) node-red-contrib-msg-speed and node-red-contrib-image-info nodes to my flow:

[{"id":"66574007.74b0c","type":"exec","z":"18e47039.88483","command":"ffmpeg -f rtsp -i \"rtsp://freja.hiof.no:1935/rtplive/_definst_/hessdalen03.stream\" -f image2pipe pipe:1","addpay":false,"append":"","useSpawn":"true","timer":"","oldrc":false,"name":"Decode RTSP stream","x":1560,"y":820,"wires":[["2943cc00.7822e4"],[],[]]},{"id":"7d3aea16.d99b84","type":"inject","z":"18e47039.88483","name":"Start stream","topic":"","payload":"","payloadType":"date","repeat":"","crontab":"","once":false,"onceDelay":0.1,"x":1090,"y":820,"wires":[["66574007.74b0c"]]},{"id":"d8cdbb87.676ab8","type":"image","z":"18e47039.88483","name":"","width":"200","x":2150,"y":840,"wires":[]},{"id":"e9cc38bc.4300b8","type":"inject","z":"18e47039.88483","name":"Pause all streams","topic":"","payload":"SIGSTOP","payloadType":"str","repeat":"","crontab":"","once":false,"onceDelay":0.1,"x":1110,"y":860,"wires":[["53b625e3.eee7bc"]]},{"id":"4bdb02b2.c73efc","type":"inject","z":"18e47039.88483","name":"Resume all streams","topic":"","payload":"SIGCONT","payloadType":"str","repeat":"","crontab":"","once":false,"onceDelay":0.1,"x":1110,"y":900,"wires":[["53b625e3.eee7bc"]]},{"id":"53b625e3.eee7bc","type":"change","z":"18e47039.88483","name":"","rules":[{"t":"set","p":"kill","pt":"msg","to":"payload","tot":"msg"}],"action":"","property":"","from":"","to":"","reg":false,"x":1330,"y":860,"wires":[["66574007.74b0c"]]},{"id":"c6743afd.82d568","type":"inject","z":"18e47039.88483","name":"Stop all streams","topic":"","payload":"SIGTERM","payloadType":"str","repeat":"","crontab":"","once":false,"onceDelay":0.1,"x":1100,"y":940,"wires":[["53b625e3.eee7bc"]]},{"id":"c12393d5.0a2b2","type":"msg-speed","z":"18e47039.88483","name":"","frequency":"sec","estimation":false,"ignore":false,"x":1950,"y":820,"wires":[["4aa0ee29.c9164"],["d8cdbb87.676ab8"]]},{"id":"4aa0ee29.c9164","type":"ui_chart","z":"18e47039.88483","name":"","group":"94d2ac46.aad5d","order":0,"width":0,"height":0,"label":"fps","chartType":"line","legend":"false","xformat":"HH:mm:ss","interpolate":"linear","nodata":"fps","dot":false,"ymin":"","ymax":"","removeOlder":1,"removeOlderPoints":"","removeOlderUnit":"3600","cutout":0,"useOneColor":false,"colors":["#1f77b4","#aec7e8","#ff7f0e","#2ca02c","#98df8a","#d62728","#ff9896","#9467bd","#c5b0d5"],"useOldStyle":false,"x":2130,"y":800,"wires":[[],[]]},{"id":"2943cc00.7822e4","type":"image-info","z":"18e47039.88483","name":"","x":1770,"y":820,"wires":[["c12393d5.0a2b2"]]},{"id":"94d2ac46.aad5d","type":"ui_group","z":"","name":"Linear gauge widget test","tab":"76c22f9b.11403","disp":true,"width":"6","collapse":false},{"id":"76c22f9b.11403","type":"ui_tab","z":"","name":"Custom UI","icon":"dashboard"}]

Then I can see that, in my test case, the rtsp stream gives:

  • Jpeg images of 1280x720 resolution
  • Frame rate is <= 16 fps

It isn't really clear to me anymore whether Motion is faster (compared to using Ffmpeg directly), or whether it is just a difference in resolution/fps/... ? But I have learned that the Exec node does the job pretty well.

Yes, that is the number of frames at which Motion limits its processing: "Maximum number of frames to be captured per second". If the camera stream delivers 60 fps, it would not be Motion that limits the processing. However, when I run this on the RPi3 I think I can see that there are other limitations that prevent the processing of 60 fps; it could be the RPi itself, or my network connection bandwidth. I only see that many frames are captured during a second, since the millisecond counter in the picture is updated very frequently within the same second.

Anyway, I will try your flow with the exec node, looks nice.

To take this further, what you really would like to do is classify objects, and you would most likely not push every single frame to be analyzed; it would overload the analyzer. This is where Motion comes in and shines and makes it interesting: frames are only sent on when motion is detected. So if you want to build complete video analysis functionality in NR, you need to think about how to filter out the frames that should be analyzed further.


Damn, I cannot get the hardware acceleration working on my Raspberry Pi 3.

  • Have bought a license key, installed it, and checked whether it is activated (see how to here). That is all OK.
  • I have built Ffmpeg with hardware acceleration enabled (see how here).
  • I have an extra '-vcodec h264_mmal' parameter in my ffmpeg command to tell it that the h264 video (that I receive via an rtsp stream) should be decoded in hardware (in the GPU):
 ffmpeg -vcodec h264_mmal -rtsp_transport tcp -i "rtsp://freja.hiof.no:1935/rtplive/_definst_/hessdalen03.stream" -f image2pipe pipe:1
  • At the start of the logging you can see that I use the correct ffmpeg (with hardware acceleration enabled):

    ffmpeg version N-92794-g7d5bb3a Copyright (c) 2000-2018 the FFmpeg developers
    built with gcc 4.9.2 (Raspbian 4.9.2-10)
    configuration: --arch=armel --target-os=linux --enable-gpl --enable-omx --enable-omx-rpi --enable-nonfree --enable-mmal
    libavutil 56. 25.100 / 56. 25.100
    libavcodec 58. 42.104 / 58. 42.104
    libavformat 58. 25.100 / 58. 25.100
    libavdevice 58. 6.101 / 58. 6.101
    libavfilter 7. 46.101 / 7. 46.101
    libswscale 5. 4.100 / 5. 4.100
    libswresample 3. 4.100 / 3. 4.100
    libpostproc 55. 4.100 / 55. 4.100
    Input #0, rtsp, from 'rtsp://freja.hiof.no:1935/rtplive/_definst_/hessdalen03.stream':
    Metadata:
    title : hessdalen03.stream
    Duration: N/A, start: 35792.473322, bitrate: N/A
    Stream #0:0: Video: h264 (High), yuv420p(progressive), 1280x720 [SAR 1:1 DAR 16:9], 59.94 fps, 59.94 tbr, 90k tbn, 119.88 tbc
    Stream mapping:
    Stream #0:0 -> #0:0 (h264 (h264_mmal) -> mjpeg (native))
    Press [q] to stop, [?] for help

  • But then I get an error:

    [h264_mmal @ 0x20db2b0] Did not get output frame from MMAL.
    Error while decoding stream #0:0: Unknown error occurred
    [h264_mmal @ 0x20db2b0] MMAL error 2 on control port

Have been searching across the globe, but cannot find a solution. Perhaps this forum is not the best place to ask this question, but hopefully somebody with ffmpeg knowledge is reading ...

Perhaps I misinterpret the logging, but it seems to me (h264 (h264_mmal) -> mjpeg (native)) that it converts h264 to mjpeg? But at the end I need (image2pipe) a jpeg image to be piped (via the stdout stream) to the Exec node. Could that be a problem, i.e. that it cannot convert the mjpeg to jpeg somehow?
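
For what it's worth, from what I have read the image2pipe muxer with the mjpeg encoder writes one complete JPEG per frame to stdout, so that mapping by itself may be normal. Making the output side explicit would give something like this (a sketch, untested):

# Decode h264 in the GPU, re-encode every frame as JPEG and pipe it to stdout
ffmpeg -vcodec h264_mmal -rtsp_transport tcp -i "rtsp://freja.hiof.no:1935/rtplive/_definst_/hessdalen03.stream" -c:v mjpeg -q:v 2 -f image2pipe pipe:1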

I don't know enough about ffmpeg to solve this on my own :woozy_face:

Try it reading from an mp4 file rather than the stream to see if it is a problem with the stream or a more fundamental issue.

Hey Colin (@Colin),

Thanks for the tip. Again a 'bit' closer to the truth ...

I have downloaded two Big Buck Bunny videos from their download site:

[screenshot]

  • The .mov file gives me the same error as with the RTSP stream, although it is h264 (as stated in the file name). So I assume the hardware decoder cannot handle my RTSP stream for a similar reason. But I still don't know why this h264 doesn't work ...

  • The .mp4 file works fine. I get a nice speed of on average 115 frames per second in two separate tests: one test with hardware acceleration (i.e. the parameter '-vcodec h264_mmal' added) and one without hardware acceleration (without that parameter). But I don't really see a difference in overall CPU performance between the two tests:

    Perhaps Ffmpeg always uses hardware acceleration by default. Or perhaps it never uses the hardware decoder, although I'm not sure whether my Raspberry could decode 115 fps (at 320x180 resolution) in software. I will have to do the same performance test on another Pi (on which I haven't paid for hardware acceleration) to find this out.

As far as I know you do not need to buy the mpeg2 license as long as you do not use mpeg2 videos. mpeg4 - which includes h264 - should work out of the box.
As I was playing around with videos on a Raspberry, I came to the conclusion that VLC does not use hardware acceleration but omxplayer does. It uses the OpenMAX library.
Your ffmpeg compile option "--enable-omx" may also point to OpenMAX.
But here my knowledge ends :frowning:
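
If you want to verify that the GPU decoding path works at all, independent of ffmpeg, you could play the same test file with omxplayer (a sketch; file path taken from the test above):

# omxplayer decodes h264 on the GPU via OpenMAX
omxplayer /home/pi/.node-red/BigBuckBunny_320x180.mp4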


Hi @Jaxn, I have tested the same flow on a Raspberry where I don't have a license key, and the performance is equal. So let's summarize:

  • Like you say, the license key is only required for MPEG2. I should have seen that when I ordered it from their site...

  • For playing h264 with hardware acceleration, ffmpeg needs to be built with the appropriate parameters.

  • All tutorials (like e.g. this one) say you need to specify -vcodec h264_mmal to instruct ffmpeg that it should use hardware acceleration. That is also explained in the official docs for external decoders. However, I don't see any difference in performance when I specify that parameter. And without that parameter I can also play the .mov file without problems ...

  • And then there is the "-hwaccel" parameter for internal decoders, but I don't get that one working for some reason (see the sketch below):

    ffmpeg -hwaccel -i "/home/pi/.node-red/BigBuckBunny_320x180.mp4" -f image2pipe pipe:1
    

    Results in "Option hwaccel (use HW accelerated decoding) cannot be applied to output url /home/pi/.node-red/BigBuckBunny_320x180.mp4 -- you are trying to apply an input option to an output file" ...
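
    Looking at that message, -hwaccel seems to be an input option that also expects a value, so it probably has to come before -i with a method name. A sketch of what I believe is the intended syntax (untested):

    # -hwaccel needs a value ("auto" lets ffmpeg pick) and must precede the input
    ffmpeg -hwaccel auto -i "/home/pi/.node-red/BigBuckBunny_320x180.mp4" -f image2pipe pipe:1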

So I don't know now if it is playing via hardware or software.

P.S. For those using VLC: via Tools --> Messages you can set the verbose level to 'debug', and then you can see whether avcodec (i.e. ffmpeg) underneath uses hardware acceleration. Example on my laptop:

[screenshot]

Replying to an old post in this thread, but I think it's relevant.

What rtsp sources are you capturing from?

With opencv rtsp stream capture (cv 3.3.0 and 3.4.2) I'm finding issues with many rtsp sources, with no difference between the cv versions (on three different Linux systems).

errors/warnings like:

Invalid UE golomb code
[h264 @ 0x30f6460] mb_type 43 in P slice too large at 37 68
[h264 @ 0x30f6460] error while decoding MB 37 68
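
One thing I still want to try for these: newer OpenCV builds are supposed to let you force TCP transport for the underlying FFmpeg rtsp capture via an environment variable (whether my 3.3.0/3.4.2 builds honor it, I haven't verified):

# Assumption: this OpenCV build's FFmpeg backend honors OPENCV_FFMPEG_CAPTURE_OPTIONS
export OPENCV_FFMPEG_CAPTURE_OPTIONS="rtsp_transport;tcp"
python3 rtsp_grab_test.py   # hypothetical test script using cv2.VideoCapture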

Running your command on one of my Onvif cameras on my i7 Ubuntu 16.04 desktop:

avconv -i rtsp://admin:admin@192.168.2.221/media/video1 -frames 1 -qscale 1 -f image2 /home/wally/image.jpg

ffmpeg version 2.8.15-0ubuntu0.16.04.1 Copyright (c) 2000-2018 the FFmpeg developers
built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.10) 20160609
configuration: --prefix=/usr --extra-version=0ubuntu0.16.04.1 --build-suffix=-ffmpeg --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --cc=cc --cxx=g++ --enable-gpl --enable-shared --disable-stripping --disable-decoder=libopenjpeg --disable-decoder=libschroedinger --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librtmp --enable-libschroedinger --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzvbi --enable-openal --enable-opengl --enable-x11grab --enable-libdc1394 --enable-libiec61883 --enable-libzmq --enable-frei0r --enable-libx264 --enable-libopencv
WARNING: library configuration mismatch
avcodec configuration: --prefix=/usr --extra-version=0ubuntu0.16.04.1 --build-suffix=-ffmpeg --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --cc=cc --cxx=g++ --enable-gpl --enable-shared --disable-stripping --disable-decoder=libopenjpeg --disable-decoder=libschroedinger --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librtmp --enable-libschroedinger --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzvbi --enable-openal --enable-opengl --enable-x11grab --enable-libdc1394 --enable-libiec61883 --enable-libzmq --enable-frei0r --enable-libx264 --enable-libopencv --enable-version3 --disable-doc --disable-programs --disable-avdevice --disable-avfilter --disable-avformat --disable-avresample --disable-postproc --disable-swscale --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libvo_aacenc --enable-libvo_amrwbenc
libavutil 54. 31.100 / 54. 31.100
libavcodec 56. 60.100 / 56. 60.100
libavformat 56. 40.101 / 56. 40.101
libavdevice 56. 4.100 / 56. 4.100
libavfilter 5. 40.101 / 5. 40.101
libavresample 2. 1. 0 / 2. 1. 0
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 2.101 / 1. 2.101
libpostproc 53. 3.100 / 53. 3.100
Invalid UE golomb code
Last message repeated 1 times
Input #0, rtsp, from 'rtsp://admin:admin@192.168.2.221/media/video1':
Metadata:
title : VCP IPC Realtime stream
Duration: N/A, start: 0.000000, bitrate: N/A
Stream #0:0: Video: h264 (High), yuv420p, 1280x720, 30 fps, 30 tbr, 90k tbn, 60 tbc
Stream #0:1: Data: none
Please use -q:a or -q:v, -qscale is ambiguous
[swscaler @ 0x17c7980] deprecated pixel format used, make sure you did set range correctly
Output #0, image2, to '/home/wally/image.jpg':
Metadata:
title : VCP IPC Realtime stream
encoder : Lavf56.40.101
Stream #0:0: Video: mjpeg, yuvj420p(pc), 1280x720, q=2-31, 200 kb/s, 30 fps, 30 tbn, 30 tbc
Metadata:
encoder : Lavc56.60.100 mjpeg
Stream mapping:
Stream #0:0 -> #0:0 (h264 (native) -> mjpeg (native))
Press [q] to stop, [?] for help
Invalid UE golomb code
frame= 1 fps=0.0 q=1.0 Lsize=N/A time=00:00:00.03 bitrate=N/A dup=1 drop=1
video:237kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown

But it did capture an image.

To capture "snapshots" for my python AI, I'm finding that directly grabbing snapshots from the Onvif "snapshot http URL" works best for Onvif cameras.

For my Lorex DVR I'm finding pulling frames with openCV from rtsp streams works great, although the latency is several seconds. A net DVR I tried (Geovision SNVR0811), grabbing from its rtsp URL (rtsp://admin:admin@192.168.2.222:554/ch1), worked great until it detected motion; then it locked up my code with one of those h264 errors. Using the camera directly (rtsp://admin:admin@192.168.2.221/media/video1) doesn't lock up my code, but the latency is horrible and some images have mosaic breakups, especially with a lot of motion in the scene.

Right now I have three "front ends" for my AI: ftp snapshots from a camera or DVR, image grabs from rtsp streams with openCV, and http access from the Onvif snapshot URL (http://192.168.2.221:85/images/snapshot.jpg).
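
For the Onvif snapshot front end, a quick shell sanity check of a camera is just (URL from my setup above):

# Grab one JPEG from the camera's http snapshot URL
curl -o snapshot.jpg "http://192.168.2.221:85/images/snapshot.jpg"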

Which method works "best" seems to depend on the camera source and trying seems the only way to know :frowning:

I'm going through this thread again as I'm getting serious about getting some kind of web interface for my AI. The dashboard is working fine for mode setting etc., and for monitoring the cameras for setup (priority) or viewing (mostly useful for demos).

Playing your public rtsp stream in my opencv code I get ~25 fps, but is there a link with a more reasonable resolution, like D1?

When I imported your flow into node-red it worked fine after I installed the image-output node. Is there some way to get the image-output node to display on a different page or is it confined to displaying in the node-red editor?

Pasting in the URL for one of my Lorex DVR channels it showed the same "image caching" issue I mentioned in this thread: dashboard caching

It seems to choke on the HD image although I can see flashes with smooth movement as a car happened to be driving by. This may be that UDP buffer size issue that has been mentioned earlier. I need to investigate this image-output node more thoroughly.

Changing the rtsp URL to use a substream from my Lorex DVR, it plays fine but is still "blocky" compared to when viewed in my openCV test code. I think it's like 352x240. The Lorex streams this to my phone fine with their "app", but its resolution is not good enough to focus the cameras.

Nope, it is purely designed to display in the Node-RED flow editor. It just adds some svg content to the svg of the flow editor.

I'm going to create something similar for the dashboard, to have some PTZ controls on top of the camera view. I started to create a repository already last weekend, but it contains nothing useful yet!!!! Just need some free time to spend on it :woozy_face:

Correct me if I'm mistaken, but it seems to me that this command writes all images to a temporary file?? If so, I would suggest using the stdout stream (via image2pipe) to keep everything in memory only.
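
Something along these lines, based on the avconv command you posted earlier (a sketch, untested):

# Same single-frame grab, but written to stdout instead of a temporary file
avconv -i rtsp://admin:admin@192.168.2.221/media/video1 -frames 1 -qscale 1 -f image2pipe pipe:1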

If you look at the code of the image-output node, you will see that it uses RED.comms.publish to push the image via a websocket channel to the flow editor. I have very bad experiences with pushing lots of data through websockets. Never had a look at that UDP buffer size, but if you can find a solution, please be my guest !!!

I try to avoid pushing image streams over websockets, since that freezes my user interface entirely. I have described this here, and in the section after that you can see how I avoid websockets by using my node-red-contrib-multipart-stream-encoder node ...

Is there any reason why you are using OpenCv to stream RTSP? From this explanation, I understand that OpenCv uses FFmpeg underneath to accomplish that. I would expect it to be faster to call FFmpeg directly from Node-RED, but don't hesitate to correct me if I'm wrong!

The reason here is simple. It's done in the Python code that sends the images to the AI, which uses the Movidius NCS python API, so using opencv-python features is natural. For the ftp front end, node-red implements the server and passes the file as a buffer to the Python code via MQTT. For the http jpeg snapshots I use the Python requests module.

I'm willing to bet there is not a "simpler" way to view multiple rtsp streams than this openCV Python snippet:

import cv2
from imutils.video import FPS  # assumption: the fps counter used below is imutils' FPS helper

rtspURL = ['rtsp://admin:admin@192.168.2.221/media/video1']  # fill in your own stream URLs
Nrtsp = len(rtspURL)

windowName = list()
for i in range(Nrtsp):
    windowName.append('rtsp_' + str(i))
    # flags removes toolbar and status bar from window; this will fail for CV without QT for highgui
    cv2.namedWindow(windowName[i], flags=cv2.WINDOW_GUI_NORMAL + cv2.WINDOW_AUTOSIZE)

Rcap = list()
for i in range(Nrtsp):
    Rcap.append(cv2.VideoCapture(rtspURL[i]))
    # doesn't throw error or warning in python3, but not sure it is actually honored
    Rcap[i].set(cv2.CAP_PROP_BUFFERSIZE, 2)

fps = FPS().start()
while(1):
    try:
        for i in range(Nrtsp):
            ret, frame = Rcap[i].read()
            ##cv2.imshow(windowName[i], cv2.resize(frame, (640, 360)))
            cv2.imshow(windowName[i], frame)
            fps.update()  # update the FPS counter
        key = cv2.waitKey(1) & 0xFF
        if key == ord("q"):  # if the q key was pressed, break from the loop
            break
    except KeyboardInterrupt:
        break
    except Exception as e:
        print('EXCEPTION! ' + str(e))
        break

fps.stop()
for cap in Rcap:
    cap.release()
cv2.destroyAllWindows()

It handles 15 1920x1080 rtsp streams, resized to 640x360 for live viewing, on my i7 desktop while I'm doing everything normal -- email, about 30 browser tabs open on six virtual desktops, Pandora, etc.

As to the avconv command, I just copied it from the much earlier post in this thread and changed the URL to one of my 720p Onvif cameras, to illustrate there be daemons in rtsp-land. No problems on one system with one (brand of) camera is a long way from being a general solution.

VLC is the closest we have to a general solution -- it's about the only "3rd party" rtsp viewer most of the network DVR vendors will offer any support for, but most use some lame unsigned ActiveX control, limiting you to using Internet Explorer for their web viewers. I'd love to find a way to use the "guts" of VLC, but last time I looked they made ffmpeg look like the king of clear documentation!

The speed looks impressive ...

It's even more so when you see it in person, with all the other activity on the system. It's a second-from-the-top-of-the-line i7 computer purchased when I retired in July 2013, maxed out with 64GB RAM, along with the Lorex DVR system.

I naively expected to get "motion images" from the DVR and "AND" them with PIR motion sensor outputs to reduce the false alarm rate. Not even close, so I started looking into openCV face-detection stuff: Haar cascades, HOG, etc. All failed badly on security camera images and found false faces in the shadows and bushes.

Then I discovered AI. Darknet YOLO was great but took 18 seconds to process a frame on this i7 (I don't have CUDA installed, which it highly recommends). Eventually this led me to the Movidius NCS and MobileNet-SSD AI, which has worked very well.