Display IP camera in dashboard and store video on Raspberry Pi

I forgot to ask, or maybe I overlooked it, but what platform are you on? I have been experimenting with a Raspberry Pi 4 Model B Rev 1.1 4GB to see if it can handle my cctv system, and so far the results are very good. The tricky part was taking advantage of hardware decoding. I have a gist that you can look at.

If your issue is cpu load, based on the posted gif, then hardware decoding should help, since the jpeg is being created from h264 video. But of course, that is all platform dependent.


(Unfortunately) I'm running Node-RED on a Raspberry Pi 3B with only 1GB of RAM.

I don't think this is due to a lack of CPU (I measured 36% CPU at 1080p and 15% at 640x360), because when I analyze the images:

A JPEG starts with the signature FF D8 FF ... and ends with FF D9.
The first image of a corrupted stream, generated by the flow's decoder, starts correctly with FF D8 FF but does not end with FF D9.
It would seem that it is cut off at the maximum size of the buffer, which would be 65536 bytes.
Not having found FF D9, we end up with an "Unknown image format".


This is a corrupted picture with a gray band = 65536 bytes


This is a good picture, 65387 bytes (< 65536 bytes)
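
For anyone who wants to reproduce this check, here is a minimal sketch (plain Node.js, nothing specific to any node mentioned here) that tests a buffer for the start and end markers described above:

// Sketch: check whether a Buffer holds a complete JPEG,
// i.e. starts with the SOI marker FF D8 FF and ends with the EOI marker FF D9.
function isCompleteJpeg(buf) {
  return buf.length > 4 &&
    buf[0] === 0xff && buf[1] === 0xd8 && buf[2] === 0xff &&      // SOI: FF D8 FF
    buf[buf.length - 2] === 0xff && buf[buf.length - 1] === 0xd9; // EOI: FF D9
}

// A 65536-byte chunk cut off by the pipe limit fails this test,
// while the 65387-byte image above passes it.
console.log(isCompleteJpeg(Buffer.from([0xff, 0xd8, 0xff, 0xe0, 0x10, 0x4a]))); // false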

That is easy enough to solve. I ran into that problem some years back. It was worse on Mac, whose pipe buffer size is 8192, while Linux is 65536 or ~32000, and Windows is somewhere near ~90000, if I remember correctly. Apple's claim was that a smaller pipe size was more efficient, but I found it to be a huge pain when piping data into Node.js from external processes.

The data containing jpegs coming out of ffmpeg will first have to be piped into some parser that can hold the data until the entire jpeg has been pushed in. At that point, the jpeg buffer can be larger than the system pipe limit, and there is no size limit when passing it on for further processing within Node.js.

In the gist, I show the usage of pipe2jpeg, which will do what you need. You can get the complete jpeg by listening to its "jpeg" event.
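
For anyone curious what that looks like outside of Node-RED, here is a rough sketch (reusing the rtsp url and filter settings already posted in this thread; option names may differ between pipe2jpeg versions):

// Sketch: reassemble complete jpegs from ffmpeg's stdout using pipe2jpeg.
const { spawn } = require('child_process');
const Pipe2Jpeg = require('pipe2jpeg');

const p2j = new Pipe2Jpeg();

p2j.on('jpeg', (jpeg) => {
  // jpeg is a complete image Buffer, no matter how many pipe chunks it arrived in
  console.log('complete jpeg received:', jpeg.length, 'bytes');
});

const ffmpeg = spawn('ffmpeg', [
  '-rtsp_transport', 'tcp',
  '-i', 'rtsp://192.168.1.7:554/user=xxxxxxx_password=xxxxxx_channel=0_stream=1.sdp?real_stream',
  '-filter:v', 'fps=fps=5',
  '-f', 'image2pipe',
  'pipe:1'
], { stdio: ['ignore', 'pipe', 'inherit'] });

ffmpeg.stdout.pipe(p2j);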


I have just set gpu_mem=512 in /boot/config.txt, but obviously this is not enough to display full HD correctly with this command: ffmpeg -rtsp_transport tcp -i "rtsp://192.168.1.7:554/user=xxxxxxx_password=xxxxxx_channel=0_stream=1.sdp?real_stream" -filter:v fps=fps=5 -f image2pipe pipe:1.

I tried to change:

But without success either ...

My turn to be your student: I sent you a PM to ask whether you are making progress with Node-RED, and to try to understand how to integrate
raspberry_pi_4_model_b_rev_1_1_4gb_ffmpeg.md
into Node-RED. For beginners like me, it could be useful to many of us. (If someone else has found the solution, I'm interested :wink:)

Sorry, the gpu_mem=512 increase was only needed for gpu hardware decoding using the h264_mmal codec for rtsp ip cam video. With the default value, only 2 of my 14 ffmpeg commands could succeed without running out of gpu memory.

The scaling filter -vf scale=iw*0.5:-1 (input width x 1/2, height auto scaled) would be an alternative way of solving the jpeg tearing situation, by decreasing the dimensions of the jpeg. Also, you could lower the quality with -q 31 to get a jpeg small enough to be piped in a single piece.
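
As a sketch only (reusing the rtsp url posted earlier, not a tested command), those two options dropped into the existing arguments would look something like:

// Hypothetical variation of the ffmpeg arguments from this thread:
// half-width output and lowest jpeg quality, to keep each image
// below the 65536-byte pipe limit.
const ffmpegArgs = [
  '-rtsp_transport', 'tcp',
  '-i', 'rtsp://192.168.1.7:554/user=xxxxxxx_password=xxxxxx_channel=0_stream=1.sdp?real_stream',
  '-vf', 'fps=fps=5,scale=iw*0.5:-1',
  '-q', '31', // 31 = worst quality, smallest jpeg
  '-f', 'image2pipe',
  'pipe:1'
];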

But, you probably want to keep the quality good and the dimensions big, so for that we have to buffer the pieces of jpeg output from ffmpeg and put them back together before sending it to your gui.

I responded to the PM. Luckily, node-red came preinstalled on my pi, so I am ready for my private tutorial.

I am almost ready to publish a node-red-contrib that may solve the broken jpeg trouble. Notice the jpeg chunks rate vs whole jpegs rate and the image info size. In the real world, I would never output jpegs with a size and fps that big. Just for illustration purposes.

npm install node-red-contrib-pipe2jpeg

Wow, I'm impressed with your speed. :star_struck:
Thank you for this contribution, which I will test soon. :+1:

@kevinGodell you are my (our) hero :star_struck:. It is the first time that I have managed to view the video from one of my cameras in 1080p without torn images.

Only 2 warnings during npm install:
npm WARN @jimp/plugin-threshold@0.14.0 requires a peer of @jimp/plugin-color@>=0.8.0 but none is installed. You must install peer dependencies yourself.
And
npm WARN @jimp/plugin-threshold@0.14.0 requires a peer of @jimp/plugin-resize@>=0.8.0 but none is installed. You must install peer dependencies yourself.

With @BartButenaers' command ffmpeg -rtsp_transport tcp -i "rtsp://192.168.1.7:554/user=xxxxxxx_password=xxxxxx_channel=0_stream=0.sdp?real_stream" -filter:v fps=fps=5 -f image2pipe pipe:1 and your pipe2jpeg node, it works like clockwork.

Besides, if you see an improvement to this command line, don't hesitate to let us know, now that you know how to use Node-RED :wink:


Hey Kevin,
It is VERY kind of you to help the people in this community who are trying to implement video surveillance in Node-RED!!! I had never expected a custom node, to be honest, and certainly not that fast...
I'm in the middle of another node development, but I will join this discussion as soon as I have free time again...


I am not sure about those npm warnings, as my node does not use those packages as dependencies.

For the ffmpeg command line, I have much to say, but it depends on your situation.

If you only have a single cam or a very robust system, then you probably don't have to worry about resource usage such as cpu and ram. For me, I have 14 rtsp cams that I plan on running on a pi 4. I have already done this without node-red, but now I would like to tinker and publish some more advanced nodes for video playback, motion detection, etc, to make a fairly complete cctv system running on node-red. So, for me, I need to be efficient in my system.

If your system supports hwaccel (ffmpeg -hwaccels), I would take advantage of that for GPU decoding of the h264 (rtsp) video so that the CPU only has to encode the jpeg. For me, that cut the CPU load by a measurable amount for a single ffmpeg command. If you don't need the full-size image or high fps, then I would decrease them by changing the vf to -vf fps=fps=2,scale=iw/2:-1 (input width divided in half, height auto), or just use the sub stream of the ip cam as the input, with the source set to a smaller size and framerate.
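
As a sketch, assuming an ffmpeg build that includes the h264_mmal decoder (and enough gpu_mem, as discussed above), the decoder goes before the input and the filter shrinks the output:

// Sketch only: hardware (mmal) decode of the rtsp h264, with reduced fps and size.
const ffmpegArgs = [
  '-rtsp_transport', 'tcp',
  '-c:v', 'h264_mmal', // gpu decoder, placed before -i so it applies to the input
  '-i', 'rtsp://192.168.1.7:554/user=xxxxxxx_password=xxxxxx_channel=0_stream=1.sdp?real_stream',
  '-vf', 'fps=fps=2,scale=iw/2:-1',
  '-f', 'image2pipe',
  'pipe:1'
];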

I understand that we are trying to simulate video using jpegs. That much encoding, with large file sizes and high fps, is not practical for a pi. A better solution would be to stream copy the h264 video from rtsp to mp4 and play it in a MediaSource Extensions video player (without writing files to the disk).
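
To give an idea of the stream-copy approach (a sketch, not something published here; the movflags shown are the usual ones for MediaSource-friendly fragmented mp4):

// Sketch only: copy the h264 from rtsp into fragmented mp4 on stdout,
// so nothing is re-encoded and nothing is written to disk.
const ffmpegArgs = [
  '-rtsp_transport', 'tcp',
  '-i', 'rtsp://192.168.1.7:554/user=xxxxxxx_password=xxxxxx_channel=0_stream=0.sdp?real_stream',
  '-an',          // ignore audio for simplicity
  '-c:v', 'copy', // no transcoding, the cpu stays idle
  '-f', 'mp4',
  '-movflags', '+frag_keyframe+empty_moov+default_base_moof',
  'pipe:1'
];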


Do you have an old version of node-red-contrib-image-tools installed?

If so, update it. The dependencies for jimp were updated to v0.16.0 some time ago.

Or perhaps another node you have installed uses jimp? (I think image-viewer uses a clone of jimp, not the main jimp repo.)

Ps, it's not gonna prevent things from working.

We will leave that aside then, since it works as it is.

That's exactly what I would like (and probably many others too). Here is an example of my dashboard:


It is a global view of all the sensors in the house (water heater, swimming pool, etc.).

On the left you can see that a person has been detected at the gate (object detection project here).
And on the right, the stream from the gate camera. (The cameras are under tabs that you can expand, which automatically launches the rtsp stream.)

Fortunately, I chose the camera's stream=1 to reduce the processor load. For example, if I run 4 cameras at the same time I get 25% CPU load (RPi 3B).

If I click on a camera, I switch to stream=0 (1080p) on a full page (all other streams are stopped), and the CPU is at 40%.

Not bad: it takes me from 40% to 25% CPU while losing a bit of quality (960x540 px).

I failed to get it to work by adding it to my current command line:

  • the pipe2jpeg node reports: "pipe2jpeg: input must be a buffer"
  • if I remove pipe2jpeg: "unrecognized image format"

Are you thinking of doing it? It would be a tremendous step forward.

Sorry, the command ffmpeg -hwaccels was just for you to see what is available. It should not be added to your exec command. Run it on a command line and ffmpeg will list the available hardware acceleration methods.

Nothing to upgrade inside the Palette manager

Probably one of these?

I agree, I will not venture to look any further.

Oh yes!
This is what the RPi 3B can do:

ffmpeg version 3.2.15-0+deb9u1 Copyright (c) 2000-2020 the FFmpeg developers
  built with gcc 6.3.0 (Raspbian 6.3.0-18+rpi1+deb9u1) 20170516
  configuration: --prefix=/usr --extra-version=0+deb9u1 --toolchain=hardened --libdir=/usr/lib/arm-linux-gnueabihf --incdir=/usr/include/arm-linux-gnueabihf --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libebur128 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
  libavutil      55. 34.101 / 55. 34.101
  libavcodec     57. 64.101 / 57. 64.101
  libavformat    57. 56.101 / 57. 56.101
  libavdevice    57.  1.100 / 57.  1.100
  libavfilter     6. 65.100 /  6. 65.100
  libavresample   3.  1.  0 /  3.  1.  0
  libswscale      4.  2.100 /  4.  2.100
  libswresample   2.  3.100 /  2.  3.100
  libpostproc    54.  1.100 / 54.  1.100
Hardware acceleration methods:
vdpau
vaapi

On my Pi 4 (after being updated):

pi@raspberrypi:~ $ ffmpeg -version
ffmpeg version 4.1.6-1~deb10u1+rpt1 Copyright (c) 2000-2020 the FFmpeg developers
built with gcc 8 (Raspbian 8.3.0-6+rpi1)
configuration: --prefix=/usr --extra-version='1~deb10u1+rpt1' --toolchain=hardened --incdir=/usr/include/arm-linux-gnueabihf --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-omx-rpi --enable-mmal --enable-neon --enable-rpi --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared --libdir=/usr/lib/arm-linux-gnueabihf --cpu=arm1176jzf-s --arch=arm
libavutil      56. 22.100 / 56. 22.100
libavcodec     58. 35.100 / 58. 35.100
libavformat    58. 20.100 / 58. 20.100
libavdevice    58.  5.100 / 58.  5.100
libavfilter     7. 40.101 /  7. 40.101
libavresample   4.  0.  0 /  4.  0.  0
libswscale      5.  3.100 /  5.  3.100
libswresample   3.  3.100 /  3.  3.100
libpostproc    55.  3.100 / 55.  3.100
pi@raspberrypi:~ $ ffmpeg -hide_banner -hwaccels
Hardware acceleration methods:
vdpau
vaapi
drm
rpi
rpi
pi@raspberrypi:~ $ ffmpeg -hide_banner -decoders|grep h264
 VFS..D h264                 H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
 V..... h264_v4l2m2m         V4L2 mem2mem H.264 decoder wrapper (codec h264)
 V..... h264_mmal            h264 (mmal) (codec h264)

I wonder if your pi needs an update or simply doesn't support the features. If I remember correctly, I think my hardware accel did not work until I updated.

Again, on my RPi 3B, ffmpeg version 3.2.12:

pi@pi:~ $ ffmpeg -hide_banner -decoders|grep h264
 VFS..D h264                 H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
 V....D h264_vdpau           H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 (VDPAU acceleration) (codec h264)

I don't have all the decoders that you have... do you know the command line to update ffmpeg?

Hi, the warnings are possibly coming from the green "image" node as it uses a package "jimp-compact" rather than the actual jimp package itself. The jimp-compact package was only very recently updated for the first time in 1 year (so the dependency on your installation might be outdated).



Here is some detail about the "duplicate functionality" of having all these image nodes installed (if interested or bored).

Essentially, the pink viewer node in "image tools" and the green image node are almost identical (except for the underlying jimp package)...

  • The pink image-tools "Viewer" node uses the original jimp package (currently up-to-date at v0.16.0)
  • The green "image" node (node-red-contrib-image-output) uses something called jimp-compact which until 17 days ago was 4 versions / 1 year out of date.
    • As I remember it, jimp-compact was chosen by the devs as it has a smaller footprint than its source project "jimp" - however I chose to keep "image-tools" on the original "jimp" package as I feared this kind of thing would happen with jimp-compact.

So, if you were to uninstall / re-install node-red-contrib-image-output, you should get the latest jimp-compact dependency installed. Perhaps npm outdated would even be enough (not certain).

de-duplication 1...

If you use image-tools for its image processing capabilities & wish to keep it installed, then there is really no need to also install node-red-contrib-image-output as the node-red-contrib-image-tools comes with its own viewer.

Conversely, if you are not using any of the image processing capabilities image-tools provides, you can just uninstall that node & use the green output "image" node from node-red-contrib-image-output.

de-duplication 2...

Regarding node-red-contrib-image-info - again, if you are keeping the image-tools nodes, node-red-contrib-image-info is not really needed since the "image-tools" image node reports image size (and a whole lot more). Example...


Is this a recommended way of doing it?