How to use external NVR (Frigate, BlueIris, ...) with Node-RED

Yes, I assume you are right that you manually need to add cameras to the yaml file. I didn't find anything about adding/configuring cameras in their (new) user interface.

Seems there is also some kind of roadmap, but that only contains an item about removing/renaming cameras (not about adding a new one):

It is also not included in their upcoming 0.15 release. But I really have to admit that the new features in their release notes are next-level stuff (for me at least). I have never played with NVRs, but things like lifecycle management of detected objects look really cool.

I might become a fan of Frigate...

What a coincidence. Somebody else asked the Frigate team the same question today. So no config management via the UI is available or planned.

I am having trouble using my Coral TPU USB stick. Weird, because when I used it in the past - while experimenting with my own JS tensor-based code - it worked fine.

I moved the Frigate object detection (TensorFlow) calculations from my CPU to the Google Coral TPU USB stick:

detectors:
  #cpu1:
  #  type: cpu
  #  num_threads: 3
  coral:
    type: edgetpu
    device: usb:1

But now I get this error when starting up the Frigate Docker container:

Failed to load delegate from libedgetpu.so.1.0

My Coral TPU USB stick is detected as device 004 on my Raspberry:

$ lsusb
Bus 002 Device 003: ID 04e8:61f5 Samsung Electronics Co., Ltd Portable SSD T5
Bus 002 Device 004: ID 18d1:9302 Google Inc.
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 002: ID 2109:3431 VIA Labs, Inc. Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

And it can also be detected in Docker, although for some reason it shows no name for device 004:

$ sudo docker run --rm --privileged -v /dev/bus/usb:/dev/bus/usb busybox lsusb
Bus 002 Device 004: ID 18d1:9302
Bus 001 Device 001: ID 1d6b:0002 Linux 6.6.51+rpt-rpi-v8 xhci-hcd xHCI Host Controller
Bus 001 Device 002: ID 2109:3431 USB2.0 Hub
Bus 002 Device 003: ID 04e8:61f5 Samsung Portable SSD T5
Bus 002 Device 001: ID 1d6b:0003 Linux 6.6.51+rpt-rpi-v8 xhci-hcd xHCI Host Controller

And I pass all the USB devices to the Frigate container in my Docker compose yaml file:

services:
  frigate:
    container_name: frigate
    privileged: true
    image: ghcr.io/blakeblackshear/frigate:stable
    devices:
      - /dev/bus/usb:/dev/bus/usb # Passes all the USB devices

The privileged flag even turns protection mode off (which isn't very secure), to allow the container to access all devices on the host. But even that doesn't help.

I have unplugged and replugged the USB stick, but now the "Google Inc" description is replaced by something else:

$ lsusb
Bus 002 Device 003: ID 04e8:61f5 Samsung Electronics Co., Ltd Portable SSD T5
Bus 002 Device 005: ID 1a6e:089a Global Unichip Corp.
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 002: ID 2109:3431 VIA Labs, Inc. Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

In other discussions (e.g. here) this problem seems to be linked to power issues. It seems the Coral TPU stick consumes much more current than the Raspberry Pi 4 can offer. But then again, I read all over the place that most USB hubs are not usable because their chipset does "backpowering". If I understand correctly, that would cause the Raspberry not to reboot automatically after a power-down.

I have completely run out of creativity :frowning_face:
What a waste of time...
Does anybody have some tips?

Thanks!!

For what it's worth, Bart, your efforts here are NOT totally a waste of time because I want to use Frigate at some point and this topic will be my guide for setting it up. I'm sorry you're frustrated :frowning_face:

2 Likes

Have you tried it with just usb (without the :1)? At least, that's what I recall from when I tested it some time ago. Power hasn't been an issue with a standard power supply here.
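
For reference, that would make the detectors section look like this (a minimal sketch based on the config posted above):

detectors:
  coral:
    type: edgetpu
    device: usb  # no ":1" suffix: Frigate grabs the first available Coral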

1 Like

@jodelkoenig,
Good catch!!! Completely overlooked that one...

Well, when I removed the :1 it was quite promising, because previously I immediately got an error:

frigate  | 2024-12-29 20:12:06.682764206  [2024-12-29 20:12:06] detector.coral                 INFO    : Starting detection process: 405
frigate  | 2024-12-29 20:12:06.702026473  [2024-12-29 20:12:06] frigate.app                    INFO    : Output process started: 407
frigate  | 2024-12-29 20:12:06.751593557  [2024-12-29 20:12:06] frigate.detectors.plugins.edgetpu_tfl INFO    : Attempting to load TPU as usb:1
frigate  | 2024-12-29 20:12:06.757227053  Process detector:coral:
frigate  | 2024-12-29 20:12:06.757244349  [2024-12-29 20:12:06] frigate.detectors.plugins.edgetpu_tfl ERROR   : No EdgeTPU was detected. 

But now it went a little bit further already:

frigate  | 2024-12-29 20:34:00.804237542  [2024-12-29 20:34:00] frigate.detectors.plugins.edgetpu_tfl INFO    : Attempting to load TPU as usb
frigate  | 2024-12-29 20:34:00.810233327  [2024-12-29 20:34:00] frigate.app                    INFO    : Output process started: 406
frigate  | 2024-12-29 20:34:00.867921088  [2024-12-29 20:34:00] frigate.app                    INFO    : Camera processor started for deurbel_voordeur: 423
frigate  | 2024-12-29 20:34:00.900868621  [2024-12-29 20:34:00] frigate.app                    INFO    : Capture process started for deurbel_voordeur: 430
frigate  | 2024-12-29 20:34:03.895203505  [INFO] Starting go2rtc healthcheck service...

And I got some hope that the problem was solved, but half a minute later it failed with the same error:

frigate  | 2024-12-29 20:34:29.225102275  ValueError: Failed to load delegate from libedgetpu.so.1.0

So it looks like it was working fine at the start, but I might be mistaken. I am wondering whether the Coral TPU USB stick has been doing object detections in between, and whether it then failed due to e.g. a power dip.

BTW this seems to be normal (see the Coral documentation):

  1. At the start the device shows as "Global Unichip Corp.", when the internal driver has not yet been loaded.
  2. When it afterwards shows as "Google Inc.", the internal driver has been loaded, e.g. after I have started the Frigate container.
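
A quick way to check which state the stick is in (using the two IDs from the lsusb output above):

$ lsusb | grep -E '1a6e:089a|18d1:9302'

If it prints the 1a6e:089a line the driver has not loaded yet; 18d1:9302 means it has.
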
1 Like

In situations like this I have come to love dozzle :slight_smile: ... a very lightweight, web-based log viewer for Docker containers. I put it on all my machines that run Docker; convenience matters :smiley:

docker-compose.yml:

services:

  dozzle:
    container_name: dozzle
    image: amir20/dozzle:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 8080:8080
    restart: always

source: https://dozzle.dev
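
With the port mapping above, the web UI is then reachable at http://<docker-host>:8080 (adjust the left-hand side of the ports mapping if 8080 is already taken).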

@jodelkoenig,
You are my absolute hero for today :star_struck:
I restarted my Frigate container again, and now it looks like it keeps running.

There is a nice debug view where I can see the detected objects:


Of course it is full of false detections, but I haven't configured anything yet, so that is normal.

What is nice to see on their metrics page is that the Coral TPU stick has a lot of resources left:

Which is again pretty normal, because at the moment I am only using 3 fps for a single camera for my object detection. But it is always nice to see metrics...

And yet more metrics. I really like those, because they make the approach a bit more scientific. Before, I had to make assumptions all the time; instead I prefer to see metrics, so that you have some history. And indeed we can see how many objects have been detected over time, and the number of frames per second (fps):
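
BTW if you prefer raw numbers over charts, Frigate also exposes these statistics via its HTTP API, something like this (hedged from memory; check the Frigate API docs for the exact endpoint and fields):

$ curl http://<frigate_host>:5000/api/stats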

But what really bothers me is the high CPU usage of ffmpeg. I am only capturing the stream from a single camera, and it is only a Raspberry Pi 4, but the load is huge.

However, that might be because I use the main RTSP stream from my camera, which has the highest quality (high resolution, high fps, ...). I assume I need to use the low-quality substream from my cam instead, because otherwise everything will be highly CPU intensive:

  1. extracting images from the stream
  2. decoding those images
  3. resizing the images (because the TFLite models that run on a Coral TPU stick require small square images, e.g. lots of models have been trained on 300x300 images).
2 Likes

That looks really promising. I only tested Frigate for a couple of hours while I had my hands on a Coral. I shouldn't have given up that fast.

Detection requires a Coral or something similar, and ffmpeg should benefit from a graphics card, I assume? I always thought that this would be too much for a Raspberry. Since I run everything else on VMs, I didn't consider following up on this.

Anyway, I am currently really tempted to go for the new "Jetson Orin Nano Super Developer Kit" and test out some random AI stuff. And Frigate would be the most concrete use case ... hmm ...

1 Like

When you look at e.g. a Reolink video doorbell, there is quite a large quality difference between the high-quality main stream and the low-quality substream:

When I switch my detection rtsp stream to the substream:

cameras:
  doorbell_frontdoor:
    enabled: true
    ffmpeg:
      inputs:
        - path: rtsp://username:password@doorbell_ip_address:554/h264Preview_01_sub
          roles:
            - detect

Then the CPU usage of my RPi 4 drops (roughly estimated) from 98% to between 9% and 15%. Of course afterwards I will also need to capture the main stream for recording, so that will again consume extra resources.
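
For completeness, a sketch of what that dual-stream setup could look like once recording is added (the main-stream path is an assumption based on Reolink's usual URL scheme):

cameras:
  doorbell_frontdoor:
    enabled: true
    ffmpeg:
      inputs:
        # low-quality substream: only used for object detection
        - path: rtsp://username:password@doorbell_ip_address:554/h264Preview_01_sub
          roles:
            - detect
        # high-quality main stream: only used for recording
        - path: rtsp://username:password@doorbell_ip_address:554/h264Preview_01_main
          roles:
            - record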

1 Like

Ok I gained a bit more by playing with the I-frame interval.

The video images in an RTSP stream are compressed (H.264), i.e. only the delta between images is transferred (to reduce network bandwidth). However, every N images a full image is transferred (on which the deltas can be applied). Such images are called keyframes or I-frames. This is explained in the following picture, which I copied from this article:

When we extract images from our RTSP stream, it should be less CPU intensive to extract only those keyframes, because they are already full images that require no extra calculations. So optimally, the number of images per second used for object detection should be equal to the number of keyframes per second.
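
As an illustration outside of Frigate, plain ffmpeg can do this keyframe-only extraction too (a sketch, reusing the substream URL from above):

$ ffmpeg -skip_frame nokey -i "rtsp://username:password@doorbell_ip_address:554/h264Preview_01_sub" \
      -vsync vfr keyframe_%04d.jpg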

In the above screenshot of my Reolink doorbell, you can see that:

  • FPS = 10, which means 10 frames per second.
  • I-frame Interval = 4x, which means that every 4th frame in the video stream is an I-frame.

From those 2 parameters I can calculate the number of keyframes per second:

keyframes per second = FPS / I-frame interval = 10 / 4 = 2.5

But my fps for detection (i.e. the number of frames per second that go to the Coral stick for object detection) was already 3:

cameras:
  doorbell_frontdoor:
    enabled: true
    ffmpeg:
      ...
    detect:
      enabled: true
      width: 1280
      height: 720
      fps: 3

So that is already the optimal value. Unfortunately I cannot make it faster this way...

When I set this parameter to 2, the CPU usage is lower, and when I set it to 4, the CPU usage is higher. So it looks to me like the CPU usage only depends on the number of frames per second sent to object detection, and is not related to the keyframe interval. That observation didn't really match my theory, until I found this issue: Frigate extracts not only the keyframes from the RTSP stream, but every frame, because not all camera brands support adjustable keyframe intervals.

So I needed to tell ffmpeg myself to skip the images that are not keyframes (via the -skip_frame nokey parameter):

cameras:
  doorbell_frontdoor:
    enabled: true
    ffmpeg:
      input_args: -skip_frame nokey
      inputs:
        - path: rtsp://username:password@ip_address_doorbell:554/h264Preview_01_sub

And then indeed the ffmpeg CPU usage stays around 5.5% most of the time :champagne:

My time is up for today...

3 Likes

Sounds like you are back on track Bart :wink:

BTW you can get rid of the Reolink logo shown on screen by turning off the watermark here.

1 Like

Thanks for sharing that! Looks interesting. But personally I like to get it running on my Raspberry Pi 4, because that forces me to tweak the performance to get it working. If I start immediately on hardware with lots of resources, I will be much less tempted to do performance tuning, resulting in a lot of wasted CPU over the next years. Of course it won't be possible without the Coral TPU stick; I already tested that in the past.

Yes I know, but I had planned - during my Christmas holidays - to work on the migration of my UI nodes to Dashboard 2, because in the last weeks I got buckets full of shit poured over my head by users who think I have unlimited time to do free consultancy work for them...

My performance measurements - via top - are not good enough anymore at this point, because the CPU usage of ffmpeg is now rather low. I should really install some other tooling to get an accurate history of memory/CPU usage, but there is no time to do all of that...
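
For a quick look without installing extra tooling, docker stats at least shows the current per-container CPU and memory usage (no history though):

$ sudo docker stats --no-stream frigate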

Anyway I removed the resolution from my detection config:
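
The remaining detect section then looks roughly like this (a sketch):

cameras:
  doorbell_frontdoor:
    detect:
      enabled: true
      fps: 3  # no width/height: Frigate uses the substream resolution as-is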

It "looks" like the CPU usage has now dropped to between 3.5% and 4.5%. From this explanation it seems that you need to specify the detect resolution only if your camera doesn't offer a separate low-resolution substream that can be used for detection. But since my Reolink camera's offer such a substream, I don't want any extra resizing of my images because that is only a waste of CPU. Now Frigate only needs to resize my images to the 320x320 resolution required by the Tflite model they run on the Coral TPU stick.

1 Like

From this and this article, I "think" it works like this:

Of course there is extra stuff (as you can see in the video pipeline documentation), but I have no time today to finish this diagram.

Some explanation:

  • If your camera supports a high-resolution main stream for recording and a separate low-resolution substream for object detection (like my Reolink cam), then the best way is to use that substream, because the motion detection will consume much less CPU: far fewer pixels need to be compared.
  • If your cam doesn't offer a substream, then specify a detect resolution (i.e. width and height) in your config yaml file, so the image can be downscaled and the motion detection needs to compare fewer pixels (see the sketch after this list).
  • Frigate will crop regions (of size 320x320 when using the default TFLite model on a Coral TPU stick) from the (optionally downscaled) image, around the areas where motion was detected. It sends each 320x320 crop to the Coral TPU stick for object detection. At the end Frigate draws the bounding boxes around the detected objects in the original image.
  • If you want to detect small objects (e.g. cats, ...) in your images with good accuracy, then it is better to use high-resolution images, because the 320x320 crops will then contain a lot more cat-related pixels. But of course the motion detection will then need to compare lots of pixels, so lots of CPU usage.
  • If Frigate detects motion in an area larger than 320x320 pixels, then it must downscale the image before cropping the 320x320 region from it, to make sure the object detection gets an image containing the entire object. Of course this resizing results in extra CPU usage. It is NOT clear to me whether this happens automatically.
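
For the second bullet, a sketch of what specifying a detect resolution could look like for a cam without a substream (the camera name, URL and resolution are made up for illustration):

cameras:
  cam_without_substream:
    ffmpeg:
      inputs:
        - path: rtsp://username:password@camera_ip_address:554/main_stream
          roles:
            - detect
    detect:
      width: 640   # downscale before motion detection
      height: 360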

Any improvements to my diagram are welcome!!

1 Like

Just a thought, and I'm probably talking rubbish here :wink: -

But is there a way you could use your webhook idea to trigger the motion detection, so the camera provides the initial trigger, which might reduce the work needed in Frigate?

I had been thinking the same (before I started with Frigate), so it is no rubbish :yum:

However:

  • I have been chatting with Reolink support about whether the firmware of my RLC-833A cams will also get the webhook feature. I got a very friendly but political answer, so I have no idea whether they will ever support it.
  • I hope that the object detection of Frigate is more accurate.
  • Frigate offers more features.

But yes, indeed it could perhaps do the job. I could use ONVIF events instead of webhooks, since I already got those working. But webhooks would be a much cleaner solution than some kind of ONVIF pull-point subscription mechanism.

So yes, it could be good enough to go with. I don't know. Now you are driving me nuts with your proposal...

1 Like

Reolink API guide, if it's of interest to anyone?

1 Like

I'm confused. Are people sending live RTSP streams, or encoded PNGs/JPGs at some FPS over MQTT? Or is it more like a snapshot every 10 seconds or something? I feel WebSockets or a UDP/TCP stream is better suited to video streams than MQTT.

Be careful and test this, Bart - this is a known problem with the Google Coral TPU. Essentially what happens is: when the HOST machine boots up, the Coral has a certain USB ID; it then immediately goes through a firmware update process, and once that completes, the USB ID changes to a different vendor.

It has happened a couple of times on my host system (running under CentOS/Rocky Linux), and I have done the usual Google searches to identify and resolve the issue.

So before you go into production - try a complete power-off of your Docker host, then power it back on, and see if it is all still good.

Craig