How to use external NVR (Frigate, BlueIris, ...) with Node-RED

Each time you make a change, take the whole config and feed it into ChatGPT (or Gemini - your choice) and ask it to format it as correct YAML. I have had zero issues since I started doing this about 12 months ago.

Craig

I get so few false positives that I just manually copy the full frame images to my collection of false positives. My false negative rate is pretty high, but as long as the frame rate is decent an extra second of latency is not a big deal - if I miss this frame, I'll get 'em in the next. I'm up to twenty false positives now. My neighbor's black and white cat detects with very high (but bogus) confidence as a person when rolling around in my driveway. It is only the 720P camera that misdetects; the 4K camera doesn't see it as a person. I can't explain it. My basic procedure is: run MobilenetSSD_v2 on the full frame image; if a person is found with > 70% confidence, I do a crop (digital zoom) around the detection and redetect with SSD_v2 at a confidence of 80%. If that detects, the zoomed image is passed to the YOLOv8 detector with a confidence of 75%. Making the YOLOv8 threshold 80% would get rid of about a third of the false positives, but I haven't done that yet. I'm looking at implementing YOLOv11 in the next couple of weeks. Had some "home owner" issues that have robbed me of my free time for the past three weeks or so.

1 Like

I do this by integrating with our home alarm system. It has three modes of operation:

  • armed (fully armed)
  • armedhome (outer shell armed)
  • unarmed

When I change the mode of operation, like arming/disarming the system, a message is sent via MQTT to my home automation system that turns recording on/off. Very simple, very automatic.
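If you let Frigate itself do the recording, a similar toggle can apparently be done via Frigate's MQTT command topics. A minimal sketch of the relevant config, assuming an illustrative broker address (see the Frigate MQTT docs for the exact topic list):

# Frigate config.yml - enable MQTT so Frigate listens for commands
mqtt:
  enabled: true
  host: 192.168.1.10        # illustrative broker address
  user: frigate
  password: secret

# Frigate then accepts per-camera command topics, e.g. publishing
#   ON  to frigate/<camera_name>/recordings/set  -> enable recording
#   OFF to frigate/<camera_name>/recordings/set  -> disable recording
# so the alarm system only needs to publish a single MQTT message.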

It is really good to have, yes. That is why "I did it my way", compiling my own recordings and inserting those images with boxes into the recordings while they happen. Each such image frame is then added five times, which gives a good view of the event when playing & reviewing the recording.

1 Like

iirc I saw a post while trying to 'sort things out' which suggested that the Bounding Boxes are only in the Snapshots and there were no plans to put them in any videos. The reason for this is that the score relates to 'the highest inference score' for that object.

At the moment I have three Cameras, Doorbell, Duo 2 and a Trackmix (2 cameras) with the second 'camera' tracking an object within the first's frame. This all seems to be working well with my current Frigate configuration and the MQTT is fine.

The problem I am having and getting confused about is changing access to the recordings from the SSD to a USB drive on a Pi5. I can see and read/write to the drive OK, but Docker does not want to accept the USB drive when I change the Docker config line from
- ./storage:/media/frigate
to
- /media/frigate/storage:/media/frigate
I have Portainer installed and have done several recreates, and I have also tried to bind the drive to Docker, all to no avail (evidently there is a difference between mounting a drive on the system and binding it into Docker! - possibly!!).

Today I am going to delete the Frigate container and start from scratch with the new Docker config.

(I also found out on my last rebuild that Coral TPU support does not have to be installed as a separate item; it seems to be included with Frigate.)

p.s. I have a config.yml file here if anyone wants it.

EDIT: I first tried a recreate via CLI and it is now saving to the correct destination (USB HDD).
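For anyone hitting the same thing: the shape of the working compose fragment should be roughly as below, assuming the USB drive is already mounted on the host at /media/frigate/storage (e.g. via an /etc/fstab entry) before the container starts:

# docker-compose.yml (fragment)
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    volumes:
      - /media/frigate/storage:/media/frigate   # host path : container path
      - ./config:/config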

Walter,
there must have been some paranormal activity :ghost: :alien: across the long distance between your brain and mine, because I had planned last evening to implement 100% exactly the same 3 scenarios and setup. That seems indeed a very decent and automatic setup to me.

Next level stuff :exploding_head:
You should consider discussing your solution with the Frigate guys :yum:

Perhaps you should explain in a bit more detail what your problem is. Hopefully somebody can join in and assist you.

1 Like

Thank you Bart,

I managed to get it working by doing a CLI recreate instead of using Portainer. In future I will only be using Portainer to view the status of Frigate.

1 Like

I also have a question:

Here in Belgium (like in some other European countries) we are, for privacy reasons, only allowed to watch our own property via our cams, and not e.g. part of the property of our neighbours (or public property). Which is imho a very decent law. But that means that I need some kind of privacy mask, i.e. parts of my camera recordings should be e.g. colored black. Because when somebody complains, the police will pass by to look at the recordings (like they did recently with one of my colleagues at work). So I really need to implement this before I can install extra cameras...

I "think" Frigate does not offer such privacy masks, because it would take to much resources to decode (and encode) all images and paint on top of those. And as a result I have to specify such privacy masks inside the web ui of my IP cams instead, so that the images send to Frigate (via an RTSP stream) already contain the black colored polygons. Is that correct?

No, you can block out areas in Frigate to ignore - but it makes a lot more sense to do it inside the camera software, as this removes the processing overhead from Frigate.

Craig

1 Like

Does that mean you cannot have a dashboard camera in your car?

Yes, indeed you have motion masks and detection masks in Frigate, to make sure it doesn't do motion detection or object detection inside those polygons. Which increases performance.
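From the documentation, such a mask looks roughly like this in the Frigate config (camera name and coordinates are made up):

cameras:
  front_door:            # hypothetical camera name
    motion:
      mask:
        # polygon of x,y coordinate pairs - motion inside it is ignored
        - 0,0,640,0,640,100,0,100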

But I didn't find anywhere in the documentation that it offers privacy masks, which means the pixels inside that polygon are completely replaced by some fixed color. Like for example in the picture below (which I got from this Reddit discussion):

Don't watch what your neighbours are doing :sunglasses:

Ah, good question. That is indeed something I said to a Tesla owner last week: I have to put a sign at each driveway about my video surveillance, and I am not allowed to film in public areas. But you can put your Tesla everywhere, with panoramic video surveillance, and you don't have to inform anybody that they are being recorded. We are all equal for the law, but some of us are a bit more equal :yum:. Anyway, if anybody wants to discuss that further, please open a separate discussion because it is not Frigate related!

Ok my last topic for today.
My current basic setup looks like this:

So Frigate captures - via FFmpeg - two RTSP streams from my camera:

  • A low resolution & low fps stream for object detection
  • A high resolution & high fps stream for recording

That low resolution & low fps stream is displayed - via JSMpeg - in the Frigate web interface inside my browser. But of course that means that the Live View is low quality, because it is based on the detection stream instead of the recording stream.

So I would like to have the high resolution & high fps stream in my Live View, but without heavily loading my camera and my LAN network by opening an extra main stream to my IP cameras.

Fortunately Frigate allows us to activate go2rtc, which is an open-source camera streaming tool. I did a quick reading of the Frigate documentation, and I "think" it works like this:

Which means that go2rtc (which is running inside the Frigate Docker container) captures all the RTSP streams from all your cameras, both the main and sub streams. go2rtc then restreams all those RTSP streams on its port 8554, so both your browser (via WebRTC) and Frigate can read the same RTSP stream via that port.
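Translated into config, I "think" that corresponds to something like this (camera names, credentials and URLs are placeholders):

go2rtc:
  streams:
    front_main:
      - rtsp://user:pass@192.168.1.20:554/main   # high res & high fps
    front_sub:
      - rtsp://user:pass@192.168.1.20:554/sub    # low res & low fps

cameras:
  front:
    ffmpeg:
      inputs:
        # Frigate reads both streams back from go2rtc on port 8554,
        # so the camera itself only has to serve each stream once
        - path: rtsp://127.0.0.1:8554/front_sub
          input_args: preset-rtsp-restream
          roles:
            - detect
        - path: rtsp://127.0.0.1:8554/front_main
          input_args: preset-rtsp-restream
          roles:
            - record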

If anybody can see errors/improvements in my drawing, don't hesitate to let me know! For example, I don't know if WebRTC in the browser will read directly from go2rtc (instead of via Frigate). Or if anybody has some tips about this setup, that is also welcome. So I can try it out in the next days...

The same thing exists here in Sweden as well, maybe it is an EU thing: you have to have privacy masks for fixed installed cameras, but you are allowed to run around with your mobile and record anything without restrictions. As well as using the dashcam (and other built-in cameras in the vehicle).

I don't think regulation and law are up to speed with tech developments. What is really the difference between your fixed installed home security cameras and a parked vehicle with its internal video surveillance system activated?

Always worth considering is the eventual additional CPU load such an add-on might give. As long as there is no conversion I think you are fine (RTSP to RTSP will most likely just be copying the stream). Otherwise, in case format conversion is needed, you would really prefer to use the GPU, and I'm not sure it supports that (maybe it does, since it is actually using ffmpeg behind the scenes). Anyway, worth testing I think.

1 Like

My most recent rebuild of Frigate has been running since late last week. I found the following on a Pi5 4G running the latest Raspberry Pi OS and a Coral TPU, with three Reolink cameras (but 4 streams!).

If I put the *_main stream through go2rtc, Coral was at 70 ms+ for inferencing and ffmpeg CPU was also very high, causing streams not to appear in the Dashboard. I then tried putting the *_sub stream through go2rtc, with recording straight from the camera for role: record and the detection and audio from rtsp://127.0.0.1:554. I am pretty sure that I didn't get any streams when reviewing my 'detections' - just a snapshot - so I now don't use the *_main stream at all.
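In config terms, what I ended up with looks roughly like this (names, credentials and addresses changed), with only the sub stream going through go2rtc:

go2rtc:
  streams:
    cam1_sub:
      - rtsp://user:pass@192.168.1.30:554/sub

cameras:
  cam1:
    ffmpeg:
      inputs:
        - path: rtsp://127.0.0.1:8554/cam1_sub    # restreamed sub stream
          input_args: preset-rtsp-restream
          roles:
            - detect
            - audio
        - path: rtsp://user:pass@192.168.1.30:554/main   # straight from the camera
          roles:
            - record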

Inferencing on Coral for all of the cameras is now ~31 ms and Pi5 CPU is at ~9%. No warnings for high usage.

Just to confirm: I haven't found any privacy mask in Frigate, so I use the privacy mask in the Reolink app/web page. Of course, if you move the camera the privacy mask needs to be redrawn, and the 2nd (telephoto) stream on the Trackmix can't have privacy areas set, as they would move with the image (it is set to Digital Tracking only) - it just picks up the privacy masks set up for the 1st (wide-angle) stream, which is not moving.

1 Like

Had a bit of trouble getting go2rtc running.
So (like @krambriw also mentioned this morning), you should use video/audio codecs in your camera (for both streams) which are compatible with most browsers: H.264 for video and AAC for audio. That way, go2rtc simply needs to restream the RTSP stream from its input to its output, without much computation.

In my case the main streams of my cams worked immediately, but the sub streams did not. In the end it appeared that in my cams the sub streams were MJPEG instead of H.264:


After changing it to H.264 it was immediately solved.
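Side note: if a camera cannot be switched away from MJPEG, go2rtc should also be able to transcode on the fly via its ffmpeg source (at the cost of extra CPU). Per the go2rtc docs, something along these lines (camera details are placeholders):

go2rtc:
  streams:
    cam1_sub:
      # transcode the camera's MJPEG sub stream to H.264 video + AAC audio
      - ffmpeg:rtsp://user:pass@192.168.1.30:554/sub#video=h264#audio=aac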

Based on my troubleshooting, I have added some more details to my drawing:

Some remarks about how I did troubleshooting of my camera streams that didn't work:

  • All camera streams (main & sub) are now captured by go2rtc (which is also available out of the box inside the Frigate Docker container).
  • Always first try the rtsp url from your camera (e.g. using VLC media player) to make sure the stream (and username/password) are correctly setup in your camera:
  • When you map port 1984 in the Docker compose file, you can access the web interface of go2rtc (see the compose fragment after this list). That way you can try whether the stream can be played there (which was not the case for me):

    So you can find a lot of useful info here, to troubleshoot all the streams you have specified for go2rtc in your frigate config yaml file.
  • You can also use e.g. VLC media player to watch the restreamed RTSP stream at the output of go2rtc, i.e. at port 8554 where also Frigate is listening to use those streams. You need to use the camera name (as you have specified it in the go2rtc section of your Frigate config yaml file):
    rtsp://<ip address of the machine running Frigate>:8554/<name of the cam>
  • Have a look at the logs. For example in your Docker container:
    sudo docker logs -f frigate
    Or in the Frigate web interface in the "System logs" menu:
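For completeness, the port mappings mentioned above as a docker-compose.yml fragment (service layout is illustrative):

services:
  frigate:
    ports:
      - "1984:1984"   # go2rtc web interface, for troubleshooting streams
      - "8554:8554"   # go2rtc RTSP restream, e.g. to open in VLC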

consider is the eventual additional CPU load such an addon might give

Yes, you are correct. I now have about 20% CPU usage instead of 14% before, but during troubleshooting I had to change my substreams from MJPEG to H.264 with a different resolution. So it becomes harder to tell whether go2rtc is consuming all that extra CPU...

Moreover - see my initial experiments in the early days above - I had used input_args: -skip_frame nokey to improve performance. However, with go2rtc I don't know yet where to put that in my yaml file, so I have removed it temporarily. Which 'might' also introduce some extra CPU usage.
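For context, this is roughly where that argument lived in a plain (non-go2rtc) config (camera details are placeholders) - note that specifying input_args overrides Frigate's defaults for that input:

cameras:
  cam1:
    ffmpeg:
      inputs:
        - path: rtsp://user:pass@192.168.1.30:554/sub
          input_args: -skip_frame nokey   # decode keyframes only, to save CPU
          roles:
            - detect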

I did not have time yet to check all of that. What could cause this, given that the object detection is always executed by Frigate itself (independent of whether your streams pass via go2rtc or not)?

Could you please explain that a bit more in detail? My brain is still melted from the daily job...

2 Likes

I had expected that I could now select in my user interface the type of streaming I want to use in the Live viewer. See the screenshot in this Github issue. But I can't find that dropdown anywhere in the UI. Does anybody have any idea where I could find it?

Just what I thought. When I look in the "Network" tab of my browser's developer tools, it seems like JSMpeg is still being used for my live viewing:

Which is not what I want.
I had hoped that I could select somewhere to use WebRTC, but I can't find it.

EDIT: Seems I need to explicitly specify a stream for live view for each camera.
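If I read the documentation correctly, it is something like this per camera, where the stream name refers to a stream defined in the go2rtc section:

cameras:
  front:
    live:
      stream_name: front_main   # go2rtc stream used for the live view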

Thank you for the go2rtc viewport! That has a lot of information I will have to review!

I checked all streams in VLC first - well, I actually did it second on the first camera. After no stream came up there, I now always do it first as a check.

But I can't find that dropdown anywhere in the ui. Anybody any idea where I could find it?

I just had a quick look and haven't seen the drop down to select these stream types, but will keep a look out.

When you go to the Frigate Playback screen you get a snapshot of any detection. When I clicked on the snapshot, there was no video available to play. This could have been me, but as soon as I changed the Frigate config.yml to only use the sub streams, the problem went away.

Have some chores to do today, so unable to immediately check the very good information given with my simple understanding!

Hey @BartButenaers I am using frigate v0.15.x

I use the node-red worldmap to map all my camera locations and then link it to the frigate server.

I'm in the process of migrating from motioneye.

Things I figured out that I probably should share on this thread.

Docker - I use a basic docker command to run Frigate, not Docker Compose.
I use NFS shares, not local storage.
Here is my run command. Note the pinned image tag ghcr.io/blakeblackshear/frigate:0.15.0-rc2 - I never use latest, as it may introduce a breaking change.

docker run -d \
--name frigate_0_15_0_rc2 \
--restart=unless-stopped \
--shm-size=4g \
--mount type=tmpfs,target=/tmp/cache,tmpfs-size=4294967296 \
--memory=128g \
--memory-swap=128g \
--cpus=70 \
--mount type=volume,source=frigate_10_X_X_1_NFSdockerVolume_media,target=/media/frigate \
--mount type=volume,source=frigate_10_X_X_1_NFSdockerVolume_config,target=/config \
--device /dev/bus/usb:/dev/bus/usb \
-v /etc/localtime:/etc/localtime:ro \
-e FRIGATE_RTSP_PASSWORD='yourpassyawanthere' \
-p 8971:8971 \
-p 8554:8554 \
-p 8555:8555/tcp \
-p 8555:8555/udp \
ghcr.io/blakeblackshear/frigate:0.15.0-rc2

Streaming MJPEG video - I have live view screens for viewing the truck gate and other areas. You can view this stream via the Frigate API.
example:

http://10.X.X.1:8971/api/YourCameraName?height=1440

see frigate API mjpeg feed

Latest Frame - in the camera map image shown above I use the frigate API to grab the most recent image from a camera to preview it.

http://10.X.X.1/frigate3/api/YourCameraName/latest.webp?height=275

see frigate API latest frame

Multiple Frigate servers on one Node-RED worldmap using Nginx - as you may notice, the above URL does not use a port number; I use nginx.

One thing to pay attention to in this proxy example is: proxy_set_header X-Ingress-Path "/frigate3"; This is how you handle multiple frigate backends like frigate1, frigate2, frigate3. Name them however you like.

# frigate3
location = /frigate3 {
    return 302 /frigate3/;
}

location /frigate3/ {
    proxy_pass http://10.X.X.1:8971/;
    proxy_set_header X-Ingress-Path "/frigate3";

    proxy_http_version 1.1;
    proxy_cache_bypass $http_upgrade;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    access_log off;
}

Just installed 4 Coral USB devices on the frigate2 server I'm migrating everything to now. I'll soon see how many cameras it can hold.

Fun Screenshot of my frigate1 server. Not many people are going to see these numbers :slight_smile:

Feel free to ask questions.

2 Likes

@meeki007
Really nice that you share your ideas!!
That seems like rather professional stuff already...

Really cool integration! That is a very interesting thought, which opens many doors for fancy stuff. I love it! Do you simply show the MJPEG stream in a video element?

It might be interesting for people to know that you can easily show e.g. the bounding boxes (of detected objects) on top of the MJPEG stream images by adding a URL parameter:
http://10.X.X.1:8971/api/YourCameraName?height=1440&bbox=1

1 Like