How to use external NVR (Frigate, BlueIris, ...) with Node-RED

Yes, don't overthink it Bart - you are burning more energy than the RPi !!

Start off simply - you have found the substream and perform your detection on it. Mask out areas of spurious movement and block them out, so the FFmpeg process is not looking for movement from trees blowing in the wind etc. (you can do this through the camera's web interface) or through masking in Frigate.

Hook into the MQTT stream to get notifications of image types and detection confidence, and then use Telegram (as the easiest way) to send info out through Node-RED.
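If you want to prototype that notification filter outside Node-RED first, a minimal Python sketch could look like the following (assuming paho-mqtt >= 2.0, Frigate's default `frigate/events` topic and its `type` / `after.label` / `after.top_score` payload fields; the broker address, labels and threshold are just placeholders). In Node-RED the same filtering would simply sit in a Function node behind an mqtt-in node, before the Telegram sender node:

```python
# Minimal sketch: subscribe to Frigate's MQTT events and forward "interesting"
# detections. Assumes paho-mqtt >= 2.0 and Frigate's documented frigate/events
# payload; broker address, labels and threshold are placeholders.
import json
import paho.mqtt.client as mqtt

BROKER = "192.168.1.10"        # placeholder: your MQTT broker
MIN_SCORE = 0.7                # ignore low-confidence detections
WANTED = {"person", "car"}     # labels you care about

def on_connect(client, userdata, flags, reason_code, properties):
    client.subscribe("frigate/events")

def on_message(client, userdata, msg):
    event = json.loads(msg.payload)
    after = event.get("after", {})
    label = after.get("label")
    score = after.get("top_score") or 0
    if event.get("type") == "new" and label in WANTED and score >= MIN_SCORE:
        # here you would call the Telegram Bot API (or hand the event to Node-RED)
        print(f"{after.get('camera')}: {label} detected ({score:.2f})")

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER)
client.loop_forever()
```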

Craig

Thanks for sharing it! But I also have some Amcrest cams, so it is easier to use ONVIF to communicate with all cams in the same way (to keep my flows simple). Although I assume the API offers more functionality...

Is that based on some info from this topic? I "think" it is more about sending events from Frigate to Node-RED, for example when motion is detected. In that case the MQTT message can contain a snapshot image.

And could you also share how you solved it?
Indeed, good tip to test the Coral after a power off/on.

Yes, that is indeed always the pitfall. But in this case I had already experienced a lot of issues in the past while developing most of this stuff myself inside Node-RED. So far all my analysis above has been very useful for me; otherwise it would never run on the hardware I currently have.

2 Likes

Exciting thread to read, like it a lot!

I'm not sure what Frigate can do, but my approach is to only analyze images from my cameras if motion has been detected. I would try to do the same if I used Frigate, to avoid load on both the CPU and the object detection "thing" (the "thing" being the TPU or another object detector), as well as to reduce any recording to hold only "interesting" frames that can be studied afterwards.

Here is a description of my setup, which has been working very well for years, lately updated with an RPi5 for the TFLite object detection part.

My cameras are pretty old now, just USB types, but with a decent resolution and night vision capability; for sure a later generation is better. All cameras provide 1024x768 images that are continuously analyzed by the Motion software (running as a service), just looking for motion in unmasked areas. If and when motion is detected, images are sent further to the next step, the object detection analysis.

This is the "flow" so to say:

Cameras -> Motion (motion detection, masking, etc) -> images with motion via MQTT -> TFlite etc in RPi5 (object detection, drawing bounding boxes, recording etc) -> Node-RED if defined objects are detected (Telegram, Dashboard etc etc)

The object detection runs TFLite using the CPU of the RPi5. Since it only analyzes images that have been considered "of interest", the total CPU load is very limited (I do not monitor the CPU load, only the RPi5 temperature, and it is really stable, around 53 degrees C). Furthermore, a daily video recording is built from all images that are received, which makes it simple to check afterwards what happened before, during and after the event.

(Just for info, TFLite is used via its Python library. It also supports the TPU, but I do not have one.)
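If anyone wants a feel for what that detection step looks like in code, here is a minimal sketch (assuming the tflite_runtime package and an SSD-style COCO detection model with the usual boxes/classes/scores outputs; the model file name, image name and threshold are placeholders, not my actual setup):

```python
# Minimal sketch of the object detection step, assuming the tflite_runtime
# package and an SSD-style detection model (outputs: boxes, classes, scores,
# count - the order can differ per model, check get_output_details()).
# Model file, image file and threshold are placeholders.
import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="detect.tflite")   # placeholder model file
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
outs = interpreter.get_output_details()
_, in_h, in_w, _ = inp["shape"]

def detect(jpeg_path, min_score=0.5):
    img = Image.open(jpeg_path).convert("RGB").resize((int(in_w), int(in_h)))
    tensor = np.expand_dims(np.asarray(img, dtype=np.uint8), axis=0)
    interpreter.set_tensor(inp["index"], tensor)
    interpreter.invoke()
    boxes   = interpreter.get_tensor(outs[0]["index"])[0]   # normalized [ymin, xmin, ymax, xmax]
    classes = interpreter.get_tensor(outs[1]["index"])[0]
    scores  = interpreter.get_tensor(outs[2]["index"])[0]
    return [(int(c), float(s), b.tolist())
            for b, c, s in zip(boxes, classes, scores) if s >= min_score]

print(detect("frame_with_motion.jpg"))   # placeholder image pushed by Motion
```

In the real flow, something like this would be called for every image that Motion pushes over MQTT.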

Additionally: I'm not so convinced about that cropping of 320x320 parts. I let my analyzer run on the full image and it works really well at detecting person objects. If you crop, isn't there a risk you will lose some objects in a "wide screen" image? Or will there be multiple crops made from a single image? When I tried to resize the whole image to 320x320, even while keeping the aspect ratio, the detection confidence went down.
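For clarity, this is what I mean by resizing while keeping the aspect ratio: a "letterbox" resize that scales the frame to fit and pads the remainder, rather than stretching it. A small sketch with Pillow (sizes and file names are just examples):

```python
# Sketch of a "letterbox" resize: scale the frame to fit 320x320 while keeping
# the aspect ratio, then pad the rest with black instead of stretching.
from PIL import Image

def letterbox(img: Image.Image, size: int = 320) -> Image.Image:
    scale = size / max(img.width, img.height)
    resized = img.resize((round(img.width * scale), round(img.height * scale)))
    canvas = Image.new("RGB", (size, size))            # black padding
    canvas.paste(resized, ((size - resized.width) // 2,
                           (size - resized.height) // 2))
    return canvas

frame = Image.open("camera_1024x768.jpg")              # placeholder file name
letterbox(frame).save("letterboxed_320.jpg")
```

One possible reason the confidence still drops is that a 1024x768 frame squeezed into 320x320 leaves a person only a few dozen pixels tall, which is exactly the problem the cropped regions try to avoid.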

Hi @krambriw,
Glad to see you arriving here. Think your experience will be needed from time to time...

As you can see in my diagram above, Frigate does motion detection (i.e. comparing pixels). It will only do object detection for parts where motion has been detected. You can tweak the motion detection parameters live in their web UI, which is quite useful. And they have both motion masks and object masks, so for both the motion detection and the object detection you can mask areas which need to be ignored.

Yes, but that part I don't understand yet. They have documented it, but I still don't fully understand how it works in detail.

For the Coral TPU stick, I use their edge-tpu detector (see their Docker file). That seems to use demo models trained on the COCO dataset, whose images look nothing like a security camera perspective, which is why there will be more false positives. Therefore they are creating a new model specifically for security cameras, but unfortunately that is only available in the paid Frigate+ plan.

I have to be honest that I don't know much yet about all that stuff like OpenVINO and so on... But from what I understand: if you use the default model (which is trained on the COCO image set), then you should hope that burglars are dressed in a banana-shaped costume (:banana:), in order to detect them without too many false positives :wink:

1 Like

Hopefully the "person" :person_standing: will arrive on a "bicycle":bike:, carrying an "umbrella" :open_umbrella: and a "kite" :kite: and only steal your "giraffe" :giraffe:

:rofl:

1 Like

Happy New Year to the video surveillance lovers :champagne: :clinking_glasses:

Experimenting with Frigate parameters while drinking a coffee, with the wife and kids still sleeping. Life can (sometimes) be so nice :yum:

Just sharing an example of the "Debug" view here, so readers know what it looks like:

  • When motion is detected, Frigate tries to group nearby areas of motion together. That way it tries to find a rectangle in the image that is useful to inspect afterwards for object detection. These are the RED motion boxes.
  • Next, Frigate calculates a GREEN region around those motion boxes to run object detection on. The object detection models used are trained on square images, so these regions are always square. Frigate adds a margin around the motion area to get a cropped view of the moving object (see the simplified sketch below).

The documentation says the motion region "doesn't cut anything off", but I still don't understand how that works for larger objects.
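To illustrate the idea only (this is my own simplified sketch of "expand a motion box to a padded square, clamped to the frame", not Frigate's actual code):

```python
# Simplified sketch of turning a motion box into a square region with a margin,
# clamped to the frame. Illustration of the idea only; Frigate's real region
# logic is more involved.
def square_region(box, frame_w, frame_h, margin=0.4):
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    side = max(x2 - x1, y2 - y1) * (1 + margin)        # square side with margin
    side = min(side, frame_w, frame_h)                  # cannot exceed the frame
    # keep the square inside the frame by shifting its centre if needed
    cx = min(max(cx, side / 2), frame_w - side / 2)
    cy = min(max(cy, side / 2), frame_h - side / 2)
    return (int(cx - side / 2), int(cy - side / 2),
            int(cx + side / 2), int(cy + side / 2))

print(square_region((850, 300, 950, 620), frame_w=1920, frame_h=1080))
```

This also shows why a motion box near the frame edge cannot be centered in its region (see the note about the timestamp further down).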

When motion boxes overlap, they will be merged into a single larger motion box. But when motion boxes are too far apart, they will be investigated separately. In that case the resulting motion regions may still overlap, but they will of course not be merged; each one is sent separately to the object detection (because they refer to separate objects):

Note at the bottom that the timestamp (added by my Reolink doorbell itself) changes constantly, so it is best to add a motion mask on top of that timestamp. It is a waste of CPU to do motion and object detection on that part of the image, so those pixels should be ignored during motion detection.
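Just to make explicit what such a mask saves: motion detection boils down to differencing consecutive frames, and masked pixels are simply excluded from that difference. A rough sketch with OpenCV (my own illustration, not Frigate's implementation; file names and thresholds are placeholders):

```python
# Rough sketch of masked motion detection: difference two grayscale frames,
# exclude masked pixels (e.g. a timestamp overlay), and check how much changed.
# Illustration only - Frigate's motion detector is more sophisticated.
import cv2
import numpy as np

prev = cv2.imread("prev.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder frames
curr = cv2.imread("curr.jpg", cv2.IMREAD_GRAYSCALE)

mask = np.full(curr.shape, 255, dtype=np.uint8)
mask[-40:, :] = 0                              # example: ignore a timestamp strip at the bottom

diff = cv2.absdiff(curr, prev)
diff = cv2.bitwise_and(diff, diff, mask=mask)  # masked pixels never count as motion
_, changed = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
print("motion" if cv2.countNonZero(changed) > 500 else "no motion")   # arbitrary pixel threshold
```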

Note also that in the first image the motion box is nicely in the center of the motion region, because that is the best Frigate can do. But of course in the second image, the motion box of the timestamp ends up near the edge of the motion region.

1 Like

Fortunately you don't need to add the polygon coordinates manually to the YAML file; the web interface already contains tools to maintain this part of the YAML file in a user-friendly way:

motion_mask

Looks nice!

I think the TFLite with COCO SSD model works pretty well, at least when detecting people. Earlier I used the more resource-hungry YOLO for years, also working very well, but I have since changed. TFLite is also faster.

As I mentioned, my setup is a bit "customized" to my needs and my own ideas. Recording videos from events is something I have done for a long time, but when I really needed to investigate afterwards, I said to myself: why do I record frames of no interest?

So I started looking into a possible solution where I could compile a video holding only frames of interest, meaning frames just before, during and after an event, mixing frames with movement from several cameras "as they are happening". The result is quite interesting; from an investigation perspective you have everything condensed there, several views of the incident from various angles. At first the impression might be that it is a bit messy, but after a closer look, starting and stopping the video and dragging the slider, it gives a lot of information about the intruder.
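The mechanics behind it are simple; roughly like the following sketch with OpenCV (file names, timestamps and sizes invented for illustration, my real script does more around it):

```python
# Sketch of the "only frames of interest" video: take detection frames from all
# cameras, sort them by capture time, repeat each one a few times so it is
# viewable, and write one combined MP4. File names and sizes are placeholders.
import cv2

events = [                          # (timestamp, jpeg_path) collected per camera
    (1714380001.2, "cam1/0001.jpg"),
    (1714380001.5, "cam2/0001.jpg"),
    (1714380002.1, "cam1/0002.jpg"),
]

REPEAT = 5                          # show every detection frame 5 times
size = (1024, 768)                  # all frames written at the same resolution
out = cv2.VideoWriter("incident.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), 10, size)

for _, path in sorted(events):      # chronological order across all cameras
    frame = cv2.resize(cv2.imread(path), size)
    for _ in range(REPEAT):
        out.write(frame)
out.release()
```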

The example below is real. It is from our driveway. Someone enters the monitored area where our cars are parked. Automatic lighting is turned on. Nothing was stolen or damaged, but you can clearly see that the intruder is looking for something, finally gives up and leaves (on an electric scooter, seen in the last frames).

(Each detection frame is added 5 times to the video to make it more viewable; the video stops shortly and then continues.)

(Can't upload a video so I shared it from my Google drive)

1 Like

Bart,

I'm still playing with my new Reolink doorbell camera.

I set up your ONVIF node and also the webhook "hack".
As expected I get a message from both nodes when a person is detected by the camera.

So I think either option would suffice for triggering further actions in NR.
I know you had a specific need to trigger your doorbell outside of NR with the webhook, but for other uses ONVIF might be enough.

I also set the camera to FTP an image to my NR server on detection.

Regarding object detection: provided the camera's own person detection works well, I don't think I need the COCO model to check if there is a giraffe at the door :wink:

So just because it should be possible, and might be fun, I plan to use node-red-contrib-facial-recognition and see if I can get Alexa to tell me WHO is at the door :thinking:

1 Like

Just as an aside, we have the Dahua TIOC cameras - the quality is not the best - but the thing they do have is active deterrence - which I think you could implement with Frigate and NR if you wanted to.

The way it works is that the camera has red and blue flashing lights and an inbuilt speaker. When motion of a given type (say person) is detected (and this is by the smarts in the camera, not Frigate), it will flash the blue and red lights and also play an optional voice message ("You are under surveillance" is what we have at our front door and front path).

This has proven to be a very effective deterrent - so much so that I have not bothered using the Frigate MQTT output to supplement it yet.

We will shortly be moving house, and when I redo my surveillance system I will get better cameras, but I will also come up with a solution that can be triggered by Frigate and MQTT to do the same thing.

I am not sure if people are aware, but the most important thing in a camera is the size of the sensor - not the number of megapixels.

Craig

1 Like

Yes, you are absolutely correct. Only, for simplicity it would have been nice to be able to replace ONVIF with webhooks.

However Frigate contains features that are very useful, and for which we unfortunately don't have enough contributors in our community:

  • Tracking objects: as soon as an object enters a frame, Frigate starts tracking its movements. So across multiple images, Frigate knows that it is the same "active object" (which gets an ID). Once it no longer moves, it becomes a stationary object. This provides a lot of useful information (see the sketch after this list)...
  • When different events overlap in time, they are grouped into a review item, which is a time period during which any number of events/tracked objects were active. I have not read about this in detail yet.
  • The separate events or the combined events (i.e. review items) can be viewed in the web UI. I assume that is similar to what @krambriw does with his slider, which I find very useful, as you can see in his video. But at the moment I don't expect UI widgets popping up to accomplish that in Dashboard 2.0...
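As a rough idea of how that tracking shows up over MQTT (assuming Frigate's frigate/events payload with type "new"/"update"/"end" and an after.id per tracked object; only the message handler is sketched, the client setup would be as in the earlier MQTT sketch in this thread), something like this lets you act only once per tracked object instead of once per frame:

```python
# Rough sketch of acting only once per tracked object, assuming Frigate's
# frigate/events payload (type: "new"/"update"/"end", after.id = tracked
# object id). Only the message handler is shown.
import json

notified_ids = set()                      # tracked objects we already acted on

def on_frigate_event(payload: bytes):
    event = json.loads(payload)
    obj = event["after"]
    if event["type"] == "new" and obj["id"] not in notified_ids:
        notified_ids.add(obj["id"])       # one notification per tracked object
        print(f"new {obj['label']} ({obj['id']}) on {obj['camera']}")
    elif event["type"] == "end":
        notified_ids.discard(obj["id"])   # object left the scene
```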

At the moment I really like the powerful features of Frigate. But I find it hard to set up, and if the "magic" fails it is hard to troubleshoot. So I am still a fan of doing this stuff inside Node-RED, because then you can e.g. put a Debug or Image-Output node between the motion detection and the object detection, so you know very quickly what is going wrong. Moreover, you would have MUCH more flexibility by adding extra nodes in between. But given the low interest in contributing Node-RED nodes related to this kind of stuff in recent years, I don't see it happening anymore...

On the other hand, why would it be wrong to add a "rucksack/backpack" to Node-RED with features & functions that are hard to accomplish inside it? When analyzing complex video, Node-RED might not really be the most optimal platform, maybe also for some of the visualization parts. Instead, let those things be handled outside and let the results be communicated back. Node-RED can then "focus" on event handling, automation and other "smart logic" we think we need.

4 Likes

Wanted to chime in. I'm building an open source Traffic Monitor (https://www.trafficmonitor.ai/ and repo) using Frigate NVR (in Docker) and Node-RED (as a system service, but I have also run it in Docker) on a Raspberry Pi (RPi 5). It uses Frigate's object detection with a Coral TPU as a co-processor, and Node-RED takes in the MQTT messages and attaches them to other sensor data to complete the events. Help me build more capabilities, or run with what I have to start...

Frigate's software is TIGHT. I mean, it's strong for motion detection, object detection, hardware support, model support, APIs, and, and, and... Using it as a base with Node-RED is the way to go, even if you start from scratch.

3 Likes

May be of interest in this thread -

Benchmarking TensorFlow and TensorFlow Lite on Raspberry Pi 5 - Hackster.io.

2 Likes

Very interesting benchmark! I was completely unaware of the capabilities of the Raspberry Pi 5 in this area. And I was also not aware of the fact that Google was neglecting their Coral TPU sticks. Thanks for sharing that link!!

@glossyio,
Thanks for confirming this. I see that you are using an RPi 5 in combination with a Coral TPU stick. So the CPU isn't sufficient for your case (as in the last benchmark link)?

@BartButenaers
I had never seen that benchmark, but I have now put it "on the backlog". It would be nice to keep costs down by using only the CPU. Although I do like offloading the CPU work to the TPU, since it uses very little power and works well for object detection across multiple cameras (the 4 TOPS TPU can handle 100+ FPS).

Actually, I did a quick test switching from the edgetpu (USB) to the CPU in Frigate with the TensorFlow Lite model: RPi5 CPU inference speed keeps up (~25 ms vs. 8 ms with the TPU), but RPi5 CPU utilization spikes to 70-90% across all CPUs while it's detecting (vs. ~20% with the TPU, which is mostly due to motion detection, not object detection). The RPi5 would likely lag or lock up if there was a lot of motion (windy day or lots of objects) while object detection is inferencing. It's a quiet day on the street, so hard to say right now.

So, CPU object detection works on RPi5 in a pinch, but the TPU really does offset a potentially crushing CPU load, even if inference times keep up either way.

Note, this is with 1 camera feed at 2304x1296 and 15 FPS. Tuning that down could leave more overhead; these are just a few minutes of observational numbers.

Edit: attached a couple of screenshots of the htop command and Frigate's system metrics.

2 Likes

Geez - you are a hard man to please - I actually have been using this for a couple of years now and have found it easier and easier to set up - once you have your head around Docker etc.

Are you still running into problems, or are you on top of it now?

I find more and more with my home automation stuff that I want to do as much as I can with high-level apps/services that are designed for a purpose-built role (as long as I am not giving up too many features) and then use Node-RED to glue it all together.

It's like the old days when the Arduino and then the ESP chips first came out and you did it all in C code - much happier to hand off all the low-level stuff to Tasmota now and just concentrate on the NR piece of the picture.

Craig

1 Like

I guess what would really help is GPU acceleration for ARM chips?

However, for a small setup it's encouraging to see the Pi5 CPU performance.

With some tweaking a Pi5 could be adequate for my needs.

Very very sad to hear that this is your opinion about me.

I really thought all my feedback about Frigate above was constructive (and others could learn from my noob mistakes), but that doesn't seem to be the case unfortunately...

Well, I have a big free-time problem due to personal issues, so I had hoped that tools like Frigate would get me up and running fast. But it took quite a lot of my available time (percentage-wise, not hours!!!) to get the CPU usage low. Then at first I had too many objects detected, but after some tweaking it doesn't detect me anymore in most cases. I have no idea why, so that is why I wrote above that I find Frigate hard to troubleshoot when its magic fails (which is probably completely due to some mistake I made...), even with their magnificent Debug view.

But my holidays are over, so I will need to put this aside until I find some time again. Others can keep posting here if they want, but for me it is temporarily game over...

1 Like

Throwing in my experience running TFLite on the RPi5: it is OK if you do not push a lot of images to it too fast. In my case, a detection round on a single image takes on average some 0.07 seconds (which indicates an FPS of about 14). But if I feed the detector many images, it drops down to 0.2 seconds (FPS down to 5).

So definitely, in my opinion, a TPU is superior. GPU support on the RPi5 would for sure be nice, but I guess that is maybe not a top priority for Google.
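One way to avoid that backlog, for what it's worth (my own pattern, nothing TFLite-specific): keep at most one pending frame between the capture side and the detector, and let a new frame replace a stale one, so the detector always works on the most recent image instead of an ever-growing queue. A small sketch:

```python
# Sketch of a "latest frame wins" hand-off between capture and detection, so a
# slow detector drops stale frames instead of building up a backlog.
import queue
import threading
import time

frames = queue.Queue(maxsize=1)        # at most one pending frame

def producer():
    n = 0
    while True:
        n += 1
        try:
            frames.put_nowait(f"frame {n}")
        except queue.Full:
            try:
                frames.get_nowait()    # drop the stale frame if it is still there
            except queue.Empty:
                pass
            frames.put_nowait(f"frame {n}")
        time.sleep(0.03)               # ~30 FPS capture

def detector():
    while True:
        frame = frames.get()           # always close to the most recent frame
        time.sleep(0.2)                # simulate a 0.2 s detection round
        print("detected on", frame)

threading.Thread(target=producer, daemon=True).start()
detector()
```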