How to use an external NVR (Frigate, BlueIris, ...) with Node-RED

Hi folks,

I have been spending a huge amount of free time over the years, in a desperate attempt to introduce video surveillance in Node-RED. But unfortunately I have lost the battle. Tfjs (TensorFlow JavaScript) is not maintained anymore by Google, and our good friend and video guru Kevin isn't active on GitHub anymore. So I am going to throw in the towel :frowning_face:. Which in reality means none of my Node-RED stuff is making progress anymore...

Seems that some people (like @dceejay, @craigcurtin, @ozett, ...) are using Frigate now. I did some quick reading about it in the last hour, but have no clue how to get started. For example, it seems currently not possible to show live streams (i.e. only after motion has been detected), but that 'should' be solved in version 0.16 (which will not be for next week, because the beta for 0.15 is currently ongoing).

And also I have no clue at all how to show the streams in our new dashboard D2, so I have asked about it on their GitHub repo.

But I have so many other questions popping up in my head:

  • Do you all use Frigate via Docker?
  • Do you set up everything in YAML files, or is there another (more Node-RED-like) way perhaps? Because the whole YAML stuff is nice for Home Assistant users, but I am not very attracted by that...
  • The communication is via MQTT I assume: what kind of info do you send to Frigate, or do you only receive info from Frigate?
  • And so on...

Thanks!!
Bart

Hey Bart,

Yes, I moved to Frigate last year.

Yes, I run it in a Docker container (by far the easiest and lowest-maintenance way) and have all images stored on an NFS share from my main storage system.

It is fairly quick to set up and, more importantly, very easy to play around with for testing.

Yes, the basic config file for Frigate is in YAML format, which I too hate - but once you get to using it with a decent editor (I use Notepad++) it is not too bad. They are really good on the GitHub issues page with answering questions.
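To give an idea of the scale, a basic setup really is only a handful of lines. A hedged sketch (camera name, credentials and addresses are placeholders - check the reference config for your Frigate version):

```yaml
# Minimal Frigate config.yml sketch - all names/addresses are placeholders
mqtt:
  host: 192.168.1.10            # your MQTT broker

cameras:
  front_door:                   # any camera name you like
    ffmpeg:
      inputs:
        - path: rtsp://user:pass@192.168.1.20:554/stream
          roles:
            - detect
            - record
```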

Yes, it spits out a continuous (almost overwhelming) stream of MQTT information, which is brilliant if you are prepared to drink from a firehose!!
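To make the firehose concrete on the Node-RED side: Frigate publishes detection events as JSON on the frigate/events topic, so one mqtt-in node plus a small function node is enough to pick out what you care about. A sketch (the field names follow the Frigate docs, but they may shift between versions, so verify against yours):

```javascript
// Function node after an mqtt-in node subscribed to "frigate/events"
// (mqtt-in output set to "a parsed JSON object").
// The fields used here (type, after.label, after.camera) follow the
// Frigate docs, but check them against your version.
const event = msg.payload;

if (event.type === "new" && event.after.label === "person") {
    msg.payload = `Person detected on camera "${event.after.camera}"`;
    return msg;   // pass only what we care about downstream
}
return null;      // drop the rest of the firehose
```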

I run mine on a Linux host that runs all of my Docker images and has a Coral USB connected for inferencing.

On this machine I have 2 x 12TB WD Purple drives in a mirror config - with 6 cameras I set it up to store until the disk is full and then overwrite when space is needed - on top of that I then set the retention for detected events (such as people etc.) to 3 months. It will use this as its preference and will delete realtime imagery to retain my stored amount.

With this config I have 3 months of retained events per camera (I have defined these as 15s before and 10s after) as well as continuous 24x7 footage for about 28 days.
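For reference, that retention scheme maps onto a few lines in the same config file. Again only a sketch - the exact option names have moved around between Frigate releases, so check the reference config:

```yaml
# Retention sketch - option names vary per Frigate release
record:
  enabled: true
  retain:
    days: 28          # continuous footage, overwritten when space is needed
    mode: all
  events:
    pre_capture: 15   # seconds kept before a detection
    post_capture: 10  # seconds kept after a detection
    retain:
      default: 90     # ~3 months for detected events
```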

I can click on any of the cameras and it will show me the realtime vision - I also use the Go2RTC component, which essentially creates an instance of the stream that anything can tap into (it is essentially multicasting the stream), so I could, if I wanted, have a separate realtime dashboard in NR by tapping into this and displaying it with the appropriate viewer.
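Go2RTC even serves its own little player page per stream, so if you did want that separate dashboard, one way to tap in from Dashboard 2 would simply be an iframe in a ui-template. A sketch (host, port and stream name are placeholders for your own setup; 1984 is go2rtc's default web port):

```html
<!-- Dashboard 2 ui-template sketch: embed go2rtc's built-in player.
     Host, port and stream name are placeholders for your own setup. -->
<template>
  <iframe src="http://nvr.local:1984/stream.html?src=front_door"
          style="width:100%; height:480px; border:0;"></iframe>
</template>
```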

Fire away if you have any more questions.

Craig


Hi Bart,

I'm running BlueIris on a dedicated Windows laptop, which is doing the heavy lifting along with AI analysis of the video feeds (CodeProject.AI). BlueIris uses my NAS to store the data long term.

All alerts, along with the video file and alert picture (JPG), are stored by BlueIris, and an MQTT message is sent to NR (below is my alert history for 2 cameras in the front yard).
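The payload layout is whatever you configure in BlueIris per camera/alert, so this is only a hedged illustration, assuming a JSON payload with made-up field names:

```javascript
// Function node after an mqtt-in node on the topic configured in BlueIris.
// The payload layout is hypothetical - BlueIris sends whatever template
// you configure, e.g. {"camera":"front","memo":"person","clip":"//nas/alerts/x.mp4"}
const alert = JSON.parse(msg.payload);

msg.topic = alert.camera;
msg.payload = {
    text: alert.memo,   // what was detected
    clip: alert.clip    // link to the stored video on the NAS
};
return msg;
```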

I have no comparison between Frigate and BlueIris, but I guess they have similar capabilities.

Hi guys,
Thanks for the extensive responses. Really appreciated! That makes it much easier for me to get an idea of your setups.

Drinking water from a firehose. I should remember that one. It nicely summarizes my Node-RED "hobby" at the moment :thinking:

A few questions:

  • My idea was that Frigate was some kind of compact system, tuned for performance. But for some reason I now get the impression it is some kind of CPU-eating monster. Or somewhere in between? I assume I can't run it on a Raspberry Pi.
  • So you suggest using their app to show the streams. While that is again perpendicular to what I hoped to achieve with Node-RED, I assume there is not much other way. I had hoped there would have been some people contributing stuff to Node-RED somehow related to this kind of thing, but that wasn't the case by far, unfortunately...
  • Do you use the Go2RTC inside the container, or did you install a separate one?
  • In my above discussion on their repo I already got an answer: they tell me that I need to get the RTSP stream from Go2RTC. Now I somewhat understand their feedback, because you say it is doing multicast. But I had thought that the container would also publish some kind of HLS stream or something similar.

I have updated the subject of this discussion. I was already highly focused on Frigate, but let's talk about NVRs in general.

I have not had a look yet at BlueIris, because I still need to convince my brain that all of the video processing will be outside of Node-RED. And for me stuff like BlueIris means it is a completely separate system. I need some extra time to digest that idea :yum:.

I had never heard of CodeProject.AI before. Sounds interesting. I assume BlueIris is calling that module, instead of you calling it directly? Or are the MQTT messages coming directly from the AI container perhaps? From what I just read, you can also install that container separately from BlueIris?

Do you somehow show the video (with the bounding boxes around the detected persons) in your Node-RED dashboard?

I did a demo here: Migrating ui template from Dashboard-1 to Dashboard-2 - #8 by Steve-Mcl that @krambriw ran with (and improved) you might be able to use as a basis.


Hi Bart,

As you have learned with your video surveillance in NR :frowning:, it takes a lot of CPU resources to manage, stream and analyse several cameras. That is one of the reasons that @craigcurtin is using a Coral USB (dedicated to video processing) to handle the data load with Frigate.

My video setup:

  • 1 x second-hand Windows laptop (16GB / 256GB SSD) with an nVidia GPU (was about $150 last year)
  • I only use the BlueIris app to configure/change the settings
  • I have customised the MQTT per camera feed, so I get dedicated messages into NR
  • each MQTT message contains the alert information, including the JPG and a file link to my NAS where the respective video stream is stored
  • as mentioned in my earlier post, I show the alert JPG in NR-Db2
  • when I click on the respective JPG, a new browser window opens to play the video from the NVR
  • the alert JPG is also sent to my TELEGRAM account, so I can see all alerts when I'm away
  • you can run BlueIris with or without CodeProject.AI, as it already has built-in capabilities :wink: I just liked to play with this option :slight_smile:
  • yes, BlueIris operates autonomously and will call CodeProject.AI directly
  • yes, I have installed CodeProject.AI in Docker containers and on the main BlueIris machine to distribute the load
  • BlueIris will manage all your MQTT messages, which you can configure freely per video stream
  • additionally, BlueIris can generate HTTP video live streams, which you can easily use with ui-template and an iFrame (see the sketch below this list)
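As a sketch of that last point (server address, web server port and camera short name are placeholders - BlueIris exposes an MJPEG stream per camera on its web server, but check the path for your install):

```html
<!-- ui-template sketch: a BlueIris MJPEG live stream in an iframe.
     Server, port and camera short name are placeholders. -->
<template>
  <iframe src="http://blueiris.local:81/mjpg/frontyard/video.mjpg"
          style="width:100%; height:480px; border:0;"></iframe>
</template>
```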

@Steve-Mcl
Your flow was already a great help to me some time ago, as a starting point for my node-red-dashboard-2-ui-video node. Although I am stuck with that node, because I lack the free time to handle all the issues I registered in that repo...

But the problem here is that I don't know how Frigate publishes live streams. The answer I got from their team is that I need to get an RTSP stream from their (multicasting) Go2RTC module. But then I need to convert it myself - via FFmpeg - to HLS, which is exactly 100% the same as what I have now, except that I read the RTSP streams from my cameras. In that case it somehow feels like putting Goliath next to David :thinking:

Summarized: I failed to create a full Node-RED video surveillance system due to this stuff:

  1. The Tfjs project was abandoned by Google. If I look on GitHub at the maintainers' profiles, it looks like they are still doing TensorFlow stuff, but not in JavaScript anymore. I had written all the JS tensor-handling code to do object detection on a Coral TPU stick with very low CPU usage, but I needed some stuff fixed by the Tfjs team. So I am already convinced that we will never be able to do this kind of stuff in JavaScript anymore, which means it needs to be done externally (e.g. in a Docker container).
  2. Drawing bounding boxes around detected objects is VERY CPU intensive in JavaScript. I found some decent libraries, but that means calling C++ DLLs. Although npm wrapper libraries are available, it would be better if the object detection system could do the bounding box drawing as well, because then you only need to install one external system.
  3. I had already developed a node to capture RTSP streams in Node-RED, but I had no time to figure out why I couldn't grab images from the stream without high CPU usage. I thought I was extracting the I-frames correctly, but it seems somehow not. When you install an external system, it makes more sense to have that system capture the RTSP streams itself, since they have more knowledge about how to do that decently.
  4. I developed a node to clean up folders based on some parameters (e.g. file retention period, ...), but if you have an external system, it has no added value to let Node-RED do that.

So my brain is prepared to do that stuff outside of Node-RED. We simply lack contributors to do this kind of stuff. However, it would be nice if Node-RED somehow stayed in control: it could be informed about events (e.g. person detected) from the external system via MQTT. And it would be nice if the external system made streams (live and recorded) and snapshot images available, so we can show them in our dashboard. Then we do all the heavy lifting outside of Node-RED, and we only need to make sure we have some widgets in dashboard D2:

  • To display such streams
  • To select recordings, e.g. via a timeline
  • ...

And until this is ready, we can still use the third party user interface.

Hi Bart
As mentioned elsewhere, I also use Frigate to handle my video. Like others, I run it in Docker. For me it was easy to set up, and once the very basics were going I could edit most of the settings via the web app. No need to edit the YAML again.

I currently only run one camera and that does consume about 20% CPU, but it seems steady and has never crashed (famous last words).

I have configured various zones so that it ignores cars on the road, but detects people loitering for more than 10 seconds, and detects people or cars on the drive at any time.
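For the curious, those zones live in the same YAML config. A sketch with placeholder coordinates (and note the loitering option only appeared in newer Frigate releases - check the reference config for your version):

```yaml
# Zone sketch - coordinates are placeholder x,y polygon points
cameras:
  drive_cam:
    zones:
      road:                    # events here get filtered out downstream
        coordinates: 0,400,1920,400,1920,0,0,0
      pavement:
        coordinates: 0,500,1920,500,1920,400,0,400
        loitering_time: 10     # only counts after ~10s in the zone
        objects:
          - person
      drive:                   # people or cars here count immediately
        coordinates: 0,1080,0,500,1920,500,1920,1080
        objects:
          - person
          - car
```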

I use the direct RTSP feed from the camera to also feed my Node-RED dashboard, but to be honest I rarely look at the live stream, and just use the Frigate web app to review any events.

I use the MQTT "firehose" into Node-RED, highly filtered, to create alerts to my phone using Pushover, which include the "recognition" image that caused the alert, with a link back to the live system if I want to take a peek.
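In case it helps anyone copy the idea: Frigate also publishes a ready-made snapshot per detection on frigate/&lt;camera&gt;/&lt;object&gt;/snapshot, so the filtered alert can be as small as one mqtt-in node plus a function node in front of a Pushover node. A sketch (the outgoing message properties your particular Pushover node expects may differ - check its docs):

```javascript
// Function node between an mqtt-in node (topic "frigate/+/person/snapshot",
// output "a Buffer") and a Pushover node such as node-red-contrib-pushover.
// The outgoing property names are assumptions for illustration.
const camera = msg.topic.split("/")[1];   // frigate/<camera>/person/snapshot

return {
    topic: `Person on ${camera}`,   // notification title
    payload: "Person detected",     // notification body
    image: msg.payload              // the jpeg Buffer from Frigate
};
```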


Ah OK, so if you only install the AI container, you - of course - don't have stuff like the live HTTP streams. Which means you need to use all their stuff or nothing at all. OK, thanks for giving such a detailed insight into your system! It really makes sense the way you do it already. My old brain needs to digest the information :yum:

Hey Dave,
You mean that you need to set it up once via YAML, and maintain it afterwards via the UI? Or can you do a setup entirely without YAML?

Isn't 20% CPU a lot? Just asking because I received Reolink outdoor cameras under 3 different Christmas trees. So if I start using those, that would mean I would end up above 100% :thinking:

btw... I'm currently running 8 camera feeds (6 with 24h recording) on my old laptop with BlueIris and it uses max 30% of my CPU (i7 8th gen) :slight_smile:

Yes, I do have live HTTP streams from all cameras by default via BlueIris and can watch them via NR.

As said, the heavy lifting (video analysis) is done by BlueIris, and I watch the alerts and live streams via NR :slight_smile: or otherwise on TELEGRAM.


It "looks" to me at first sight that BlueIris is a windows program. But I would to like to run a Docker container. On the other hand their codeproject.AI module is a Docker container, and this video on Youtube shows how to use it from Node-RED to detect objects in images. He uses the node-red-contrib-deepstack nodes, because the codeproject.AI container offers the Deepstack API (since BlueIris seemed to be dependent previously on DeepStack, before that was abandonned...).

When you look at the code of that deepstack node, you will see that it sends the images via HTTP to the CodeProject.AI Docker container. This means you need to extract the images from the camera stream, and do expensive HTTP calls (which involve a lot of JS code execution) for every image:
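Stripped down, each call boils down to something like this sketch (the default CodeProject.AI port and the response shape are taken from its DeepStack-compatible API docs, but verify against your install; depending on your Node-RED version you may need an http request node instead of fetch inside a function node):

```javascript
// Sketch of what the deepstack node does per image: one multipart HTTP
// POST per frame to the DeepStack-compatible detection endpoint.
// Port 32168 is CodeProject.AI's default - adjust for your install.
// Needs Node.js 18+ for the global fetch/FormData/Blob.
const form = new FormData();
form.append("image", new Blob([msg.payload], { type: "image/jpeg" }), "frame.jpg");

const res = await fetch("http://127.0.0.1:32168/v1/vision/detection", {
    method: "POST",
    body: form
});

// Response per the DeepStack API: { success, predictions: [{ label,
// confidence, x_min, y_min, x_max, y_max }, ...] }
msg.payload = await res.json();
return msg;
```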

That is all nice for a few images, but not for live object detections...

Based on this info, doing only the object detection part outside of Node-RED does not seem very realistic to me (unfortunately). The external system should also handle the RTSP streams, take care of the object detection, and push the results to Node-RED via MQTT...

So the mechanisms shared by others above seem to be the optimal way to handle this. Damn, I had hoped to do at least a little bit more of the video stuff inside Node-RED :yum:

EDIT: I have updated my image above so people can more easily understand why it seems to be a rather CPU-intensive setup when only the object detection is delegated to the CodeProject.AI Docker container.

In recent versions, the Frigate UI has made tremendous progress. Redoing all of it in the Node-RED dashboard is certainly doable, but for me it can't be described as "some widgets"...

Frigate is quite CPU intensive (if used without a Coral TPU), as it provides good object recognition, e.g. car, bicycle, person...
For example, during working days, if I am coming home on my bike (so person + bike detected), then my garage door is opened so I can park my bike.
Detection is Frigate. MQTT message is sent.
Automation is Node-RED.

If motion detection is sufficient, then Agent DVR is a good alternative.
Everything is configured through the UI. No YAML :wink:

A Docker version is available.

For me, BlueIris is a no-go. Running something 24/7 on Windows is not my cup of tea...


Fair enough!
Let's see how it goes in the future...

Like you English-speaking people say: the proof of the pudding is in the eating. Which means the first job is to find my Coral TPU stick. After Google ditched their Tfjs project, I threw it away somewhere in my boxes of unused stuff...

If you already have a Coral stick, then Frigate is the "right" choice...
The first setup is a bit painful, but unless you change your cameras every week, it shouldn't change much.
And you can use How to connect to network and IP cameras to find the RTSP stream to use.
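For the Reolink cameras mentioned above, the main stream typically looks something like this (placeholder credentials/IP, and the path varies per model and firmware, so check yours):

```
rtsp://user:password@192.168.1.20:554/h264Preview_01_main
```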

FYI, I'm not English and English is not my mother tongue either :slight_smile:


I am VERY interested in the last (missing) part of that sentence :joy:

Typed enter too quickly :blush:


Hi. Re YAML: in my case I did hand-create the initial file. No idea if I had to - maybe there was no need, but all the docs seemed to mention the file and nothing about the web page. Once the initial system was up and running, it was easy via the web page for most things. You can then hand-tweak if needed, once you understand what is going on.


There is a great doc page on the Frigate config YAML options here: Full Reference Config | Frigate. But, as it says, only use it for reference. Do not copy it as-is.


I think some may find this useful when needing to use YAML. It has direct comparisons between YAML and JSON data structures.

http://thomasloven.com/blog/2018/08/YAML-For-Nonprogrammers/


Indeed, we have a yaml node that will convert one to the other and back.
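For example, feeding a YAML string through the core yaml node gives you the equivalent JavaScript object (and an object in gives you YAML out):

```yaml
# msg.payload in (YAML string):
mqtt:
  host: 192.168.1.10
cameras:
  - front_door

# msg.payload out of the yaml node (as JSON):
# {"mqtt":{"host":"192.168.1.10"},"cameras":["front_door"]}
```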