AI enhanced video security system with Node-RED controller/viewer

I've finally got my code up on GitHub, with documentation that should help with the initial setup and some sample images in the Wiki.

This is the evolution of: Security Cameras with AI Person Detection

All the code can be downloaded from:
Python AI-Person-Detector with node-red controller/viewer

If you give it a try and have any issues, raise them on the GitHub and together we will improve the Wiki. This current code has been running for about six weeks now on an i7-4500U "Mini PC", so I think the Python code is pretty solid. With a single Coral TPU it gets ~39 fps while servicing fifteen 3 fps rtsp streams.
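
The many-cameras-to-one-accelerator architecture described above can be sketched with capture threads feeding a shared queue that a single inference thread drains. This is a minimal illustration, not the project's actual code: the camera URLs are placeholders and the real system would use OpenCV `VideoCapture` loops and a `run_tpu_detection()` call where noted.

```python
import queue
import threading

RTSP_URLS = [f"rtsp://cam{i}/stream" for i in range(15)]  # hypothetical URLs

frame_q = queue.Queue(maxsize=30)  # shared queue feeding the single accelerator

def capture(url, n_frames=3):
    # Stand-in for a VideoCapture loop reading ~3 fps from one camera.
    for i in range(n_frames):
        frame_q.put((url, f"frame-{i}"))

def inference_worker(results):
    # One thread owns the TPU; it services frames from all cameras
    # in arrival order, so no camera needs its own accelerator.
    while True:
        item = frame_q.get()
        if item is None:          # sentinel: shut down cleanly
            break
        url, frame = item
        results.append((url, frame))  # run_tpu_detection(frame) would go here

results = []
worker = threading.Thread(target=inference_worker, args=(results,))
worker.start()

capturers = [threading.Thread(target=capture, args=(u,)) for u in RTSP_URLS]
for t in capturers:
    t.start()
for t in capturers:
    t.join()

frame_q.put(None)
worker.join()
print(len(results))  # 15 cameras x 3 frames = 45
```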

The node-red flow json file is simply a starting point; I've expanded it for my system.

I've tested on Ubuntu 16.04 and Ubuntu 18.04 PCs, Jetson Nano, Raspberry Pi 3B & 4B, Odroid XU4, Google Coral Development Board, and Atomic Pi. It supports NCS/NCS2, CPU-only AI (not useful on CPUs without AVX instructions), and the Coral TPU. I strongly recommend the Coral TPU.


Hi @wb666greene,
Thanks for sharing your knowledge. I will start playing with this sometime in 2020. Is there a recommended hardware list, since I assume you have experimented a lot (acceleration sticks, boards, ...)?

Yes, please share your performance review when using a RPi 4.


Some performance results from the system on IoT-class machines:

  • Using 5 720p netcams with Onvif snapshots.

      • Pi3B+ running Raspbian Stretch:
        • NCS v1 SDK ~6.5 fps
        • 2 NCS v1 SDK ~11.6 fps
        • NCS OpenVINO ~5.9 fps
        • 2 NCS OpenVINO ~9.9 fps
        • NCS2 OpenVINO ~8.3 fps
      • Odroid XU-4 running Ubuntu Mate-16.04:
        • NCS OpenVINO ~8.5 fps
        • 2 NCS OpenVINO ~15.9 fps
        • NCS2 OpenVINO ~15.5 fps
  • Using 1080p (HD, 3 fps) and 4K (UHD, 5 fps) rtsp streams.

      • Pi4B running Raspbian Buster:
        • 4 HD: ~11.8 fps (basically processing every frame)
        • 5 HD: ~14.7 fps (basically processing every frame)
        • 6 HD: ~15.0 fps, -d 0 (no display) ~16.7 fps
        • 8 HD: ~11.6 fps, -d 0 ~14.6 fps
        • 1 UHD: ~4.6 fps (basically processing every frame)
        • 2 UHD: ~4 fps very high latency makes it useless (Pi4B struggles with decoding 4K streams)
  • Using 3 fps HD and UHD rtsp streams.

      • Jetson Nano running Ubuntu 18.04:
        • 5 UHD (4K) : ~14.6 fps (effectively processing every frame!)
        • 5 UHD 3 HD : ~10.3 fps, jumps to ~19.1 fps if -d 0 option used (no live image display)
        • 4 UHD 4 HD : ~16.3 fps, ~22.5 fps with -d 0 option
        • 5 UHD 10 HD (1080p): ~4.4 fps, ~7.6 fps with -d 0 option (totally overloaded; I get ~39 fps when running on the i7-4500U MiniPC)

Note that there are two versions of the Python code: one supports all the AI options -- CPU, NCS/NCS2, & Coral TPU -- while the other only supports using the Coral TPU for the AI.
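
As a quick sanity check on those numbers, the aggregate frame budget of the fifteen-camera setup can be compared with the measured throughput:

```python
# Aggregate frame budget vs. measured throughput, using figures from the post.
streams = 15
fps_per_stream = 3
offered = streams * fps_per_stream     # 45 frames/s arriving from the cameras
measured = 39                          # ~39 fps on the i7-4500U with one Coral TPU
coverage = measured / offered          # fraction of offered frames actually processed
print(f"{offered} frames/s offered, {coverage:.0%} of frames processed")
```

So a single TPU on the MiniPC keeps up with the vast majority of the offered frames, and dropping the occasional frame from a 3 fps stream is harmless for person detection.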

Right now, if you are looking to buy, I'd get the Jetson Nano and the Coral TPU; you are looking at spending ~$200.

The Wiki on the GitHub has some notes about cameras, POE, and DVRs, but it's hard to make firm recommendations because many of the cameras I have used are "not currently available" on Amazon, and others, when I used "Order Again", arrived with very different firmware despite being nominally the same item.

For an AI host less powerful than a Pi4B, Onvif snapshot cameras are the best choice, but very few camera models deliver their full resolution as Onvif snapshots; most (as do DVRs) only give lame D1 snapshots.

The 720p and 1080p cameras I have that do give full-resolution snapshots are "no longer available". It's frustrating that I've found some 4K cameras that give the full 4K snapshot but only appear to do so on rtsp "keyframes", meaning only one snapshot every second or two.
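
If you want to verify what resolution a camera's snapshot URL actually returns, you can read the frame size straight out of the JPEG bytes. This is a minimal sketch that only walks the marker segments looking for SOF0-SOF3 (where baseline and progressive JPEGs store the frame dimensions); the sample bytes below are a synthetic header, not a real camera snapshot:

```python
def jpeg_dimensions(data: bytes):
    """Return (width, height) from JPEG bytes, or None if no SOF segment is found."""
    i = 2  # skip the SOI marker (FF D8)
    while i + 9 < len(data):
        if data[i] != 0xFF:
            return None  # lost sync with the marker stream
        marker = data[i + 1]
        seg_len = int.from_bytes(data[i + 2:i + 4], "big")
        if marker in (0xC0, 0xC1, 0xC2, 0xC3):  # SOF0..SOF3 carry the frame size
            height = int.from_bytes(data[i + 5:i + 7], "big")
            width = int.from_bytes(data[i + 7:i + 9], "big")
            return width, height
        i += 2 + seg_len  # hop to the next marker segment
    return None

# Minimal synthetic header: SOI followed by an SOF0 segment claiming 1920x1080.
sample = bytes([0xFF, 0xD8, 0xFF, 0xC0, 0x00, 0x11, 0x08,
                0x04, 0x38, 0x07, 0x80]) + bytes(20)
print(jpeg_dimensions(sample))  # (1920, 1080)
```

Fetching the camera's snapshot URL and running the bytes through this lets you spot a "4K" camera that is really handing back D1 images.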

D1 images certainly work, and it's what I started with, but better resolution makes for a much easier and quicker decision to ignore or act when a push notification comes in. Also, unexpectedly, with MobilenetSSD-v2_coco the sensitivity of detection clearly seems to improve with image resolution, at least up to 4K. With MobilenetSSD-v1, to get "good" sensitivity with some 4 & 5 Mpixel cameras I had access to, I needed to "tile" the image into quadrants, which of course reduced the frame rate to 1/4.
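
The quadrant tiling is just coordinate bookkeeping: split the frame into four tiles, run detection on each tile, then translate the boxes back to full-frame coordinates. A sketch of that bookkeeping (the detection step itself is omitted, and the frame size is an example):

```python
def quadrant_tiles(width, height):
    """Return (x, y, w, h) for the four tiles a frame is split into."""
    w2, h2 = width // 2, height // 2
    return [(x, y, w2, h2) for y in (0, h2) for x in (0, w2)]

def remap_box(box, tile_origin):
    """Translate a detection box from tile coordinates to full-frame coordinates."""
    x1, y1, x2, y2 = box
    ox, oy = tile_origin
    return (x1 + ox, y1 + oy, x2 + ox, y2 + oy)

tiles = quadrant_tiles(2560, 1920)  # e.g. a 5 Mpixel frame
# A hypothetical detection found in the bottom-right tile:
box = remap_box((100, 50, 300, 400), tiles[3][:2])
print(tiles)
print(box)  # (1380, 1010, 1580, 1360)
```

Since each frame now costs four inferences instead of one, the effective frame rate drops to a quarter, which is the trade-off mentioned above.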

I think this is a "game changer" for home security, and I'd like to make it more accessible for "normal" do-it-yourselfers; right now you pretty much need your "computer geek card".

It's inevitable that this will find its way into cameras and security DVRs, but right now IMHO it's premature, as the AI models and co-processors are changing fast. For example, the NCS performed pretty well with MobilenetSSD-v1, but it struggles with MobilenetSSD-v2_coco, which in my experience is a much superior AI. I look forward to what Intel does with the NCS3 that is supposed to come out "soon"; they are way behind the Google TPU at the moment, at least for the AI models that I've tried.

Please, if you try it and have problems, raise issues about the Python code or software installation on the GitHub, and ask questions about the Node-RED flows here.

The Node-RED flow is what makes the Python code (which can be treated mostly as a black box) useful for adding to your existing home automation/security system.
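
For illustration, a detection event handed from the Python side to the Node-RED flow might look like the JSON below. This is a hypothetical payload shape for the sketch only; the actual project defines its own topics and fields:

```python
import json

# Hypothetical detection message -- field names and values are made up
# to show the kind of metadata Node-RED would act on.
detection = {
    "camera": "DriveWay",
    "time": "2019-12-01T14:03:22",
    "confidence": 0.87,
    "box": [412, 118, 505, 344],        # x1, y1, x2, y2 in image pixels
    "image": "detections/DriveWay.jpg",  # example path to the saved frame
}
payload = json.dumps(detection)
print(payload)
```

On the Node-RED side, a JSON node parses the payload and the flow can then decide whether to send a push notification, record video, or drop the event.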


I made a video of the system running on my development hardware, showing 15 rtsp streams (7 UHD, 8 HD) at 3 fps each, along with my dashboard controller/viewer in a Chrome window.

AI system in action.

The Python AI is using both the Coral TPU and a Movidius NCS2. The "retrograde" motion sometimes observed happens because the TPU runs at a higher fps (the TPU can do the full 45 fps on its own), so sometimes two TPU frames are completed before an NCS frame that should have been the "frame in the middle"; the NCS's processing latency is longer, so its frame is displayed "late".
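
That out-of-order display could be eliminated by buffering completed frames and releasing them in sequence order. A minimal sketch with a heap (the sequence numbers are an assumption; the project as described simply displays frames as they complete):

```python
import heapq

def reorder(results):
    """Yield (seq, result) pairs in sequence order, buffering late arrivals."""
    heap, next_seq = [], 0
    for seq, res in results:
        heapq.heappush(heap, (seq, res))
        # Release everything that is now contiguous with the last emitted frame.
        while heap and heap[0][0] == next_seq:
            yield heapq.heappop(heap)
            next_seq += 1

# The fast TPU finishes frames 0 and 2 before the slower NCS returns frame 1:
arrivals = [(0, "tpu"), (2, "tpu"), (1, "ncs"), (3, "tpu")]
print(list(reorder(arrivals)))  # frames come out 0, 1, 2, 3
```

The cost is that display latency grows to that of the slowest accelerator, which is why showing frames as they complete is a reasonable choice for a live view.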

This video is a "side effect" from a long running test. I believe the TPU is clearly the best choice at present.

While both are running MobilenetSSD-v2_coco, the model is "compiled" differently for each AI stick. The NCS (and OpenVINO CPU) AI have some static-object (tree, bushes, etc.) false detections that I've never seen with the TPU. This test is to confirm the difference and see if the TPU is really immune or not. These are transients that happen "when the light is just right", so it can take weeks to months to observe them again, as they tend to come in "bursts" in a certain time window and weather conditions. So the conclusion is TBD.

The Node-RED filter function node now has a skeleton to remove these static-object detections based on the AI "box points".
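
The idea behind that filter, sketched here in Python rather than as a Node-RED function node: drop any detection whose box lies mostly inside a known static false-detection zone. The zone coordinates and threshold below are made-up examples:

```python
def overlap_ratio(box, zone):
    """Fraction of the detection box's area covered by a static zone."""
    x1, y1 = max(box[0], zone[0]), max(box[1], zone[1])
    x2, y2 = min(box[2], zone[2]), min(box[3], zone[3])
    if x2 <= x1 or y2 <= y1:
        return 0.0  # no overlap
    inter = (x2 - x1) * (y2 - y1)
    area = (box[2] - box[0]) * (box[3] - box[1])
    return inter / area

# Hypothetical "tree that triggers the NCS" region for one camera.
STATIC_ZONES = [(400, 100, 520, 360)]

def keep_detection(box, threshold=0.8):
    """Keep the detection unless it sits mostly inside a static zone."""
    return all(overlap_ratio(box, z) < threshold for z in STATIC_ZONES)

print(keep_detection((410, 120, 500, 340)))  # inside the tree zone -> False
print(keep_detection((50, 200, 150, 450)))   # elsewhere in the frame -> True
```

A real person walking past the tree only partially overlaps the zone, so the 80% coverage test lets genuine detections through while suppressing the repeating static-object transients.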

