Standalone Raspberry Pi Security Camera with AI "person detection"

A little note: beware of the screen saver. Even when running headless, the screensaver kicks in. I noticed this because the temperature was suddenly 80 degrees Celsius. I set it to blank-screen mode and now it is back to a normal 45 degrees again.

1 Like

Hey guys,

Interesting !!!

Have you already had a look at the series of new Node-RED contributions for AI?


They are all based (via the node-red-contrib-model-asset-exchange node) on the Model Asset Exchange, which seems to be a collection of open-source deep-learning models. You can run them on the cloud platform or install the services as Docker/Kubernetes containers on your own hardware at home.
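If you'd rather call one of those local MAX containers from Python instead of Node-RED, a small helper can filter the detector output down to person hits. This is only a sketch: the port (5000), the `/model/predict` endpoint, and the `predictions`/`label`/`probability` response layout are assumptions based on the MAX Object Detector documentation, not something tested here.

```python
# Filter a Model Asset Exchange object-detector response down to
# "person" detections. The response layout ({"predictions": [{"label":
# ..., "probability": ...}, ...]}) is assumed from the MAX docs.

def person_detections(response_json, min_conf=0.7):
    """Return only the 'person' predictions at or above min_conf."""
    return [p for p in response_json.get("predictions", [])
            if p.get("label") == "person"
            and p.get("probability", 0.0) >= min_conf]

# Typical use against a locally running container (assumed endpoint):
#   import requests
#   with open("snapshot.jpg", "rb") as f:
#       r = requests.post("http://localhost:5000/model/predict",
#                         files={"image": f})
#   people = person_detections(r.json())
```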

P.S. I haven't tried it yet ...

Bart

2 Likes

I'm not interested in "cloud" solutions, YMMV, but Node-RED's single-threaded event loop will be a problem when using multiple cameras. To my mind four cameras is the starting point (90 degrees of coverage per camera); large or oddly shaped properties could easily require four times that for full coverage.

But I might look into it after I get my code up on GitHub, as a quicker way to see if "better" models than MobileNet-SSD can get more accurate results with images from real security cameras -- the high camera angles can be a problem for some of the face detection models I've tried. Also, performance with IR illumination is really important.

I had to do some benchmarking/comparison...

The time-consuming part of the DNN analysis is this call (in Python):
detections = net.forward()

Now, running exactly the same code with the same pictures:

  1. Raspberry Pi3: takes 6.5 seconds
  2. Odroid XU4: takes 1.2 seconds
  3. My laptop: takes 0.12 seconds

So my laptop is still ten times faster, and it is far from the fastest you can get: just an old Intel Core i5-4210U @ 1.70GHz x 2.

Anyway, the Odroid is a huge improvement over running the analysis on a Pi.
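For anyone who wants to reproduce numbers like these, the timed call can be wrapped in a small harness. A sketch: `net` is whatever `cv2.dnn` loader you use (the MobileNet-SSD file names in the docstring are illustrative), and a warm-up run is excluded because the first `forward()` is often much slower.

```python
import time

def benchmark_forward(net, blob, n_runs=20):
    """Average wall-clock seconds for one net.forward() call.

    `net` is anything with setInput()/forward(), e.g. the result of
    cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                             "MobileNetSSD_deploy.caffemodel"),
    and `blob` comes from cv2.dnn.blobFromImage().
    """
    net.setInput(blob)
    net.forward()                       # warm-up run, excluded from timing
    start = time.monotonic()
    for _ in range(n_runs):
        detections = net.forward()      # the call being benchmarked
    return (time.monotonic() - start) / n_runs
```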

1 Like

Indeed, I'm also NOT interested in cloud solutions. But you can download their Docker container and start it on your local server at home. No installation issues; you just have to train it. So you have your own local services running at home, as many as you like... I don't know why their Docker containers don't contain a trained system, because then you would be up and running with a new service in a few minutes...

Much as I love it, Node-RED is not the answer to everything. If it doesn't work in NR or it is harder there, use a different tool for that part of your solution.

I often liken it to digital duct tape - you can build amazing things with it (hopefully) - though maybe not everything - but then again... https://www.popularmechanics.com/home/interior-projects/how-to/g1058/5-unusual-things-to-make-with-duct-tape-15008249/

4 Likes

I'm hoping the XU4 lets me get the full ~11 fps the NCS is capable of (a Pi3B+ does ~6.5 fps, and adding a second NCS only gets it to ~11.5 fps). If OpenVINO runs on it, I want to see what the NCS2 can do. I've got some OpenVINO test code where the NCS2 does ~30 fps on an i5-4200U, and 2 NCS sticks get ~22 fps. Mixing an NCS and an NCS2 seems to slow things down (or I have a bug to squash, TBD).

I've put this aside until I put my existing NCS v1 SDK / CPU-only dnn code up on GitHub. Getting close; one last main feature to add: allowing a combination of Onvif http jpeg snapshot cameras and rtsp streaming cameras to be mixed (currently it's either/or), and allowing two sets of rtsp cameras. One set uses round-robin sampling, where a single thread scans all the cameras on a single base IP address (for things like the cameras attached to my Lorex security DVR); the other uses one thread per stream, where each camera is on its own IP address. One thread per camera is needed to keep the whole thing going in case of camera errors: I can unplug a snapshot camera and it resumes when the camera is plugged back in. I haven't tried this yet with the rtsp camera streams.
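The one-thread-per-camera idea can be sketched like this (hypothetical helper names; the real code on GitHub differs). Each thread keeps grabbing frames, quietly retries when its camera disappears, and drops frames when the AI thread falls behind, so one dead camera never stalls the rest.

```python
import queue
import threading
import time

def camera_thread(grab_frame, frame_q, stop_event, retry_delay=5.0):
    """One sampling thread per camera.

    grab_frame() returns one frame, or raises when the camera is
    unplugged/offline; on failure we sleep and retry, so the thread
    survives and resumes when the camera comes back. A full queue
    means the AI thread is the bottleneck, so we drop the frame
    rather than block.
    """
    while not stop_event.is_set():
        try:
            frame = grab_frame()
        except Exception:
            time.sleep(retry_delay)     # camera offline -- retry later
            continue
        try:
            frame_q.put_nowait(frame)
        except queue.Full:
            pass                        # AI thread is behind; drop it

# Typical wiring: one shared queue and stop event for N cameras, each
# started with threading.Thread(target=camera_thread, args=(...), daemon=True)
```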

I think MythBusters set the bar pretty high for creative uses of duct tape :slight_smile:

Just noted the below. It means it should now be possible to run it on the pre-installed system that came on the eMMC, with no need for re-flashing.

1 Like

I've released my "final" version of the Python code on GitHub:
https://github.com/wb666greene/AI_enhanced_video_security/blob/master/README.md

While this thread has been most useful to me, it's become unwieldy, so let's move any further discussion to this new thread here:
https://discourse.nodered.org/t/final-version-of-security-cameras-with-ai-person-detection/9510/2

I finally got around to it.

This modifies the cmake files to compile OpenVINO; apparently at this point Intel only supports Python 3.5 on the Pi and doesn't support compiling on ARM.

Compiling OpenVINO on various Intel CPUs, I've found it can produce incompatible binaries: building on an i5-4200M segfaults when the binary is installed on an i5 M540, but it works when compiled there. The CPU module shows major performance differences; the minor MYRIAD performance differences are easily accounted for by the old M540 lacking USB3.

My pre-flashed eMMC came with Mate18, which only has Python 3.6. So I made an Odroid Mate16 SD card and was able to get it to run the Pi binaries by hacking the setupvars.sh script so it acted as if it were running on a Pi. I also had to upgrade to gcc-6 to get the required libraries.

Having got this working, and not having the eMMC writer dongle :frowning:, I tried getting it going on Mate18 so as not to have wasted the money on the eMMC :slight_smile:

I basically did the same hack on setupvars.sh; the correct gcc-6 libs were already installed, but I had to "side load" Python 3.5.2 and set up a Python virtual environment. It is working fine now.

With the OpenCV 4.0.1_openvino build Intel provides, it was trivial to port this code (and your dnn module sample) to use OpenVINO, with the information in this tutorial:

Running my GitHub released code with the minor changes required, gave me the following results:

Using 5 Onvif snapshot netcams.
Pi3B+:
   NCS v1 SDK ~6.5 fps
   2 NCS v1 SDK ~11.6 fps
   NCS OpenVINO ~5.9 fps
   2 NCS OpenVINO ~9.9 fps
   NCS2 OpenVINO ~8.3 fps

Odroid XU-4:
  Mate16:
    NCS OpenVINO ~8.5 fps
    2 NCS OpenVINO ~15.9 fps
    NCS2 OpenVINO ~15.5 fps
  Mate18:
    NCS2 OpenVINO ~14.6 fps

So the NCS2 looks to be almost 2X faster for this application. And the Odroid XU-4 is almost 2X faster than the Pi3B+ with the NCS2.

But a decent i5-4200M is faster than the ARM processors with one CPU AI thread, and about as fast as one NCS2 thread on the i5: ~20 fps. Using 1 NCS2 and 1 CPU thread together makes little difference.

So if you are looking to set up such a system, I think a "refurb" i5 4200 or better laptop or "compact desktop" could be more cost effective than a Pi or Odroid plus an NCS stick. Power and size issues could tilt the balance: my i5 system has a 60W power supply, the Odroid a 20W, and the Pi3B+ a 12.5W.
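For reference, the "minor changes" for the OpenVINO port mostly come down to two calls on the cv2.dnn network. A sketch against the Intel-provided OpenCV build; the helper name `use_openvino` is mine, but the backend and target constants are the real cv2.dnn ones:

```python
def use_openvino(net, target="MYRIAD"):
    """Route a cv2.dnn network through the OpenVINO Inference Engine.

    target "MYRIAD" runs inference on an NCS/NCS2 stick, "CPU" on the
    host CPU. Requires an OpenVINO-enabled OpenCV build (e.g. Intel's
    4.0.1_openvino) and a sourced setupvars.sh environment.
    """
    import cv2  # must be the OpenVINO-enabled build

    net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
    targets = {
        "MYRIAD": cv2.dnn.DNN_TARGET_MYRIAD,  # NCS / NCS2 stick
        "CPU": cv2.dnn.DNN_TARGET_CPU,
    }
    net.setPreferableTarget(targets[target])
    return net
```

After this, the rest of the code (blobFromImage, setInput, forward) is unchanged, which is why the port from the CPU-only dnn version was trivial.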

And of course Intel just released a new version of OpenVINO a few weeks ago; it's not clear yet whether any of the improvements apply to the Python API or the ARM support.
https://software.intel.com/en-us/blogs/2019/04/02/improved-parallelization-extended-deep-learning-capabilities-in-intel-distribution

1 Like

Hey have you tried a Vizi-AI https://www.hackster.io/mark93/computer-vision-powered-disco-ff90f3#schematics