Standalone Raspberry Pi Security Camera with AI "person detection"

Nice, mine will arrive on Monday with FedEx. I also ordered some accessories at the same time: a case, a 5V/4A power supply, an RTC backup battery, a 16GB eMMC module and an eMMC USB writer.
This will be fun

Darn, I wish I'd noticed the eMMC writer!

I debated about the case, but then I figured it'd keep my hands out of the fan :slight_smile: Good thing I got it, as the eMMC mounts on the bottom of the board -- not sure how it'd stand up to sitting on a test bench unprotected. The power supply was a no-brainer: priced right, and I didn't have a 5V/4A supply, only a 2.4A one for the Pi.

I noticed a cheap USB-to-audio dongle so I ordered one. I use espeak-ng to announce my AI detections when we are home, so I knew the DHL guy was there before he rang the doorbell (signature required).

My audio wire is on the long side and picks up a bit of hum; with this dongle I can use a Pi Zero W to speak the audio with espeak-ng and a Node-RED MQTT interface, and remove about 30' of audio cable.
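
Something like this is all the Pi Zero W end would need -- a minimal sketch (the broker address and topic name are placeholders, and it assumes paho-mqtt 1.x and espeak-ng are installed):

```python
#!/usr/bin/env python3
# Minimal sketch: subscribe to an MQTT topic and speak whatever text
# arrives, using espeak-ng through the USB audio dongle.
# Broker address and topic name are placeholders -- adjust to your setup.
import subprocess
import paho.mqtt.client as mqtt

MQTT_BROKER = "192.168.1.10"   # hypothetical broker address
MQTT_TOPIC = "ai/announce"     # hypothetical topic published from Node-RED

def on_connect(client, userdata, flags, rc):
    client.subscribe(MQTT_TOPIC)

def on_message(client, userdata, msg):
    text = msg.payload.decode("utf-8", errors="ignore")
    subprocess.run(["espeak-ng", text])   # blocks until the phrase finishes

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(MQTT_BROKER, 1883, 60)
client.loop_forever()
```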

Do you have a link to that dongle?

It's in the link krambriw posted for the Odroid, in the other products section where the eMMC modules, cases, power supplies etc. are. I still had it in my browser history:

USB audio adapter

It was like $3.90, but since it's sent straight from Korea it won't be such a good deal after shipping if that is all you order.

I haven't actually tried it yet, so I won't give it a ringing endorsement. The packaging was pretty much "white envelope"; it says 5HV2 USBsoundcard in small print on the dongle body.

The "little toy" arrived today. I think the performance is rather impressive, even as a desktop computer it could do for "normal" office work it seems at first glance. It's getting a bit hot when a monitor is attached (I have the heatsink version, no fan) but it will in my case run headless. Already noticed that the heatsink is not getting that hot anymore once monitor is disconnected


I thought I should start installing OpenCV on the Odroid. I found a useful guide and videos about just that topic.


Unless there is a smarter & smoother way...

I had to put mine aside for the time being, but initially I got ssh running simply by doing
`sudo apt-get install ssh`. I did have an issue where `ssh -X` launched the GUI app on the XU4 desktop instead of on my ssh host.

But then, I always initially start the Pi etc. with a keyboard, mouse and video monitor attached; I don't disable the GUI to really run headless until I'm pretty close to "deployment".

If you don't use a keyboard/video/mouse connection initially, you're definitely doing it the hard way IMHO.

A quick look at the GitHub page makes it look like he is compiling or cross-compiling OpenCV.

I recommend starting with:
`pip install opencv-contrib-python`

I'll certainly revisit his GitHub if that doesn't work. The PyPI package is compiled with some acceleration options that make a significant difference on the Pi:

// for "optimized" build, almost 2X faster on Pi3
cmake -D CMAKE_BUILD_TYPE=RELEASE \
    -D CMAKE_INSTALL_PREFIX=/usr/local \
    -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib-3.3.0/modules \
    -D ENABLE_NEON=ON \
    -D ENABLE_VFPV3=ON \
    -D BUILD_TESTS=OFF \
    -D INSTALL_PYTHON_EXAMPLES=ON \
    -D BUILD_EXAMPLES=ON ..

I don't know if these work for the XU4.
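
Either way, it's easy to check from Python which optimizations a given cv2 build actually ended up with:

```python
import cv2

print(cv2.__version__)
# Look for "NEON: YES" / "VFPV3: YES" in the build information
for line in cv2.getBuildInformation().splitlines():
    if "NEON" in line or "VFPV3" in line:
        print(line.strip())
```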

Some sweat & tears, but I finally got it working nicely with OpenCV 4.0.1. The thing is able to do a DNN analysis on a picture in roughly a second. I reduced the CPU max frequency to 1.9 GHz to reduce the heat a bit. Idle temperature is now 45 degrees Celsius.

(Run this command as superuser -- on the XU4, cpu4 is the first core of the A15 "big" cluster, so this caps the big cores)
echo 1900000 > /sys/devices/system/cpu/cpu4/cpufreq/scaling_max_freq

(I'm sending pictures for analysis via MQTT)
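
The receiving end is roughly this -- a sketch rather than my exact code; the broker, topic and MobileNet-SSD file names are placeholders (assumes paho-mqtt 1.x):

```python
#!/usr/bin/env python3
# Sketch of the receiving side: JPEG bytes arrive on an MQTT topic,
# get decoded with OpenCV and pushed through the DNN.
# Broker, topic and model file names are placeholders.
import numpy as np
import cv2
import paho.mqtt.client as mqtt

net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")

def on_connect(client, userdata, flags, rc):
    client.subscribe("camera/snapshot")

def on_message(client, userdata, msg):
    frame = cv2.imdecode(np.frombuffer(msg.payload, dtype=np.uint8),
                         cv2.IMREAD_COLOR)
    if frame is None:
        return
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()   # this is the ~1 second step on the XU4
    # ... filter the detections for "person" and publish the result ...

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("localhost", 1883, 60)
client.loop_forever()
```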

Pictures copied "as received" from Telegram on my iPhone


A little note: beware of the screen saver. Even if running headless, the screensaver kicks in. I realized this because the temperature was suddenly 80 degrees Celsius. I set it to blank the screen and now it is back to the normal 45 degrees again.


Hey guys,

Interesting !!!

Have you already had a look at the series of new Node-RED contributions for AI?


They are all based (via the node-red-contrib-model-asset-exchange node) on the Model Asset Exchange, which seems to be a collection of open-source deep learning models. You can run them on the cloud platform or install the services as Docker/Kubernetes containers on your own hardware at home.
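
From a quick look at their docs, each MAX container seems to expose a small REST API, so querying a locally running one from Python should be roughly this (the port and endpoint path are my guess from the docs, untested):

```python
# Untested sketch: query a locally running MAX container.
# The port (5000) and the /model/predict endpoint are assumptions taken
# from the MAX documentation -- adjust to what the container actually exposes.
import requests

with open("front_door.jpg", "rb") as f:
    resp = requests.post("http://localhost:5000/model/predict",
                         files={"image": ("front_door.jpg", f, "image/jpeg")})

print(resp.json())   # typically a list of predictions with labels and scores
```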

P.S. I haven't tried it yet ...

Bart


I'm not interested in "cloud" solutions, YMMV, but Node-RED's single-threaded event loop will be a problem if using multiple cameras; to my mind four cameras is the starting point (90 degrees of coverage per camera), and large or oddly shaped properties could easily require four times that for coverage.

But I might look into it after I get my code up on GitHub, as a quicker way to see if "better" models than MobileNet-SSD can get more accurate results with images from real security cameras -- the high camera angles can be a problem for some of the face detection models I've tried. Performance with IR illumination is also really important.

I had to do some benchmarking/comparison...

The time-consuming part of the DNN analysis is this line (in Python):
detections = net.forward()
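
Something like this is enough to reproduce the measurement (not my exact script; the model file names and test image are placeholders):

```python
import time
import cv2

# load the MobileNet-SSD model once (file names are placeholders)
net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")

image = cv2.imread("test.jpg")
blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)),
                             0.007843, (300, 300), 127.5)
net.setInput(blob)

start = time.time()
detections = net.forward()   # the step being compared below
print("net.forward() took %.2f seconds" % (time.time() - start))
```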

Now running exactly the same with the same pictures:

  1. Raspberry Pi3: takes 6.5 seconds
  2. Odroid XU4: takes 1.2 seconds
  3. My laptop: takes 0.12 seconds

So my laptop is still ten times faster. It is far from the fastest you can get -- just an old Intel Core i5 4210U @ 1.70GHz x 2.

Anyway, the Odroid is a huge improvement in comparison with running the analysis on a Pi.


Indeed, I'm also NOT interested in cloud solutions. But you can download their Docker container and start it on your local server at home. No installation issues, you just have to train it. So you can have your own local services running at home, as many as you like... I don't know whether their Docker containers contain a trained system, because then you would be up and running in a few minutes with a new service...

Much as I love it, Node-RED is not the answer to everything. If it doesn't work in NR or it is harder there, use a different tool for that part of your solution.

I often liken it to digital duct tape - you can build amazing things with it (hopefully) - though maybe not everything - but then again... https://www.popularmechanics.com/home/interior-projects/how-to/g1058/5-unusual-things-to-make-with-duct-tape-15008249/


I'm hoping the XU4 lets me get the full ~11 fps the NCS is capable of (the Pi3B+ does ~6.5 fps, and adding a second NCS only gets it to ~11.5 fps). If OpenVINO runs on it, I want to see what the NCS2 can do; I've got some OpenVINO test code where the NCS2 does ~30 fps on an i5 4200U, and 2 NCS sticks get ~22 fps. Mixing an NCS and an NCS2 seems to slow things down (or I have a bug to squash, TBD).

I've put this aside until I put my existing NCS v1 SDK / dnn CPU-only code up on GitHub. Getting close -- one last main feature to add: allowing a combination of Onvif http jpeg snapshot cameras and rtsp streaming cameras to be mixed (currently it's either/or), and allowing two sets of rtsp cameras -- one with round-robin sampling where a single thread scans all the cameras on a single base IP address (for things like the cameras attached to my Lorex security DVR), and another where there is one thread per stream, with each camera on its own IP address. The one-thread-per-camera approach is needed to keep the whole thing going in case of camera errors -- I can unplug a snapshot camera and it resumes when the camera is plugged back in. I haven't tried this yet with the rtsp camera streams.
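
The one-thread-per-camera idea is conceptually simple -- a rough sketch (not the code going up on GitHub), where each rtsp camera gets its own thread that keeps reconnecting if the stream dies:

```python
import threading
import time
import queue
import cv2

def rtsp_camera_thread(url, frame_queue, stop_event):
    """One thread per rtsp camera: reconnect and keep going on any error,
    so a dead or unplugged camera never stalls the other streams."""
    while not stop_event.is_set():
        cap = cv2.VideoCapture(url)
        while cap.isOpened() and not stop_event.is_set():
            ok, frame = cap.read()
            if not ok:
                break                      # stream error -> reconnect
            if not frame_queue.full():
                frame_queue.put((url, frame))
        cap.release()
        time.sleep(5.0)                    # back off before reconnecting

# hypothetical camera URLs -- one thread each
urls = ["rtsp://192.168.1.21:554/stream1", "rtsp://192.168.1.22:554/stream1"]
frames = queue.Queue(maxsize=10)
stop = threading.Event()
for u in urls:
    threading.Thread(target=rtsp_camera_thread, args=(u, frames, stop),
                     daemon=True).start()

# main loop: consume frames (this is where the AI detection would run)
while True:
    url, frame = frames.get()
    # ... run the person detection on `frame` here ...
```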

I think MythBusters set the bar pretty high for creative uses of duct tape :slight_smile:

Just noted the below. It means it should now be possible to run it on the pre-installed image that came on the eMMC, no need for re-flashing.


I've released my "final" version of the Python code on GitHub:
https://github.com/wb666greene/AI_enhanced_video_security/blob/master/README.md

While this thread has been most useful to me, it's become unwieldy, so let's move any further discussion to this new thread here:
https://discourse.nodered.org/t/final-version-of-security-cameras-with-ai-person-detection/9510/2

I finally got around to it.

This modifies the cmake files to compile OpenVINO; apparently at this point Intel only supports Python 3.5 on the Pi and doesn't support compiling on ARM.

Compiling OpenVINO on various Intel CPUs, I've found it can produce incompatible binaries -- building on an i5 4200M segfaults when the binary is installed on an i5 M540, but it works when compiled there. The CPU module shows major performance differences; the minor MYRIAD performance differences are easily accounted for by the old M540 lacking USB3.

My pre-flashed eMMC came with Mate18, which only has Python 3.6. So I made an Odroid Mate16 SD card and was able to get it to run the Pi binaries by hacking on the setupvars.sh script so it acted as if it was running on a Pi. I also had to upgrade to gcc-6 to get the required libraries.

Having got this working, and not having the eMMC writer dongle :frowning:, I tried getting it going on Mate18 so as to not have wasted the money on the eMMC :slight_smile:

Basically I did the same hack on setupvars.sh; the correct gcc-6 libs were already installed, but I had to "side load" Python 3.5.2 and set up a Python virtual environment. It is working fine now.

With the OpenCV 4.0.1-openvino build Intel provides, it was trivial to port this code (and your dnn module sample) to use OpenVINO, using the information in this tutorial:
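
For anyone else trying it: the change is essentially just pointing the dnn module at the Inference Engine backend and the Myriad target -- a minimal sketch (model file names are placeholders):

```python
import cv2

# Load the same MobileNet-SSD model, then route inference through the
# OpenVINO Inference Engine and onto the NCS/NCS2 (Myriad) stick.
# Model file names are placeholders.
net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)

frame = cv2.imread("test.jpg")
blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                             0.007843, (300, 300), 127.5)
net.setInput(blob)
detections = net.forward()
```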

Running my GitHub released code with the minor changes required, gave me the following results:

Using 5 Onvif snapshot netcams.
Pi3B+:
   NCS v1 SDK ~6.5 fps
   2 NCS v1 SDK ~11.6 fps
   NCS OpenVINO ~5.9 fps
   2 NCS OpenVINO ~9.9 fps
   NCS2 OpenVINO ~8.3 fps

Odroid XU-4:
  Mate16:
    NCS OpenVINO ~8.5 fps
    2 NCS OpenVINO ~15.9 fps
    NCS2 OpenVINO ~15.5 fps
  Mate18:
    NCS2 OpenVINO ~14.6 fps

So the NCS2 looks to be almost 2X faster for this application. And the Odroid XU-4 is almost 2X faster than the Pi3B+ with the NCS2.

But a decent i5 4200M is faster than the ARM processors using one CPU AI thread, and about as fast as one NCS2 thread on the i5, ~20 fps. Using 1 NCS2 and 1 CPU thread together makes little difference.

So if you are looking to set up such a system, I think a "refurb" i5 4200 or better laptop or "compact desktop" could be more cost effective than a Pi or Odroid plus an NCS stick. Power and size could tilt the balance: my i5 system has a 60W power supply, the Odroid 20W, and the Pi3B+ 12.5W.

And of course Intel just released a new version of OpenVINO a few weeks ago; it's not clear yet whether any of the improvements apply to the Python API or the ARM support.
https://software.intel.com/en-us/blogs/2019/04/02/improved-parallelization-extended-deep-learning-capabilities-in-intel-distribution


Hey, have you tried a Vizi-AI? https://www.hackster.io/mark93/computer-vision-powered-disco-ff90f3#schematics