Standalone Raspberry Pi Security Camera with AI "person detection"


I wonder what the dnn module is doing internally. If the first layer expects 300x300 and you send it more, does it crop? Resize? Analyze multiple overlapped 300x300 "tiles"? (Which, it seems, should slow it down a lot.) If you send it less, does it interpolate or zero pad? I'm going to see if searching turns up some info on what is going on internally. I'm kind of surprised it doesn't give errors if the size is not 300x300 or whatever it was trained on. I'm pretty sure the NCS version chokes if the input is not 300x300, although if I recall correctly the NCS SDK v2 API would do an automatic resize if necessary (I didn't pursue it as I only saw different results, not better ones).

I've got some mp4 files (from a real crime at an industrial site) that are particularly hard to detect. I do get some detections, but most of the time the perp is in the frame he's not detected. I plan to add CLAHE to the pre-processing to see if it helps.

Doh! My object-oriented stupidity: I searched on Timer, not the tmrs objects created from it.


I've played a bit with CLAHE, and pre-processing the images with it before sending them to the AI can definitely help. Unfortunately my Google-fu has found little guidance on how to set its parameters.

Here is a ~10-second clip of the security camera footage I mentioned, from an actual crime where the people paid to monitor the cameras totally missed it. The entire event lasted ~30 minutes; the thefts were discovered the next day.

CLAHE-enhanced AI performance comparison

Dual-frame view: "normally" processed by the MobileNet-SSD AI on the left; on the right, the result of images pre-processed with CLAHE before going to the AI subsystem. The images certainly look better to the naked eye, but I doubt it would have been enough to make a difference to the people who were obviously not paying attention.

If you want to play with this, I'll share my Python code if requested. It reads from a video file or a "live" camera on /dev/video0 using OpenCV's cv2.VideoCapture(). This code uses the Movidius NCS, so as far as I know it will not run on Windows. Analyzing the input file twice per frame gave me ~4.5 fps, meaning the NCS was processing ~9 fps.


Wow, that is not too bad I think!

(I just noticed your posting now, missed it for some unknown reason)

Clearly the picture is enhanced on the right side, and good enough for DNN detection to be triggered (5 times, if I counted correctly). If the security guards had had an alarm triggered by that, they should have managed to "wake up". I think this would be a great add-on to their existing video monitoring system.

Very good and interesting!


Yes indeed, I'm working with the principal of that company to add AI to their monitoring system. It's been a bit of a side-track since their system runs on Windows, but I'm extraordinarily impressed by the Python portability between Windows and Linux.

The Movidius NCS is not supported on Windows, so I'm going up the OpenVINO learning curve. So far with OpenVINO the NCS2 is showing ~2.5X the frame rate of the original NCS (running on OpenVINO). The OpenVINO code transparently handles the difference between the NCS and NCS2.

CPU-only AI is a possibility, as his systems run on an i3, for which CPU-only is about the same as with the NCS -- although that is not with his normal workload running.

It's been fun, and I'm getting some new toys to play with. He's also looking into integrating Lidar -- the idea is the Lidar detects motion (appearance of a new object in the map's background), directs a high-power PTZ camera to it, and the AI decides if it's a person or livestock.

I'm learning a lot!


And you are helping them a lot to get a more intelligent system; it is a joy to see. You know, I worked with, and was globally responsible for, large security solutions all my life until I retired two years ago. With what I know today, we could have made magic by enhancing with AI. I had a lot of colleagues in the US as well, since the company I worked for had operations there. We used systems from Pelco, Genetec, Lenel, Bosch and lately Milestone. None of them had anything like this intelligence a few years ago, but things are moving fast and a lot has happened in recent years; I imagine they have caught up by now.

When you mention OpenVINO, is it still dependent on the stick?

I am also thinking about how performance can improve. My i5 laptop is doing really well: within a second I have a picture analysed, with the detected person in Telegram -- good enough for a family house, I suppose. I assume the RPi's will continue to get faster in future generations. I found the Odroid-XU4 (now at $49) could be an interesting candidate; in various benchmarks it is measured to be about 7 times faster than the RPi3, which means it would perform at the same level as my laptop. I expect much will happen in the coming years, so we might not need to optimize the software side for performance reasons, more for accuracy in detection.


OpenVINO tries to "unify" the program logic with modules for CPU, Myriad (NCS & NCS2), Intel HD Graphics GPU, Movidius VPA, and FPGA. The main difference is the NCS models need to be compiled for fp16 vs. fp32 for CPU. I've no experience with the other "inference engine modules".

There are some, IMHO, backward steps -- no way to probe for how many NCS are installed, no way to get the stick's temperature.

Thanks for the tip about the Odroid, I've just started to investigate "enhanced" RPi-type, IoT-like small computers.


Another alternative could eventually be the ASUS Tinker Board S (RK3288). However, it is a bit more expensive. Anyway, I finally just ordered the Odroid-XU4 from Hardkernel's web shop in South Korea; I'm curious how it will perform. Let's see how it goes with delivery and time in customs.


Hey guys (@krambriw @Wb666greene), I'm still trying to follow this nice discussion with great interest but with VERY little knowledge about the topic. So it would be nice if you guys add from time to time some lightly digestible information for dummies like me. For example, I see that the Odroid-XU4 and the Asus RK3288 are both 'rather' affordable. But is there any particular reason why you have selected those: do they only have more resources compared to Raspberry pi 3, or is there some kind of AI chip onboard? Or do you also need that OpenVINO or other stuff? And so on ...

If one of you guys has an 'affordable' AI running on video surveillance combined with Node-RED, it would be very helpful if you could explain in a 'share your projects' topic how we can achieve something similar step-by-step... Thanks!!!!


I just ordered one of the Odroid-XU4 for the following improvements over the Pi:

USB3 - not such an issue with the original NCS, but the NCS2 is enough faster that using USB2 is a significant reduction in fps (my i3 has both USB2 and USB3 ports). Also, a decent USB3 memory stick is almost as fast as a SATA drive for external storage.

Gigabit Ethernet - makes a significant difference if you use SAMBA file sharing. The Pi3B+ is borderline for processing multiple rtsp streams; all the Odroid improvements should help. I'll find out after it arrives.

eMMC - significantly faster read/write speeds

2GB RAM - twice what is on the Pi3B+

I'm getting close to having a system that runs on the Pi3, Windows (7 & 10), and Ubuntu. It depends on Python for the AI and image inputs, and uses Node-RED with an MQTT broker local to the AI host for notifications and control. It can use the NCS or CPU-only AI (minimally useful on the Pi3, ~0.5 fps), and multiple NCS sticks. Often 1 NCS and 1 CPU AI thread hit a "sweet spot" in throughput. Unfortunately the NCS is not supported on Windows. OpenVINO will solve this on Windows 10 (but not Win7, apparently).

It can analyze images from "Onvif snapshots" (http://CameraURL/xxx.jpg: if you can type a URL for your camera in Firefox, it returns a jpg image, and it returns a new image when the browser window is refreshed, it should work), rtsp streams, and jpeg images via MQTT messages (Node-RED can be an ftp server and feed the images to the AI via MQTT). For playing around, it can also analyze MP4 input files.

The code resizes images as needed; I've tested it with D1, 720p, 1080p and 4K camera images. 4K is generally too much of a good thing unless you break the image up into sub-tiles and run multiple AI inferences on the tiles. For my purposes 720p is "optimum" but 1080p is good. The MobileNet-SSD AI is done on a 300x300 pixel input.
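The sub-tile idea can be sketched in a few lines: split the big frame into a grid and run an inference per tile, so each tile loses less detail when squashed to 300x300. This is a simplified illustration (no overlap between tiles, which a real detector would probably want so objects on tile borders aren't cut in half):

```python
import numpy as np

def tile_frame(frame, rows=2, cols=2):
    """Split a frame into rows*cols non-overlapping sub-tiles."""
    h, w = frame.shape[:2]
    return [frame[r * h // rows:(r + 1) * h // rows,
                  c * w // cols:(c + 1) * w // cols]
            for r in range(rows) for c in range(cols)]

# A 2x2 split turns one 4K frame into four 1080p tiles -- but it also
# means 4X the inferences per frame, which is why 720p/1080p whole-frame
# is usually the better trade-off.
frame_4k = np.zeros((2160, 3840, 3), dtype=np.uint8)
tiles = tile_frame(frame_4k, rows=2, cols=2)
```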

Once I finish a few minor issues I will share this code on GitHub and post a message here. After that my efforts will go into moving it to OpenVINO, which, while more difficult to set up initially, appears to make it much easier to try different AI subsystems (CPU, GPU, NCS2, NCS) and has many more AI models I can try. Some of the sample code I've found on GitHub and modified a bit is getting ~30 fps with the NCS2, ~11 fps on the NCS, and ~22 fps using two NCS on 640x480 images from a webcam. So I'm very encouraged.

Unfortunately nothing more will happen for at least a week after I post this message, as I have a prior commitment and need to put this project aside.

Your node-red rtsp code could easily ship frames to the AI via MQTT buffers; the main issue is MQTT and Node-RED appear to try to buffer everything, which will eventually overflow without some way to drop frames. My threads read the stream "continuously" and drop frames if the AI input queue is full. Reducing my Lorex DVR from 15 fps (the default) to 5 fps helped a lot (by not having the threads spinning so much only to mostly drop frames) and is still plenty good enough for my purposes on the 24/7 recordings.

I have discovered that node-red rbe and binary buffers eventually lead to "out of memory" crashes, so I'm not sure how to drop frames on the node-red side. This is one of the remaining things on the TODO list -- keep the MQTT input buffer empty and drop frames when necessary if the AI isn't ready for a new one. I think anything that can simplify the image-source side of things is worthwhile.


Make sure you check out Pete Scargill's blog; he regularly tests different Pi-like SBCs.


It will be fun to try out the Odroids once they arrive. I understand they are popular for bitcoin mining.

Regarding OpenVINO, I found this article; maybe it is a good & tempting starting point for the Odroid journey.


Odroid arrived today, much faster shipping than I'd expected straight from Korea via DHL. Unfortunately I'm babysitting a sick wife :frowning:
so I won't get much if anything done today, but one quick warning I hadn't noticed in the marketing materials:

If you use the Raspberry Pi's GPIO pins, note that the Odroid pins are 1.8V instead of 3.3V as on the Pi. Fortunately they use a different connector, so you can't just plug in a device you used with the Pi GPIO, but this is IMHO an impediment to using it as a general "faster Pi" replacement.

It also uses a 2.1mm "barrel" connector power plug like on a 12VDC security camera -- another easy OOPS to destroy the board.

Plugged everything in and it booted right up, nice to have the eMMC pre-loaded with Ubuntu-Mate 18.04. Very good out of box experience.

I'll report back after I've setup NCS v1 SDK and done some tests with my Python Movidius code. I'm hoping I can get the Python OpenVINO installed on it so I can try the NCS2.

It's weird to see eight CPUs in the system monitor :slight_smile:

Thanks for the link. I'm aware OpenVINO is not supported on Ubuntu 18.04, only 16.04, but I'm hoping the Raspbian version will work on the Odroid, as it's a lot simpler -- only the "API", not the model compilers -- though I can see a lot of reasons why it might not. I'll find out soon enough.

Here is another "AI at the Edge" option, Coral from Google: TensorFlow Lite models only, but the USB version is priced to compete with the NCS, and the full development board with AI coprocessor, running a Debian-based Linux, is like $150.
Coral Edge TPU

Edit: The USB Coral stick is in stock at Mouser for $75; they had 960 in stock when I ordered. Since it apparently only needs some Python libraries installed, it should be pretty easy to try, and a good alternative for my Odroid if I don't have any success installing the OpenVINO or NCS v1 SDK on it.


Nice, mine will arrive on Monday with FedEx. I also ordered some things at the same time, like a case, 5V/4A power supply, RTC backup battery, 16G eMMC module and an eMMC USB writer.
This will be fun


Darn, I wish I'd noticed the eMMC writer!

I debated about the case, but I figured it'd keep my hands out of the fan :slight_smile: Good thing I got it, as the eMMC mounts on the bottom of the board -- not sure how it'd stand up to being on a test bench unprotected. The power supply was a no-brainer, priced right, and I didn't have a 5V 4A supply, only 2.4A for the Pi.

I noticed a cheap USB-to-audio dongle so I ordered one. I use espeak-ng to announce my AI detections when we are home, so I knew the DHL guy was there before he rang the doorbell (signature required).

My wire for the audio is on the long side and picks up a bit of hum; with this dongle I can use a PiZeroW to speak the audio with espeak-ng and a Node-RED MQTT interface, and remove about 30' of audio cable.
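The PiZeroW speaker idea is basically "subscribe to a topic, pipe the payload to espeak-ng". A hypothetical sketch of what that could look like -- the topic name, voice, and rate below are all made up, and the MQTT wiring (e.g. paho-mqtt's `on_message` callback) is only hinted at:

```python
import subprocess

def build_speak_cmd(text, voice="en-us", wpm=140):
    """Assemble an espeak-ng command line for the given announcement."""
    return ["espeak-ng", "-v", voice, "-s", str(wpm), text]

def on_message(client, userdata, msg):
    # Shaped like a paho-mqtt message callback: speak whatever arrives
    # on the (hypothetical) announcement topic.
    subprocess.run(build_speak_cmd(msg.payload.decode("utf-8")))

# The command we'd run for a typical detection message:
cmd = build_speak_cmd("Person detected at front door")
```

Node-RED would just publish the detection text to that topic, and the PiZeroW end stays a few dozen lines of Python.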


Do you have a link to that?


It's in the link krambriw posted for the Odroid, in the "other products" section where the eMMC modules, cases, power supplies etc. are. I still had it in my browser history:

USB audio adapter

It was like $3.90, but since it's sent straight from Korea it won't be such a good deal after shipping if that is all you order.

I haven't actually tried it yet, so I won't give it a ringing endorsement. The packaging was pretty much "white envelope"; it says 5HV2 USBsoundcard in small print on the dongle body.


The "little toy" arrived today. I think the performance is rather impressive; even as a desktop computer it could do for "normal" office work, it seems at first glance. It gets a bit hot when a monitor is attached (I have the heatsink version, no fan), but in my case it will run headless. I've already noticed that the heatsink doesn't get that hot anymore once the monitor is disconnected.


I thought I should start installing OpenCV on the Odroid. I found a useful guide and videos about just that topic.

Unless there is a smarter & smoother way...


I had to put mine aside for the time being, but initially I got ssh running simply by doing
sudo apt-get install ssh. I did have an issue where ssh -X launched the GUI app on the XU4 desktop instead of my ssh host.

But then I always initially start the Pi etc. with a keyboard, mouse and video monitor attached, I don't disable the GUI to really run headless until I'm pretty close to "deployment".

Not using keyboard/video/mouse connections initially is, IMHO, definitely doing it the hard way.

A quick look at the GitHub page makes it look like he is compiling or cross-compiling OpenCV.

I recommend starting with:
`pip install opencv-contrib-python`

I'll certainly revisit his GitHub if that doesn't work. The PyPI build is compiled with some acceleration options that make a significant difference on the Pi:

    # for "optimized" build, almost 2X faster on Pi3
    -D CMAKE_INSTALL_PREFIX=/usr/local \
    -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib-3.3.0/modules \

I don't know if these work for the XU4.


Some sweat & tears, but I finally got it working nicely with OpenCV 4.0.1. The thing is able to do a DNN analysis on a picture in roughly a second. I reduced the CPU max frequency to 1.9 GHz to reduce the heat a bit. Idle temperature is now 45 degrees Celsius.

(Use this command as superuser)
echo 1900000 > /sys/devices/system/cpu/cpu4/cpufreq/scaling_max_freq

(I'm sending pictures for analysis via MQTT)

Pictures copied "as received" from Telegram on my iPhone