Playing with Jetson Nano - inference analysis

So I have struggled a bit over the last few days with the Nvidia Jetson Nano.

This little board is very fast at analyzing images and real-time video. My application is rather simple: I forward images to the Jetson Nano from my Motion video system when motion is detected. If a person enters our premises while we are away, or while we are at home with our alarm system armed, I get notified on my iPhone. This setup has already been working for a long time; I have tested it running on an old laptop (with Debian), on Odroid XU4 & N2, as well as on an RPi3+.
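The hand-off itself can be done in many ways; below is a minimal sketch of one way to do it, assuming Motion's on_picture_save hook POSTs each saved frame to a small Flask endpoint on the Nano. The detect_person() and notify_iphone() helpers are hypothetical placeholders for the detection and push-notification parts, not my actual code.

```python
# Minimal sketch: receive a frame POSTed from Motion and check it for a person.
# Assumes a motion.conf hook along the lines of:
#   on_picture_save curl -s -F "image=@%f" http://<nano-ip>:5000/analyze
# detect_person() and notify_iphone() are hypothetical placeholders.

from flask import Flask, request

app = Flask(__name__)

@app.route("/analyze", methods=["POST"])
def analyze():
    upload = request.files["image"]
    path = "/tmp/motion-frame.jpg"
    upload.save(path)

    if detect_person(path):                           # placeholder: object detection on the Nano
        notify_iphone("Person detected on premises")  # placeholder: push notification
    return "ok"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```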

Now it was time for the Jetson Nano adventure!

To start, a lot needs to be installed. I decided to follow the "Hello AI World" guide, where there are plenty of good examples (in Python) that helped me write the final solution I needed.
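To give an idea of what those examples look like, here is a minimal detection sketch in the style of the "Hello AI World" Python examples. The model name, threshold and file name are just assumptions, and depending on the jetson-inference version the image loading call is loadImage() or the older loadImageRGBA().

```python
# Minimal sketch in the style of the jetson-inference Python examples.
# Model, threshold and image path are assumptions, not my exact settings.
import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
img = jetson.utils.loadImage("captured51.jpg")   # older releases use loadImageRGBA()

detections = net.Detect(img)
for d in detections:
    if net.GetClassDesc(d.ClassID) == "person":
        print("person detected, confidence {:.2f}".format(d.Confidence))
```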


I had to struggle quite a bit; the Jetson seems very sensitive to which power supply you use, whether you have a monitor connected or not, etc. It is really not as forgiving as a Pi. At the moment it is running, but I do not yet know how it will behave in the long term.

Anyway, some timing comparisons could be of interest. If I forward the same image to my various platforms, I get the following readings for how long a successful object-detection analysis takes (detecting persons in the image):

Odroid N2: 3.243597 seconds
My laptop: 0.786970 seconds
Jetson Nano: 0.073100 seconds
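A simple wall-clock measurement around the detection call is enough for this kind of comparison; a minimal sketch, reusing net and img from the example above, would look like the snippet below. Note that on the Jetson the very first call includes TensorRT engine warm-up, so it should be excluded from the timing.

```python
import time

net.Detect(img)                       # warm-up run (engine build / first-call overhead)

start = time.time()
detections = net.Detect(img)
print("detection took {:.6f} seconds".format(time.time() - start))
```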

The Jetson, using its GPU, is roughly ten times faster than my laptop! That's good!

For my application such fast processing is not necessary, but for real-time video analytics I believe this is very interesting (I have, however, not played with that part).

Below is the image I used for testing. It is a tricky image, and many models fail to detect me walking there. But both the Jetson and the other platforms (running YOLOv3) managed it well.

[image: captured51]


Thanks for posting this. I've found the Nano to be the best sub-$150 IoT-class computer for using the Coral TPU, because it can handle more RTSP streams; that includes compared to the Coral Mendel development board. I've used none of the "JetPack" stuff except for OpenCV; everything else is just Python3, the TPU Python support, and Node-RED for the UI and control via the dashboard.
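For reference, the per-stream loop is essentially OpenCV pulling RTSP frames and handing them to the TPU. Here is a rough sketch of that kind of loop using the current pycoral API; the model file, camera URL and threshold are example values only, an illustration of the idea rather than my exact code.

```python
# Rough sketch: pull RTSP frames with OpenCV and run them through a Coral TPU
# detector via pycoral. Model file and camera URL are example values only.
import cv2
from pycoral.utils.edgetpu import make_interpreter
from pycoral.adapters import common, detect

interpreter = make_interpreter("ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite")
interpreter.allocate_tensors()

cap = cv2.VideoCapture("rtsp://user:pass@camera-ip:554/stream")  # example URL
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    resized = cv2.resize(rgb, common.input_size(interpreter))
    common.set_input(interpreter, resized)
    interpreter.invoke()
    people = [o for o in detect.get_objects(interpreter, score_threshold=0.5)
              if o.id == 0]          # class 0 is "person" in the COCO label file
```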

I just got another Nano to set up for the "pure" Jetson experience. You've given me a wonderful time-saving place to start.

The immediate downside I see is that the available models are really limited compared to OpenVINO, and to a lesser extent the TPU. For instance, I don't see a "Pose Estimation" model available for it.

I've been running my AI for over a year and have collected a fair number of "false positive" images from MobileNet-SSD v1 & v2, using 15 various outdoor cameras with resolutions from D1 to UHD (4K). Using a pose-estimation AI on the TPU as a second verification step would have rejected all of these when fed my collection of false-positive images. The downside is that it would increase the false-negative rate. It seems that the higher the camera angle and the more the person fills the frame, the less likely it is that pose keypoints of sufficient confidence are found. I'm investigating this. So far the bulk of the false negatives are from cameras in more protected locations (patio, porch, garage) that have not given any false positives since I upgraded to MobileNet-SSD v2. These confined areas necessarily make the viewing angle steeper and the person fill more of the frame.
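In pseudocode the verification step boils down to something like the sketch below. The run_ssd() / run_pose() helpers and both thresholds are hypothetical placeholders, not my actual values.

```python
# Sketch of the two-stage idea: only accept an SSD "person" detection if the
# pose estimator finds enough confident keypoints in the cropped box.
# run_ssd(), run_pose() and both thresholds are hypothetical placeholders.

MIN_KEYPOINTS = 5      # assumed: confident keypoints needed to call it a real person
KEYPOINT_CONF = 0.4    # assumed per-keypoint confidence threshold

def verified_person(frame):
    for det in run_ssd(frame):                        # stage 1: MobileNet-SSD
        if det.label != "person" or det.score < 0.5:
            continue
        crop = frame[det.top:det.bottom, det.left:det.right]
        keypoints = run_pose(crop)                    # stage 2: pose estimation on the crop
        confident = [k for k in keypoints if k.score >= KEYPOINT_CONF]
        if len(confident) >= MIN_KEYPOINTS:
            return True                               # verified detection
        # not enough keypoints: treat as a likely false positive and keep looking
    return False
```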

For grins, I ran your image through the pose-estimation check, and it would have been a false negative. Not unexpected, given the high camera angle.

I think this is one of the largest issues we face, as cameras need to be mounted high to avoid vandalism -- which makes me question the real-world practicality of Arlo, Nest, etc. battery-powered WiFi cameras. If the (expensive) cameras are mounted high enough to avoid theft, it's going to be a PITA to be changing batteries every few months.

My biggest surprise so far is that UHD cameras appear to improve the AI detection sensitivity, which was totally unexpected given that the 3840x2160 image is resized to 300x300 for the AI. I ran a UHD and an HD camera mounted adjacent to each other, to get as close to the same field of view for each camera as I could; the UHD camera detected people in more frames and further from the camera, regularly getting them well beyond my area of interest, on my neighbor's sidewalk across the street! So now I have to add "region masking" to filter out valid detections that I don't want notifications for.
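The masking itself doesn't need to be fancy; one simple approach is to keep only detections whose bottom-center point (roughly where the feet are) falls inside a polygon of interest, for example with OpenCV's pointPolygonTest. The polygon coordinates in the sketch below are made-up placeholders for a real camera's view.

```python
# Sketch of simple region masking: keep a detection only if the bottom-center
# of its bounding box falls inside a polygon of interest.
# The polygon coordinates are made-up placeholders, not a real camera's values.
import numpy as np
import cv2

REGION = np.array([[0, 700], [1900, 700], [3840, 2160], [0, 2160]], dtype=np.int32)

def in_region(det):
    foot_x = (det.left + det.right) / 2.0
    foot_y = float(det.bottom)
    return cv2.pointPolygonTest(REGION, (foot_x, foot_y), False) >= 0
```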

Here is a detection and verification of my mailman leaving that is at about the limit of where I want notifications. I have to reject all the ones from the horizontal sidewalk and across the street. I didn't have this problem with D1 and HD camera images:


Very, very nice indeed!!! And great resolution too. You are 100% right about the cameras' viewing angle, and given the price of those outdoor wireless Nest-type cameras, a thief who knows anything would be more interested in stealing those than in breaking into the garage (or car).
We have a lot to talk about!

I was slow to upgrade to 4K UHD cameras because I thought the AI wouldn't work well with them. Using some 3 & 4 Mpixel NetCams with MobileNet-SSD v1, it sure looked like the extra resolution made detection less sensitive.

It turned out that shortly after I started running MobileNet-SSD v2, my Lorex DVR died. It was too hot to consider going up into the attic to pull new cables, so I had to get a compatible "analog" replacement DVR (called MPX these days; it basically supports all the "analog" security camera formats) and figured, what the hey, get one that also supports 4K. Costco had a 4K "analog" camera for ~$90, so I tried one and was blown away by the improvement in the AI detection. Totally unexpected! I now have 5 UHD cameras and 10 1080p cameras in operation.

I think we are all benefiting by sharing ideas and results. I'm willing to share my code with anyone who is interested.
