Greetings,
I am trying to solve the problem of false motion alarms using Node-RED, and one of the ways I want to do that is with locally hosted AI.
This is my use case.
I live on a farm, and have experienced four burglaries this year. We lost chainsaws, brush cutters, spanner sets, power tools; the list breaks my heart, it really does.
We're OK, because some of it was insured, and also we're farmers, so we'll figure this out with what we have. It's kind of what farming is, if you're not corporate.
In any case, what I need is valid early warning, and Node-RED is the tool, because I have a whole bunch of disparate systems and some bespoke stuff too, and I have to glue it together because there's no budget for anything beyond the eight new cameras I bought.
I am the A-Team, locked in a shed with everything I need, I just have to figure out how. Here we go.
Essentially what we are doing is presence detection, and we are improving it with AI. I would never use this method inside any room or area that I consider to be private, because just no. Yes to patios, no to bedrooms.
AI person detection is a game changer because it removes any doubt that a human is present. It does, however, require processing, and if this is going to be something I can share with my neighbours once I get it working, well, they can't buy GPUs either. GPUs also eat power, which is a hard thing if you run off an inverter (the grid is a joke out here).
So the gig is to integrate AI in a way that is reliable, yet super light on the processor. My thinking is that it has to be possible to do AI human detection with still images instead of a live stream. That way, I only have to ask an AI server whether a 720p jpg contains a humanoid, and even then, only if a bunch of other stuff says `{"state": true}` first.
Because I am limited to what I have, I will be dedicating a refurb Lenovo ThinkCentre M93P Tiny, which has an i7-4785T CPU (four cores, eight threads) and 16GB of RAM, to being an AI server.
So step one is get every motion sensor or line crossing sensor or whatever the hell I have into Node-RED, which I've done in a sane way using MQTT topics.
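To make the disparate sensors comparable downstream, I normalise everything onto one payload shape in a function node right after the MQTT-in node. The topic layout and field names below are my own convention (just a sketch), not anything standard:

```javascript
// Node-RED function node: normalise disparate motion/line-crossing payloads.
// Assumed topic scheme: sensors/<area>/<sensorId>/motion
// Output payload:       { area, sensor, state: true|false, ts: epoch ms }
function normalise(topic, payload) {
    // Different brands report motion differently; map them all to a boolean.
    const truthy = ["on", "ON", "1", "true", "motion", "active"];
    const raw = (payload !== null && typeof payload === "object")
        ? (payload.state !== undefined ? payload.state : payload.occupancy)
        : payload;
    const state = typeof raw === "boolean" ? raw : truthy.includes(String(raw));
    const parts = topic.split("/"); // e.g. "sensors/shed/pir1/motion"
    return {
        topic: topic,
        payload: { area: parts[1], sensor: parts[2], state: state, ts: Date.now() }
    };
}

// Inside the function node it's just:
// return normalise(msg.topic, msg.payload);
```

This keeps every brand-specific quirk in one place, so the Bayesian logic further down never has to know what "ON" vs `{"occupancy": true}` means.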
The logic flow is to use a Bayesian filter (thank you @mixu_78) to decide if the motion warrants verification. I'm still fiddling with functions to make it work; it looks promising.
For each room/space, I'm using an array stored in global context. When a new presence event comes in (looking at mmWave too with great interest), the global array is updated with the new value, and then the entire array is passed to the Bayesian filter, which has the relevant conditions in place, YMMV.
If the filter says the room is occupied when it shouldn't be, at that point I'm using a HikvisionUltimate node to pull a jpg. The idea is to then send it to an AI server somehow, and get at least a boolean value back.
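For the "somehow", my current plan is two small function-node helpers bracketing an http request node. The endpoint URL and the response shape below are assumptions on my part: I'm imagining a local detection server (CodeProject.AI, a YOLO wrapper, whatever you can host on the ThinkCentre) that takes an image and returns labelled detections.

```javascript
// Before the http request node: wrap the jpg buffer from HikvisionUltimate.
// URL and request format are hypothetical; adjust to whatever server you run.
function buildDetectMsg(jpgBuffer, area) {
    return {
        url: "http://192.168.1.50:32168/v1/vision/detection", // hypothetical AI server
        method: "POST",
        headers: { "content-type": "application/octet-stream" },
        payload: jpgBuffer,
        area: area // carried through so the alert knows where to point the paintball gun
    };
}

// After the http request node: reduce the response to the boolean I actually want.
// Assumes a response like { predictions: [{ label, confidence }, ...] }.
function parseDetection(response, minConfidence) {
    const preds = (response && response.predictions) || [];
    return preds.some(p => p.label === "person" && p.confidence >= minConfidence);
}
```

Because the only thing crossing the wire is one 720p jpg per suspicious event, the old quad-core should cope, and the GPU-less neighbours can copy the flow as-is.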
If it's a person, wake me the **** up with a voice notification and a strobe or something, so I can radio my neighbour and get my ass down to the tractor shed with a paintball gun loaded with pepper gas rounds, instead of watching the video the next day and appreciating how professional the thieves are.
My neighbour has a million guns and a great need to fire them, and on some occasions that's a help. We drive each other's farms at night during picking season; it's like that.
Anyhow, I'm hoping that there are other scrappy people like me trying to make it happen with duct tape. Let's figure it out together. I would love to hear your ideas.