CCTV recording and playback (work in progress)

It's working

It saves the files in my working directory.

But now, please help me:

How do I choose which file I would like to see?

And how do I send this file to ui_mp4frag?

What is the difference between StaticPath and StaticRoot?

And why does Node-RED need to have entries in staticHTTP?

Colleague, could you kindly share this code, if possible?

You can have mine as it is; it only uses the RedBull stream, but it works fine. You have to modify the paths according to your settings. I keep the recordings listed in one single text file, as mentioned above. You have to make some recordings first, and then the drop-down will be populated.

cctv_recordings_krambriw.json (40.4 KB)


The path refers to the file location on your machine, such as /var/www/, whereas the root describes how it is shown in the URL when served via HTTP. Node-RED takes your settings and then uses express static to simply make that directory servable over HTTP.

p.s.

Looking at the code in the recording subflow, the path tells me where I can write the files using the fs write stream, and the root is used to format the URL for the playlist that is output and passed to ui_mp4frag, so that it knows how to play the video.
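To make that concrete, here is a minimal standalone sketch of the same idea; the directory, file name, and port are placeholders, not the subflow's actual code:

const express = require('express');
const fs = require('fs');
const path = require('path');

const staticPath = '/var/www/recordings'; // where the files live on disk
const staticRoot = '/recordings';         // how that directory appears in the url

// roughly what node-red does with your static settings entries
const app = express();
app.use(staticRoot, express.static(staticPath));
app.listen(8080);

// the recording side writes to the path...
const writeStream = fs.createWriteStream(path.join(staticPath, 'cam1.mp4'));

// ...and the playlist url handed to ui_mp4frag is formatted from the root,
// e.g. served at http://localhost:8080/recordings/cam1.mp4
const playlistUrl = `${staticRoot}/cam1.mp4`;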

Lately I have been experimenting with using basic motion detection (ffmpeg -> pipe2pam -> pam-diff -> pixel-change) to trigger recordings instead of recording 24/7, and it seems to be working pretty well. The term motion detection is not really accurate, since there is no object tracking; instead it is just a simple pixel comparison to see if the scene has changed in any significant way. It uses the same libraries that shinobi cctv and scrypted have been using, and I haven't been made aware of any problems. I am trying to bring it to node-red as a function subflow for now.
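Outside of node-red, the same pipeline can be sketched in a few lines of plain node.js, using the same pipe2pam/pam-diff wiring that the subflow further down uses (the rtsp url and config values here are placeholders):

const { spawn } = require('child_process');
const Pipe2Pam = require('pipe2pam');
const PamDiff = require('pam-diff');

const pp = new Pipe2Pam();
const pd = new PamDiff({ difference: 35, percent: 0.1 });

// fires for every pam image compared; trigger holds the detected changes
pd.on('data', (data) => {
  if (data.trigger.length > 0) {
    console.log('pixel change detected -> start a recording');
  }
});

// decode the rtsp sub stream into small grayscale pam images on stdout
const ffmpeg = spawn('ffmpeg', [
  '-rtsp_transport', 'tcp', '-i', 'rtsp://user:pass@cam/sub',
  '-f', 'image2pipe', '-c', 'pam', '-pix_fmt', 'gray',
  '-vf', 'fps=2,scale=300:-1', 'pipe:1',
]);

ffmpeg.stdout.pipe(pp).pipe(pd);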

Of course, this is not necessary, because you can probably figure out how to set up a socket server and have your ip camera send its own built-in motion detection events to it, but that is outside the scope of what this function node will be doing. It intends to be a more universal way to get pixels from any video feed so that they can be compared and measured.

My personal setup is that I run the detection on the sub stream of a cam and use that to trigger a recording of the main stream.

If anybody is interested, I can start a new thread and we can have a discussion.

[Screenshot from 2023-08-15 showing the pixel-change detection results]


I assume that is again a rhetorical question :wink:
Bring it on...

I am interested in things like, e.g.:

  1. Why do you use PAM?
  2. How can you use simple pixel comparison for reliable motion detection? I have always thought that this is very difficult. E.g. when a dark cloud moves in front of the sun, suddenly all your pixels will change.
  3. Do you use a sub-stream with a low frame rate?
  4. And so on...

pam (portable arbitrary map) is a very simple-to-parse image format that consists of a small header describing the contents, followed by an uncompressed array of pixels. The 3 formats I support are grayscale (1 byte per pixel), rgb (3 bytes per pixel), and rgba (4 bytes per pixel). I prefer, and also suggest, using grayscale to save on memory allocation, since it can be 1/3 the size of the rgb format.

P7
WIDTH 227
HEIGHT 149
DEPTH 3
MAXVAL 255
TUPLTYPE RGB
ENDHDR
// followed by pixel data
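Since the header is plain text terminated by ENDHDR, parsing it only takes a few lines. A rough illustration (pipe2pam's real parser is more careful about chunk boundaries and buffer pooling):

function parsePam(buf) {
  const end = buf.indexOf('ENDHDR\n');   // header is plain ascii
  const header = buf.toString('ascii', 0, end);
  const get = (key) => header.match(new RegExp(`${key} (\\S+)`))[1];
  const width = Number(get('WIDTH'));
  const height = Number(get('HEIGHT'));
  const depth = Number(get('DEPTH'));    // bytes per pixel: 1, 3, or 4
  // the uncompressed pixel array immediately follows the header
  const pixels = buf.subarray(end + 7, end + 7 + width * height * depth);
  return { width, height, depth, tupltype: get('TUPLTYPE'), pixels };
}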

The 2 main settings I use for determining whether pixel change has occurred are difference and percent. The difference is a pixel-to-pixel comparison value. When testing the pixels between 2 separate image buffers, it is simply a matter of subtracting the value of a pixel in one buffer from the corresponding pixel in the other buffer. The values are all uint8, so they range from 0 to 255. You can set the difference value within that range to determine how and when a pixel is logged as being different. The percent value (0.0 - 100) acts as a trigger: when enough pixels have been noticed as being different, the event is output.
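In other words, the core loop amounts to something like this simplified grayscale sketch (the real comparison runs on the c++ side):

function pixelChange(prev, curr, difference, percent) {
  let changed = 0;
  for (let i = 0; i < prev.length; i++) {
    // uint8 values 0 - 255; count the pixel once it moves by at least `difference`
    if (Math.abs(curr[i] - prev[i]) >= difference) changed++;
  }
  // trigger when the share of changed pixels reaches the percent threshold
  return (100 * changed) / prev.length >= percent;
}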

Another way to help minimize the effect of shadows and brightness changes is to use the grayscale pam format. It has already flattened out a bit of the extra data in the pixels that may indicate brightness.

And yet another way is to draw a polygon over an area of pixels to use as a mask, so that those pixels are ignored completely in any calculations. Perhaps over an area where there is that tree branch that keeps swinging in the breeze, causing its shadow to move around?

So, use a modest difference value and the grayscale pam format.

I do use a low frame rate from the sub stream, but that is mainly to make it easier for ffmpeg to decode the h264 rtsp video stream. I use hardware-accelerated decoding with ffmpeg on the raspberry pi, but it is limited in how large the input resolution can be. Also, I suggest only outputting a small pam image, maybe only 300 pixels wide and at a frame rate no higher than 2 fps, unless you have a powerful system. If you can have ffmpeg output an image size that fits within a single pipe chunk (65536 bytes on 64-bit linux), it helps to minimize extra memory allocations on the node.js side.
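As a rough sanity check on that pipe chunk figure (my arithmetic, assuming a 16:9 source): scale=300:-1 gives roughly 300 x 169 pixels, so a grayscale pam image is about 300 * 169 = 50,700 bytes of pixel data plus a header of well under 100 bytes, comfortably inside a single 65,536 byte chunk. The rgb version of the same frame would be about 152,100 bytes and would be split across multiple chunks.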

Well, I am currently working on a pam-diff configurator that can be used to test your pam-diff settings against your video stream, to help visualize what is happening with various settings. It will not be part of node-red, but it can be used to generate a configuration JSON object that can be copied and pasted into the subflow in your node-red box.


Thanks Kevin for your description!

New question, based on your last screenshot: you know only which pixels have changed. How do you determine which of these pixels belong together, in order to determine their bounding boxes?

Just wondering, because these bounding boxes are interesting for ignoring pixels that have enough difference but don't have enough changed neighbour pixels. Such isolated pixels or small pixel groups are not part of a large moving object...

I assume this is not an easy task, because you might have overlapping objects and so on.

I also love this type of science, so for sure make it a new thread, or continue the discussion here (it is somehow related anyway).

In addition, I would just like to add that in practice I already do the same today. As you may remember, my cameras are non-intelligent usb types, so I use the Motion software (running on RPi's) to uplift their capability a bit.

Motion sends images for recording when motion is detected according to configurable rules: number of pixel changes, light changes, etc. It can also use masks (images prepared to exclude certain areas of the view) to reduce "false alarms", but instead I do the actual masking in a separate Python script where I also do object identification. The image that is analyzed for object detection is therefore the original image from the camera, masked with a dedicated camera-specific masking image. This is done using cv2, and the code is very simple, using cv2.bitwise_and. It works very well and is simple to implement. In this way I am able to avoid false alarms from trees moving in the wind, and to prevent people from being detected when they are passing by on the public sidewalk outside our premises.

https://docs.opencv.org/3.4/d0/d86/tutorial_py_image_arithmetics.html
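For consistency with the rest of the thread, here is the same bitwise-and idea sketched on raw grayscale buffers in js (my actual code is python + opencv as in the link above; the function name is just for illustration):

// black (0x00) mask pixels blank out the image, white (0xff) pass it through,
// which is what cv2.bitwise_and does with a mask image
function applyMask(image, mask) {
  const out = Buffer.alloc(image.length);
  for (let i = 0; i < image.length; i++) {
    out[i] = image[i] & mask[i];
  }
  return out;
}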


If the response type is set to "blobs" instead of the default, then extra filtering is done to determine whether the changed pixels are connected up, down, left, or right, using some old C file that I found online and slightly modified.
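Conceptually it is a 4-way flood fill that groups the changed pixels into labeled blobs and collects a bounding box and pixel count for each. A rough js sketch of the idea (again, the real code is that modified C file, not this):

function labelBlobs(changed, width, height) {
  const labels = new Int32Array(width * height).fill(-1);
  const blobs = [];
  for (let i = 0; i < changed.length; i++) {
    if (!changed[i] || labels[i] !== -1) continue;
    // start a new blob at this unvisited changed pixel
    const blob = { label: blobs.length, minX: width, maxX: 0, minY: height, maxY: 0, count: 0 };
    const stack = [i];
    labels[i] = blob.label;
    while (stack.length) {
      const p = stack.pop();
      const x = p % width;
      const y = (p / width) | 0;
      blob.count++;
      if (x < blob.minX) blob.minX = x;
      if (x > blob.maxX) blob.maxX = x;
      if (y < blob.minY) blob.minY = y;
      if (y > blob.maxY) blob.maxY = y;
      // push the 4-connected neighbours that changed and are not labeled yet
      const neighbours = [];
      if (x > 0) neighbours.push(p - 1);
      if (x < width - 1) neighbours.push(p + 1);
      if (y > 0) neighbours.push(p - width);
      if (y < height - 1) neighbours.push(p + width);
      for (const n of neighbours) {
        if (changed[n] && labels[n] === -1) {
          labels[n] = blob.label;
          stack.push(n);
        }
      }
    }
    blobs.push(blob); // each blob carries its pixel count and bounding box
  }
  return blobs;
}

Blobs with too few pixels can then be discarded, which is how isolated pixels or small pixel groups end up being filtered out.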

Here is another screenshot showing a mask used to ignore the ticking clock in the top right corner, while the difference value is left set too low, which causes it to accidentally detect too much change. Keeping the settings like this would definitely impact performance, but it is just for illustration purposes. Apparently the pixels from the camera are not always exact, so we must learn to ignore the small things that would falsely trigger the alarm.

Yeah, I almost made the new thread, but then it seemed too spammy to do so.

When I generate my masks, or conversely the regions of interest, on the js side I create a buffer of 0's and 1's for each pixel coordinate and pass that to the c++ side, where it is converted to a bit array (instead of a byte array) and kept along for the ride when iterating the pixels, to determine whether each one should be measured or not.
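For illustration, the packing from a byte per pixel down to a bit per pixel looks something like this (js only; the real conversion happens on the c++ side):

// pack one byte per pixel (0 or 1) into one bit per pixel
function packMask(mask) {
  const bits = new Uint8Array(Math.ceil(mask.length / 8));
  for (let i = 0; i < mask.length; i++) {
    if (mask[i]) bits[i >> 3] |= 1 << (i & 7); // set the bit for pixels to measure
  }
  return bits;
}

// while iterating, test the bit to decide whether pixel i should be measured
const shouldMeasure = (bits, i) => (bits[i >> 3] >> (i & 7)) & 1;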

It seems that your implementation and mine are doing the same thing (except for your object detection). But I do include the bounding boxes in my results, so that if the pixels do need to be sent to some other script for that, it can be told in what area to look for any expected object.

To be fair, opencv seemed too big and complicated for me, which is why I made my own code. Also, mine works on windows, mac, and linux, with prebuilt binaries ready for distribution for some platforms, negating the need for the user to have the developer tools installed that are needed for compiling.

Here is a final screenshot of the pam-diff-configurator showing the measurement of time for doing basic pixel change detection.


Mostly less than 1 millisecond to analyze, after it is passed to the c++ side where it runs on one of node.js' worker threads (on my old mac).


This is an early version of the subflow, in case anybody wants to take a peek at the design. It uses a function node that will probably trigger the installation of 2 npm packages, pipe2pam and pam-diff. Honestly, I do not know whether copy/pasting a flow that includes dependencies will simply work or not; it might depend on your configuration of node-red.

[{"id":"8715f9071f66888f","type":"subflow","name":"pam-diff","info":"# pam-diff\nMeasure pixel differences in consecutive pam images.\n\n### config\nThe configuration object passed to pam-diff for fine-grained pixel difference detection.\n\n### message\nThe output message sent if pixel difference is detected.\n\n### debug\nAdds timing output to the status.","category":"cctv","in":[{"x":60,"y":100,"wires":[{"id":"8b41705ef0246d2f"}]}],"out":[{"x":580,"y":60,"wires":[{"id":"8b41705ef0246d2f","port":0}]}],"env":[{"name":"CONFIG","type":"json","value":"{\"difference\":35,\"percent\":0.1,\"response\":\"blobs\"}","ui":{"label":{"en-US":"config"},"type":"input","opts":{"types":["json"]}}},{"name":"MSG","type":"json","value":"{\"action\":{\"command\":\"start\",\"subject\":\"write\",\"preBuffer\":3,\"timeLimit\":10000,\"repeated\":false}}","ui":{"label":{"en-US":"message"},"type":"input","opts":{"types":["json"]}}},{"name":"DEBUG","type":"bool","value":"true","ui":{"label":{"en-US":"debug"},"type":"input","opts":{"types":["bool"]}}}],"meta":{"module":"pam-diff","type":"pam-diff","version":"0.1.0","author":"Kevin Godell <kevin.godell@gmail.com>","desc":"pam-diff","keywords":"pipe2pam, pam-diff, pixel-change","license":"MIT"},"color":"#DEBD5C","inputLabels":["buffer"],"outputLabels":["message"],"icon":"font-awesome/fa-eye","status":{"x":540,"y":140,"wires":[{"id":"8b41705ef0246d2f","port":1}]}},{"id":"8b41705ef0246d2f","type":"function","z":"8715f9071f66888f","name":"pipe2pam ➔ pam-diff","func":"const { payload } = msg;\n\nif (Buffer.isBuffer(payload)) {\n\n  const { pp } = context.get('md');\n\n  pp.write(payload);\n\n} else {\n\n  const { pp, pd } = context.get('md');\n\n  pp.reset();\n\n  pd.reset();\n\n}\n\nnode.done();","outputs":2,"noerr":0,"initialize":"try {\n\n  const config = env.get('CONFIG');\n\n  const debug = env.get('DEBUG');\n\n  const msg = env.get('MSG');\n\n  config.debug = debug;\n\n  const pp = new Pipe2Pam({ pool: 2 });\n\n  const pd = new PamDiff(config);\n\n  pp.on('error', error => {\n\n    node.error(error);\n\n  });\n\n  pd.on('error', error => {\n\n    node.error(error);\n\n  });\n\n  pd.on('initialized', data => {\n\n    const { width, height, depth, tupltype } = data;\n\n    const text = `${width} x ${height} x ${depth}, ${tupltype}`;\n\n    node.send([null, { payload: { fill: 'green', shape: 'dot', text } }]);\n\n  });\n\n  pd.on('data', data => {\n\n    if (data.trigger.length > 0) {\n\n      node.send(msg);\n\n    }\n\n    if (data.debug) {\n\n      const { name, count, duration } = data.debug;\n\n      const text = `${name}-${count}: ${duration}ms`;\n\n      node.send([ null, { payload: { fill: 'green', shape: 'dot', text } } ]);\n\n    }\n\n  });\n\n  pd.on('reset', () => {\n\n    node.send([null, { payload: { fill: 'yellow', shape: 'dot', text: 'reset' } }]);\n\n  });\n  \n  pp.pipe(pd);\n  \n  context.set('md', { pp, pd });\n\n} catch (error) {\n\n  node.error(error);\n\n}","finalize":"const md = context.get('md');\n\nconst { pp, pd } = md;\n\npp.unpipe(pd);\n\npp.removeAllListeners('error');\n\npd.removeAllListeners('error');\n\npd.removeAllListeners('initialized');\n\npd.removeAllListeners('data');\n\npd.removeAllListeners('reset');\n\npp.reset();\n\npd.reset();\n\npp.destroy();\n\npd.destroy();\n\nmd.pp = undefined;\n\nmd.pd = undefined;\n\ncontext.set('md', undefined);","libs":[{"var":"Pipe2Pam","module":"pipe2pam"},{"var":"PamDiff","module":"pam-diff"}],"x":300,"y":100,"wires":[[],[]]}]

Extra params must be added to ffmpeg so that it gives the correct output type to pipe into this subflow:

"-f",
"image2pipe",
"-c",
"pam",
"-pix_fmt",
"gray",
"-vf",
"fps=2,scale=300:-1",
"pipe:5"

And here is an excerpt from one of my flows that shows the ffmpeg wired to the subflow:

[{"id":"8715f9071f66888f","type":"subflow","name":"pam-diff","info":"# pam-diff\nMeasure pixel differences in consecutive pam images.\n\n### config\nThe configuration object passed to pam-diff for fine-grained pixel difference detection.\n\n### message\nThe output message sent if pixel difference is detected.\n\n### debug\nAdds timing output to the status.","category":"cctv","in":[{"x":60,"y":100,"wires":[{"id":"8b41705ef0246d2f"}]}],"out":[{"x":580,"y":60,"wires":[{"id":"8b41705ef0246d2f","port":0}]}],"env":[{"name":"CONFIG","type":"json","value":"{\"difference\":35,\"percent\":0.1,\"response\":\"blobs\"}","ui":{"label":{"en-US":"config"},"type":"input","opts":{"types":["json"]}}},{"name":"MSG","type":"json","value":"{\"action\":{\"command\":\"start\",\"subject\":\"write\",\"preBuffer\":3,\"timeLimit\":10000,\"repeated\":false}}","ui":{"label":{"en-US":"message"},"type":"input","opts":{"types":["json"]}}},{"name":"DEBUG","type":"bool","value":"true","ui":{"label":{"en-US":"debug"},"type":"input","opts":{"types":["bool"]}}}],"meta":{"module":"pam-diff","type":"pam-diff","version":"0.1.0","author":"Kevin Godell <kevin.godell@gmail.com>","desc":"pam-diff","keywords":"pipe2pam, pam-diff, pixel-change","license":"MIT"},"color":"#DEBD5C","inputLabels":["buffer"],"outputLabels":["message"],"icon":"font-awesome/fa-eye","status":{"x":540,"y":140,"wires":[{"id":"8b41705ef0246d2f","port":1}]}},{"id":"8b41705ef0246d2f","type":"function","z":"8715f9071f66888f","name":"pipe2pam ➔ pam-diff","func":"const { payload } = msg;\n\nif (Buffer.isBuffer(payload)) {\n\n  const { pp } = context.get('md');\n\n  pp.write(payload);\n\n} else {\n\n  const { pp, pd } = context.get('md');\n\n  pp.reset();\n\n  pd.reset();\n\n}\n\nnode.done();","outputs":2,"noerr":0,"initialize":"try {\n\n  const config = env.get('CONFIG');\n\n  const debug = env.get('DEBUG');\n\n  const msg = env.get('MSG');\n\n  config.debug = debug;\n\n  const pp = new Pipe2Pam({ pool: 2 });\n\n  const pd = new PamDiff(config);\n\n  pp.on('error', error => {\n\n    node.error(error);\n\n  });\n\n  pd.on('error', error => {\n\n    node.error(error);\n\n  });\n\n  pd.on('initialized', data => {\n\n    const { width, height, depth, tupltype } = data;\n\n    const text = `${width} x ${height} x ${depth}, ${tupltype}`;\n\n    node.send([null, { payload: { fill: 'green', shape: 'dot', text } }]);\n\n  });\n\n  pd.on('data', data => {\n\n    if (data.trigger.length > 0) {\n\n      node.send(msg);\n\n    }\n\n    if (data.debug) {\n\n      const { name, count, duration } = data.debug;\n\n      const text = `${name}-${count}: ${duration}ms`;\n\n      node.send([ null, { payload: { fill: 'green', shape: 'dot', text } } ]);\n\n    }\n\n  });\n\n  pd.on('reset', () => {\n\n    node.send([null, { payload: { fill: 'yellow', shape: 'dot', text: 'reset' } }]);\n\n  });\n  \n  pp.pipe(pd);\n  \n  context.set('md', { pp, pd });\n\n} catch (error) {\n\n  node.error(error);\n\n}","finalize":"const md = context.get('md');\n\nconst { pp, pd } = md;\n\npp.unpipe(pd);\n\npp.removeAllListeners('error');\n\npd.removeAllListeners('error');\n\npd.removeAllListeners('initialized');\n\npd.removeAllListeners('data');\n\npd.removeAllListeners('reset');\n\npp.reset();\n\npd.reset();\n\npp.destroy();\n\npd.destroy();\n\nmd.pp = undefined;\n\nmd.pd = undefined;\n\ncontext.set('md', 
undefined);","libs":[{"var":"Pipe2Pam","module":"pipe2pam"},{"var":"PamDiff","module":"pam-diff"}],"x":300,"y":100,"wires":[[],[]]},{"id":"852a5eb714a26cce","type":"inject","z":"31ea7a9082aeede8","name":"start","props":[{"p":"action","v":"{\"command\":\"start\"}","vt":"json"}],"repeat":"","crontab":"","once":true,"onceDelay":"5","topic":"","x":190,"y":140,"wires":[["8527a444c9894280"]]},{"id":"8527a444c9894280","type":"ffmpeg","z":"31ea7a9082aeede8","name":"","outputs":6,"cmdPath":"","cmdArgs":"[\"-hide_banner\",\"-loglevel\",\"+level+fatal\",\"-nostats\",\"-hwaccel\",\"auto\",\"-c:v\",\"h264_v4l2m2m\",\"-stimeout\",\"20000000\",\"-rtsp_transport\",\"tcp\",\"-i\",\"SECRET\",\"-c:a\",\"copy\",\"-c:v\",\"copy\",\"-f\",\"mp4\",\"-movflags\",\"+frag_keyframe+empty_moov+default_base_moof\",\"-metadata\",\"title=front corner sub\",\"pipe:1\",\"-progress\",\"pipe:3\",\"-f\",\"image2pipe\",\"-c\",\"mjpeg\",\"-vf\",\"fps=fps=1\",\"pipe:4\",\"-f\",\"image2pipe\",\"-c\",\"pam\",\"-pix_fmt\",\"gray\",\"-vf\",\"fps=2,scale=300:-1\",\"pipe:5\"]","cmdOutputs":5,"killSignal":"SIGTERM","x":460,"y":200,"wires":[["b7854ccefe5439b5"],[],[],[],[],["b7854ccefe5439b5"]],"icon":"@kevingodell/node-red-mp4frag/bmc-logo.svg"},{"id":"cb169dd2047ebfe2","type":"inject","z":"31ea7a9082aeede8","name":"restart","props":[{"p":"action","v":"{\"command\":\"restart\"}","vt":"json"}],"repeat":"","crontab":"","once":false,"onceDelay":"1","topic":"","x":190,"y":200,"wires":[["8527a444c9894280"]]},{"id":"4355be8825921f7d","type":"inject","z":"31ea7a9082aeede8","name":"stop","props":[{"p":"action","v":"{\"command\":\"stop\"}","vt":"json"}],"repeat":"","crontab":"","once":false,"onceDelay":0.1,"topic":"","x":190,"y":260,"wires":[["8527a444c9894280"]]},{"id":"b7854ccefe5439b5","type":"subflow:8715f9071f66888f","z":"31ea7a9082aeede8","name":"","env":[{"name":"CONFIG","value":"{\"difference\":35,\"percent\":0.6,\"response\":\"blobs\"}","type":"json"},{"name":"MSG","value":"{\"action\":{\"command\":\"start\",\"subject\":\"write\",\"preBuffer\":2,\"timeLimit\":8000,\"repeated\":false}}","type":"json"},{"name":"DEBUG","value":"false","type":"bool"}],"x":720,"y":200,"wires":[[]]}]

In the near future, I will try to come up with a sample flow that includes a video feed as a demo, in case anybody wants to try it and doesn't have an ip cam readily accessible, possibly using some pre-recorded video from one of my cams.
