Show IP camera in dashboard and store pre-trigger buffered video

Hi, what I'm trying to achieve is to display an IP camera on the dashboard and, when I need to, store a fixed-length video that starts some seconds before my trigger.
Thanks to all the support in the forum I'm able to display the IP camera via the exec/daemon node without problems, with a command like this:
ffmpeg -f rtsp -i "rtsp://admin:admin@192.168.1.15:554/12.264" -f image2pipe pipe:1
For storing a fixed-length video I use a command like this:

ffmpeg -rtsp_transport tcp -i "rtsp://admin:admin@192.168.1.15:554/12.264" -c:v copy -t 60 -r 5 -s 640x480 

But this way I'm forced to have two processes for the same camera. OK, I could use the daemon node and pass the stream to stdin with -f image2pipe -i pipe:0
and save the output as mp4. But that way I can only record from that moment on. Instead, I would like to start recording 10 seconds before.
Practically, what I'm trying to do is:

  • capture a video stream from an IP camera and display it endlessly on the dashboard;
  • temporarily save it in a buffer (without writing it to disk);
  • on my trigger, save a 60-second video, but starting 10 seconds before the trigger.
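Conceptually, the pre-trigger part is just a rolling window of timestamped chunks held in memory. As a minimal sketch (the class name `PreTriggerBuffer` is illustrative, not an existing node, and raw image2pipe frames buffered this way get very large, so compressed fragments are strongly preferable):

```javascript
// Minimal sketch of a pre-trigger rolling buffer (illustrative, not an
// existing Node-RED node). Chunks, e.g. video fragments read from ffmpeg's
// stdout, are kept for `windowMs` milliseconds; on a trigger, the buffered
// chunks become the first part of the recording.
class PreTriggerBuffer {
  constructor(windowMs) {
    this.windowMs = windowMs;
    this.chunks = []; // entries: { ts, data }
  }
  push(data, ts = Date.now()) {
    this.chunks.push({ ts, data });
    // Evict anything older than the rolling window.
    const cutoff = ts - this.windowMs;
    while (this.chunks.length && this.chunks[0].ts < cutoff) {
      this.chunks.shift();
    }
  }
  // On trigger: return everything still in the window, oldest first.
  drain() {
    const out = this.chunks.map((c) => c.data);
    this.chunks = [];
    return out;
  }
}
```

On trigger you would write `drain()`'s output to the file first, then keep appending live data for the remaining duration.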

I even thought of temporarily collecting the exec/daemon output with a join node and releasing it as a single buffer message. I've tried several ways and encodings, but the file I get is always unreadable.
I hope I've been clear enough :grimacing: :grimacing: :grimacing: Thanks a lot

Doing some more research I found this page: video capture - ffmpeg buffered recording - Video Production Stack Exchange, which leads to two others: Capture/Lightning – FFmpeg and #1753 (Delay output for X seconds) – FFmpeg.
But I cannot find a way to make it work.
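For context, the Capture/Lightning approach boils down to: one long-running ffmpeg continuously writes short segments into a small circular set of files (ideally on tmpfs), and on a trigger the last few segments are concatenated with the live recording. A hedged sketch of building such a command line from Node-RED (the RTSP URL, segment length, count and paths below are placeholder assumptions):

```javascript
// Sketch of the segment-based circular buffer described on the FFmpeg
// Capture/Lightning wiki page. One long-running ffmpeg writes short
// segments, wrapping over `count` files, so roughly
// segmentSeconds * count seconds of pre-trigger video is always on hand.
// URL, paths and numbers here are placeholders, not tested values.
function buildSegmentArgs(rtspUrl, { segmentSeconds = 2, count = 10, dir = '/dev/shm' } = {}) {
  return [
    '-rtsp_transport', 'tcp',
    '-i', rtspUrl,
    '-c', 'copy',                     // stream copy: no re-encode, cheap on a Pi
    '-f', 'segment',                  // ffmpeg's segment muxer
    '-segment_time', String(segmentSeconds),
    '-segment_wrap', String(count),   // overwrite oldest segment, i.e. a ring buffer
    `${dir}/buf%03d.ts`,
  ];
}
```

You could then start it from a daemon/exec node or via `child_process.spawn('ffmpeg', buildSegmentArgs(url))`, and on trigger concatenate the wrapped segments (oldest first) ahead of the live capture.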

Hi @Lupin_III,
I haven't used ffmpeg for quite some time, so unfortunately I cannot help you, although I need something similar myself...
Normally I try to avoid mentioning people, but I'm going to do it now. Perhaps one of the video hardcore club of Node-RED (@krambriw, @SuperNinja, @kevinGodell, @wb666greene ...) can give you some tips. Perhaps a little tip might get you going. Being able to store video in a simple way would be awesome...
Just out of interest: don't you risk consuming a large amount of RAM by delaying the images?
Bart

Thanks a lot Bart

Just out of interest: don't you risk consuming a large amount of RAM by delaying the images?

Yes, it could be, but without trying I don't know. I only need 10 seconds of delay.

Meanwhile I came across the ffmpeg option "-itsoffset", but it doesn't have any effect.

I'm not a video or ffmpeg expert but I do have a fair amount of experience with "real-world" usage.

John Riselvato has written a couple of books on ffmpeg and has made the scripts available for free on his website: FFmpeg - John Riselvato
Unfortunately the website is not well organized

Mostly the books are "howto" questions and scripts to implement the solution. Lots of links to the original sources. The books are available on Amazon as Kindle or paper versions.

Sounds to me like you are trying to re-invent the "pre-trigger" buffering of a commercial security DVR/NVR system's "motion detection".

@krambriw uses Motion or Motioneye to feed "motion events" to his AI for analysis.

I use my Security DVR to do 24/7/365 recordings and read its rtsp streams in my AI subsystem to push images to me when "something happens". I never look at the video unless something serious has happened, the pushed images give me the time index into the video of the event -- for example, last week two guys were fighting in the street and it turned into a car chase. We called the police, and I readied the video in case the police came back to ask for it, but they never did, I heard that they stopped both cars as they were exiting the subdivision. Our concern was that they would cause a wreck as they chased each other into the heavy traffic of the street that feeds our subdivision.


Yes, it's practically the same concept, but using just Node-RED, without a DVR and without continuously recording 24/7 video.

@Lupin_III
Always nice to have video users joining! You should check out the very interesting thread where we have discussed a lot about video. The most impressive thing in there is what @kevinGodell has achieved and kindly shared: fantastic nodes creating mp4 out of streams. Here is the thread: [beta testing] nodes for live streaming mp4

It's a very long thread but so useful, informative and valuable. To make a long story short, the outcome is two sets of nodes: node-red-contrib-mp4frag and node-red-contrib-ui-mp4frag

With those nodes you can very easily build efficient flows converting/presenting mp4 streams in the dashboard. For ip cameras you could use simple flows like this:

So far this works very well for live streaming & presentation. The exec node starts ffmpeg, which is used to create the stream and, if needed, convert it to mp4. It works with many video formats. Using the mp4 format has of course well-known advantages and disadvantages I'm not going to discuss.

What is not supported by those nodes is recording. I believe this is where, eventually, a new recording node could find a warm home. Imagine a rec_mp4frag node that could capture the mp4 video in a preset-sized buffer and then, on command, save it to a file on a mounted SSD drive, starting with the pre-captured video and then adding real-time video until stopped.

As mentioned, I'm personally using software called Motion that has this feature built in, but it would of course be very nice to have a pure NR solution. The question is how much CPU load this would cause. Could using the GPU help? Obviously the memory footprint will be larger, related to the configured buffer size. Also, when the "start recording" begins, joining the pre-recording with the live stream into a smooth video recording might cause a heavy load.

Anyway, a very interesting topic. I hope @kevinGodell will join the discussion, because he is our real video expert and I'm sure he can sort out what is and isn't possible.


I would think that any recording should be done in either an external process or at least another thread. As node.js is inherently single-threaded, making it handle video processing/storage at the same time as processing the rest of the flow is bound to get "glitchy". Not to mention the potential volume of data you are talking about for even a few seconds of video if you are trying to use a small device like a Pi as a rotating buffer. I would think having Node-RED talk to / control another storage process would be the way forward.


Yes, most likely a separate process. But maybe it could take the already available mp4 stream from the output of the node-red-contrib-mp4frag node? I mean it is already available?

I am running Motion, which already has this feature, on several RPi3's. I will try to check how the load & memory footprint changes when a recording happens with a pre-buffer. In normal streaming operation, Motion's load is negligible.

Just made a small test with Motion and NR, running on a RPi3B+, one USB camera connected, resolution 1024x768:

Normal operation, no Buffering: CPU around 6-8%, MEM footprint 5%
Creation of Buffer, pre-capturing 100 frames: CPU around 6-8%, MEM footprint 16%
Creation of Buffer, pre-capturing 200 frames: CPU around 6-8%, MEM footprint 28%

This confirms that buffering frames consumes a lot of memory. Surprisingly, the CPU load was unchanged.

Needless to say, the CPU load increased to around 45-50% while the video file was created. EDIT: that is, while saving, in my case to a USB memory stick.

Thanks to all. I didn't know that starting a topic via messages would make it private. So I started this topic by messaging @kevinGodell, who suggested I make it public.

  • I've tried to install Kevin's node on Windows 10, but I always get some errors:
C:\Users\Io\.node-red>npm install kevinGodell/node-red-contrib-mp4frag
npm ERR! code ENOENT
npm ERR! syscall spawn git
npm ERR! path git
npm ERR! errno -4058
npm ERR! enoent Error while executing:
npm ERR! enoent undefined ls-remote -h -t ssh://git@github.com/kevinGodell/node-red-contrib-mp4frag.git
npm ERR! enoent
npm ERR! enoent
npm ERR! enoent spawn git ENOENT
npm ERR! enoent This is related to npm not being able to find a file.
npm ERR! enoent

npm ERR! A complete log of this run can be found in:
npm ERR!     C:\Users\Io\AppData\Roaming\npm-cache\_logs\2021-01-25T14_18_42_434Z-debug.log

and even trying with yarn

> C:\Users\Io\.node-red>yarn add kevinGodell/node-red-contrib-mp4frag
yarn add v1.22.10
info No lockfile found.
warning package-lock.json found. Your project contains lock files generated by tools other than Yarn. It is advised not to mix package managers in order to avoid resolution inconsistencies caused by unsynchronized lock files. To clear this warning, remove package-lock.json.
[1/4] Resolving packages...
warning node-red-dashboard > socket.io > debug@4.1.1: Debug versions >=3.2.0 <3.2.7 || >=4 <4.3.1 have a low-severity ReDos regression when used in a Node.js environment. It is recommended you upgrade to 3.2.7 or 4.3.1. (https://github.com/visionmedia/debug/issues/797)
warning node-red-dashboard > socket.io > engine.io > debug@4.1.1: Debug versions >=3.2.0 <3.2.7 || >=4 <4.3.1 have a low-severity ReDos regression when used in a Node.js environment. It is recommended you upgrade to 3.2.7 or 4.3.1. (https://github.com/visionmedia/debug/issues/797)
warning node-red-dashboard > socket.io > socket.io-parser > debug@4.1.1: Debug versions >=3.2.0 <3.2.7 || >=4 <4.3.1 have a low-severity ReDos regression when used in a Node.js environment. It is recommended you upgrade to 3.2.7 or 4.3.1. (https://github.com/visionmedia/debug/issues/797)
warning node-red-node-openweathermap > request@2.88.2: request has been deprecated, see https://github.com/request/request/issues/3142
warning node-red-node-openweathermap > request > har-validator@5.1.5: this library is no longer supported
error Couldn't find the binary git
info Visit https://yarnpkg.com/en/docs/cli/add for documentation about this command.
  • I came across this topic several times while doing research and it helped me a lot. After your suggestion I read it from the beginning more carefully and I understood more about streaming and recording video, but I still cannot find a real solution to save a video of a specified length with pre-trigger timing.
  • Regarding using an external process like Motion, I would like to avoid it for these main reasons:
    -maybe it's due to my settings, given my lack of knowledge, but at the moment my Pi3 is consuming less CPU/memory than when I used Motion to get an http/rtsp stream from my IP camera;
    -I need a solution that also works on Windows, and as far as I know Motion is only available for Linux.

What about the ffmpeg commands I highlighted? They seem to do what I need, but when I try them nothing happens, just as if they weren't being applied at all.

I guess you have to wait for Kevin regarding those error messages, and it is maybe also better to first get everything working on your RPi instead of jumping between RPi & Windows etc.

When Motion was mentioned, it was just an example of an external application that has the requested features built in, not a recommendation for you to use it. I said I use it, and it is possible to control/start/stop recording to file (like mp4), including a pre-recorded buffer, by sending http requests to Motion from NR. But I guess a more "NR-adapted" solution would eventually be a new node that starts an external process, captures the desired stream, creates a buffer and, when told, starts creating a video file beginning with the buffer content, until it is told to stop. This kind of node does not exist, at least not to my knowledge.
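As an illustration of the http-request route, a function node could build the Motion webcontrol URL and pass it to an http request node. This is a hedged sketch: host, port and camera id are assumptions, and the exact endpoints depend on your Motion version and webcontrol configuration:

```javascript
// Sketch of triggering Motion's pre-buffered recording from Node-RED via
// Motion's webcontrol interface. Host, port (8080 is Motion's common
// webcontrol default) and camera id are assumptions for illustration;
// check your motion.conf for the actual webcontrol_port and camera ids.
function motionEventUrl(host, camera, action /* 'eventstart' | 'eventend' */) {
  return `http://${host}:8080/${camera}/action/${action}`;
}

// In a function node, feeding an http request node:
// msg.url = motionEventUrl('192.168.1.10', 0, 'eventstart');
// return msg;
```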

Regarding ffmpeg, well, I'm not sure it should be used for this type of video file creation if a node is developed, but I created a video file with this simple syntax:
ffmpeg -re -i http://192.168.0.237:8889/?action=stream /home/pi/video.mkv

Thanks krambriw.
I need it to work on both PC and Pi, since I simply use both.
Maybe I've not been clear enough. At the moment I'm able to do it as either a single streaming process or a single process storing a fixed-length IP camera video (but for sure I'll run some tests with your suggestion too). What I'm not able to do is record 10s of pre-trigger video.
I've seen several topics come up when googling "ffmpeg lightning recorder". Maybe some of these can help you, since their authors surely have much more knowledge of ffmpeg than me.

Well, I'm not developing nodes myself; like you, I'm hoping to find something... until then I'm using Motion :wink:

And you are right, this link describes a very interesting way. So it might be that a NR node could simply control ffmpeg and duplicate sessions for the task:

https://trac.ffmpeg.org/wiki/Capture/Lightning

It seems that you don't have git installed. I found a similar problem with a recommended solution. Either git is not installed or you have not configured it to be in your PATH.

Thanks a lot Kevin. It works now, so I can finally try your node.

I still cannot find a way to do pre-trigger video recording.
Any suggestion is highly appreciated. Thanks a lot to all.

There are many ways that come to mind, but it depends on exactly what you are trying to do:

  • Are you creating the recording on the same server where the mp4frag-node is creating video?
  • How are you triggering it to start the recording? (some type of motion detection node?)

Possible solutions include:

  • using ffmpeg in another node to read the video over http from the hls.m3u8, which will have a buffer of video that occurred before your trigger. (a possible problem is dealing with authentication if you have some type of middleware to verify client access)
  • I could make the internal mp4frag buffer available as an object reference in the node-red context, which would make it available to other nodes to create a file stream from it. (I am not sure if node-red will make a copy or just keep a reference to the object; we definitely do not want a copy)
  • I could create a 2nd output on the mp4frag node that outputs an object wrapped around the internal mp4frag buffer, with some convenience getters to access the buffered video.
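The first option can be sketched as a second ffmpeg invocation that reads the served HLS playlist and stream-copies a fixed duration to disk; since the playlist already holds a few buffered segments, the output starts some seconds before the trigger. The route `/mp4frag/cam1/hls.m3u8` and port 1880 below are assumptions about a typical Node-RED setup, not documented defaults:

```javascript
// Sketch of option 1: a second ffmpeg reads the mp4frag HLS playlist over
// http and stream-copies `seconds` of video to a file. The playlist URL and
// output name are placeholders for illustration.
function buildRecordArgs(playlistUrl, seconds, outFile) {
  return [
    '-i', playlistUrl,       // e.g. http://127.0.0.1:1880/mp4frag/cam1/hls.m3u8
    '-c', 'copy',            // no re-encode
    '-t', String(seconds),   // fixed recording duration
    outFile,
  ];
}

// e.g. spawn('ffmpeg', buildRecordArgs('http://127.0.0.1:1880/mp4frag/cam1/hls.m3u8', 60, 'event.mp4'))
```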

@kevinGodell

Hi, assuming I now have a number of recorded video clips in the usual mp4 format. They all play well in VLC. I then tried to play them directly using your ui_mp4frag. Should that be possible?

My playlist looks like this, just holding the path and file name

{"hlsPlaylist":"/home/pi/pics/08-20210127122901-CAM41.mp4"}

The node says 'loaded' but nothing is shown. Maybe this shouldn't work? Do I have to use ffmpeg to read the video file and transfer the data via mp4frag and then ui_mp4frag?

  • Are you creating the recording on the same server where the mp4frag-node is creating video?

Yes

  • How are you triggering it to start the recording? (some type of motion detection node?)

Yes, exactly. In the end the trigger will be made by means of object recognition.

Possible solutions include:
.......

You are the expert. Since it would be great to have it working even on a Pi, maybe it's important to also consider its limits. But from what I'm testing and reading in this forum, it shouldn't be a big problem.
There is no middleware to verify client access.

The ui_mp4frag node cannot read directly from disk. It works by receiving a playlist that gives it the http routes to the various mp4 resources, such as video.mp4 or hls.m3u8, or via socket.io.
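So, instead of a filesystem path, the playlist message should carry an http route. A minimal sketch, assuming the stream is served under a route like `/mp4frag/cam1` (that route name is an assumption, not something the node guarantees; `hlsPlaylist` is the property used earlier in this thread):

```javascript
// Illustrative shape of a playlist msg for ui_mp4frag: an http route, not a
// filesystem path like /home/pi/pics/....mp4. The base route is a placeholder.
function makePlaylistMsg(base) {
  return { payload: { hlsPlaylist: `${base}/hls.m3u8` } };
}
```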