Use ffmpeg to create MP4 from JPEG images sent over MQTT

Hi all,
I'm working on a project with nest boxes, monitoring the birds in them. To do that, we use a camera as well as different sensors, all controlled by an ESP32 dev board. The sensor and camera data are sent to an MQTT broker and displayed on a dashboard. However, instead of just showing the images I'd love to record some short video clips (MP4 format) that can be accessed from the dashboard.

Browsing this forum I came across the cctv nodes of @kevinGodell, and I thought of using the ffmpeg node to convert the incoming JPEG images into an MP4 clip. However, all projects I saw used HTTP as the protocol to communicate the camera data, while I'm using MQTT. I don't have much experience with Node-RED myself, so I was wondering whether it would be possible to use the nodes with MQTT as well. Do you guys have any clues? @krambriw @BartButenaers

Any help would be appreciated!

Regards,
Lennard

hi @lduynkerke. Welcome.

I think you are receiving the jpegs via mqtt and then they somehow make it to some display on your node-red.

There are maybe 2 ways that can be accomplished, but it depends on your needs. I will have to ask some questions to be able to give a better suggestion.

Are you receiving the jpegs at a constant interval such as 1 jpeg per 5 seconds or is it less frequent or erratic based on some triggering mechanism that detects motion or activity on the bird house/nest?

Are you saving the jpegs to disk on the node-red for later viewing?

If you are receiving the jpegs regularly, then creating mp4 video should be possible.

The other option is that if you are saving the jpegs to disk, then at some later time when you have enough of them, you could probably tell ffmpeg to create mp4 video that will pretty much be an image slideshow generated from the jpeg files.
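For that second option, a minimal sketch of how the ffmpeg call could look, wrapped in Python via subprocess (the file pattern, frame rate, and output name are assumptions to adjust to your own setup):

```python
import subprocess

def slideshow_cmd(pattern, fps, out):
    # Build an ffmpeg argv that turns numbered jpeg files on disk into an
    # H.264 mp4 "slideshow". "-framerate" sets how long each jpeg is shown;
    # "-pix_fmt yuv420p" keeps the result playable in most players.
    return [
        "ffmpeg", "-y",            # -y: overwrite the output without asking
        "-framerate", str(fps),
        "-i", pattern,             # e.g. "frame_%03d.jpg"
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",
        out,
    ]

cmd = slideshow_cmd("frame_%03d.jpg", 10, "clip.mp4")
# subprocess.run(cmd, check=True)  # run once the jpegs are on disk
```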

Hi @kevinGodell,
Thanks for your quick response! Right now the jpegs are taken at a constant interval (every 15 minutes) and stored in Azure Blob Storage. I could apply the same mechanism to the video clips, but constant streaming costs too much electricity, since the nest boxes run on a solar panel and some LiPo batteries only. So in order to create useful video clips there'll be a trigger mechanism, after which I'd like to record a ~15 sec video with a frame rate of 10 FPS.

Since the infrastructure for the jpegs is already there, it seems logical to use that for the video clips too (the hardware is unfortunately not good enough to record an mp4 file and send it to Node-RED). Ideally I'd like to convert the incoming jpegs into an mp4 file immediately, but it's also possible to store them in the Blob storage and run ffmpeg after I've received all 150 frames. I could even use HTTP to send the images if MQTT would be too slow or if that's easier to implement in Node-RED. But if MQTT is as easy to implement as HTTP I would prefer MQTT, as the Node-RED flow and ESP32 code are already there.

I just don't know what the most straightforward approach would be, since this is my first project with Node-RED. I'd love to hear your opinion! Thanks in advance!

Regards,
Lennard

I have a solution, but it also needs Python (since I like that)

In the following example I have a simple flow that just requests an image every second from a public camera and publishes it to an MQTT broker. Very similar to what you do, I think

[
    {
        "id": "ecc6c6fb8c950e4f",
        "type": "inject",
        "z": "aea09193ec9a7e2a",
        "name": "Stop stream",
        "props": [
            {
                "p": "stop",
                "v": "true",
                "vt": "bool"
            }
        ],
        "repeat": "",
        "crontab": "",
        "once": false,
        "onceDelay": "",
        "topic": "",
        "x": 130,
        "y": 110,
        "wires": [
            [
                "098062df0900fe5e"
            ]
        ]
    },
    {
        "id": "098062df0900fe5e",
        "type": "multipart-decoder",
        "z": "aea09193ec9a7e2a",
        "name": "",
        "ret": "bin",
        "url": "",
        "tls": "",
        "authentication": "none",
        "delay": 0,
        "maximum": "1700000",
        "blockSize": "1",
        "enableLog": "on",
        "x": 370,
        "y": 60,
        "wires": [
            [
                "efc5178ac411e16a"
            ],
            []
        ]
    },
    {
        "id": "7243d4f8d03405de",
        "type": "inject",
        "z": "aea09193ec9a7e2a",
        "name": "Working stream 1",
        "props": [
            {
                "p": "url",
                "v": "http://cam1.rauris.net/axis-cgi/mjpg/video.cgi",
                "vt": "str"
            }
        ],
        "repeat": "",
        "crontab": "",
        "once": false,
        "onceDelay": 0.1,
        "topic": "",
        "x": 140,
        "y": 60,
        "wires": [
            [
                "098062df0900fe5e"
            ]
        ]
    },
    {
        "id": "efc5178ac411e16a",
        "type": "mqtt out",
        "z": "aea09193ec9a7e2a",
        "name": "",
        "topic": "videoclip_test",
        "qos": "0",
        "retain": "",
        "respTopic": "",
        "contentType": "",
        "userProps": "",
        "correl": "",
        "expiry": "",
        "broker": "d25677b9.097f68",
        "x": 590,
        "y": 60,
        "wires": []
    },
    {
        "id": "f89aeeedf8f5e46b",
        "type": "mqtt in",
        "z": "aea09193ec9a7e2a",
        "name": "",
        "topic": "videoclip_test",
        "qos": "2",
        "datatype": "auto",
        "broker": "d25677b9.097f68",
        "nl": false,
        "rap": false,
        "inputs": 0,
        "x": 130,
        "y": 180,
        "wires": [
            [
                "9f742e8c39ddc986"
            ]
        ]
    },
    {
        "id": "9f742e8c39ddc986",
        "type": "image",
        "z": "aea09193ec9a7e2a",
        "name": "",
        "width": 160,
        "data": "payload",
        "dataType": "msg",
        "thumbnail": false,
        "active": true,
        "pass": false,
        "outputs": 0,
        "x": 370,
        "y": 180,
        "wires": []
    },
    {
        "id": "730385076178e5cd",
        "type": "mqtt out",
        "z": "aea09193ec9a7e2a",
        "name": "",
        "topic": "videoclip_test",
        "qos": "0",
        "retain": "",
        "respTopic": "",
        "contentType": "",
        "userProps": "",
        "correl": "",
        "expiry": "",
        "broker": "d25677b9.097f68",
        "x": 370,
        "y": 410,
        "wires": []
    },
    {
        "id": "24012662ca2ed0ad",
        "type": "inject",
        "z": "aea09193ec9a7e2a",
        "name": "",
        "props": [
            {
                "p": "payload"
            }
        ],
        "repeat": "",
        "crontab": "",
        "once": false,
        "onceDelay": 0.1,
        "topic": "",
        "payload": "Stop-img-capturing",
        "payloadType": "str",
        "x": 150,
        "y": 410,
        "wires": [
            [
                "730385076178e5cd"
            ]
        ]
    },
    {
        "id": "d25677b9.097f68",
        "type": "mqtt-broker",
        "name": "",
        "broker": "127.0.0.1",
        "port": "1883",
        "clientid": "",
        "usetls": false,
        "protocolVersion": "4",
        "keepalive": "60",
        "cleansession": true,
        "birthTopic": "",
        "birthQos": "0",
        "birthPayload": "",
        "birthMsg": {},
        "closeTopic": "",
        "closeQos": "0",
        "closePayload": "",
        "closeMsg": {},
        "willTopic": "",
        "willQos": "0",
        "willPayload": "",
        "willMsg": {},
        "sessionExpiry": ""
    }
]

Then I have a Python script (code below) that subscribes to the topic, receives the image frames, does some processing, and saves them to an .avi file that plays fine in VLC

I run the Python script on a Raspberry Pi from /home/pi and the .avi file is saved there as well. Obviously this script can be tuned to whatever needs one might have, so this is just an idea

To make the Python script work you need to know how to install the necessary Python libraries listed in the import section of the file. If you don't feel comfortable with Python, other solutions might come up here in this thread

#!/usr/bin/python

# import the necessary packages
import paho.mqtt.client as mqtt
import numpy as np
import cv2
from threading import Event, Thread


def mqttConnector(mqtt_thread_Event, client):
    fourcc = cv2.VideoWriter_fourcc(*'MJPG')
    result = cv2.VideoWriter("clip.avi", fourcc, 1.0, (704, 576))
    global mqttBroker
    global th_abort

    def on_connect(client, userdata, flags, rc):
        print("Connected to broker:", rc)
        client.subscribe("videoclip_test", 0) 
    
    
    def on_subscribe(client, userdata, mid, granted_qos):
        print ('Subscribed to topic:', userdata, mid, granted_qos)


    def on_message(client, userdata, msg):
        global th_abort
        global frame

        if(len(msg.payload)>1000):
            img = msg.payload
            start = img[0:2]
            end = img[-2:]
            
            if start == b'\xff\xd8' and end == b'\xff\xd9':
#                print (start, end)
                # we have a complete jpeg
                # convert string of image data to uint8
                nparr = np.frombuffer(img, np.uint8)
                # decode image
                img = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
                (H, W) = img.shape[:2]
                frame += 1
                print (frame, W, H)
                result.write(img)

        if msg.payload == b'Stop-img-capturing':  # compare raw bytes; jpeg payloads are not valid utf-8
            print (msg.payload.decode("utf-8"))
            th_abort = True
            

    client.on_connect = on_connect
    client.on_message = on_message
    client.on_subscribe = on_subscribe
    resp = client.connect(mqttBroker, 1883, 60)
    client.loop_start()
    while not th_abort:
        mqtt_thread_Event.wait(0.1)
    client.loop_stop()
    client.disconnect()
    result.release()  # finalize the .avi so the file is playable
    print ("mqttConnector: stopped...")
    exit()


# inits
th_abort = False
mqttBroker = '127.0.0.1' 
frame = 0

client = mqtt.Client()  # paho-mqtt client (the old Mosquitto alias is long deprecated)

mqtt_thread_Event = Event()
mqtt_thread = Thread(target=mqttConnector, args=(mqtt_thread_Event,client,))
mqtt_thread.start()  # start the child thread



That should be ok. I just ran a little experiment where I used ffmpeg to create mp4 video from an rtsp ip camera and output jpegs at a regular interval. Then, I fed those jpegs into another ffmpeg to create mp4 video from that.

One catch is that we will have to be specific when giving ffmpeg the correct info, as it will not be able to guess the frame rate and maybe some other details. Also, since there will be transcoding, a high cpu load is expected unless you have access to hardware accelerated encoders.
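To be explicit about those details when the jpegs arrive one by one (e.g. straight from MQTT payloads), ffmpeg's image2pipe input can be fed over stdin. A sketch along those lines, where the frame rate, encoder, and output name are assumptions to adjust:

```python
import subprocess

def pipe_cmd(fps, out):
    # ffmpeg cannot guess timing from a pipe of jpegs, so the frame rate
    # must be stated up front with -framerate.
    return [
        "ffmpeg", "-y",
        "-f", "image2pipe",      # input is a stream of images, not a file
        "-framerate", str(fps),
        "-i", "-",               # read the jpegs from stdin
        "-c:v", "libx264",       # software encoder; expect cpu load as noted
        "-pix_fmt", "yuv420p",
        out,
    ]

def frames_to_mp4(jpeg_buffers, fps=10, out="clip.mp4"):
    # jpeg_buffers: an iterable of complete jpeg byte strings, e.g. the
    # collected MQTT payloads, written back-to-back to ffmpeg's stdin.
    proc = subprocess.Popen(pipe_cmd(fps, out), stdin=subprocess.PIPE)
    for buf in jpeg_buffers:
        proc.stdin.write(buf)
    proc.stdin.close()
    return proc.wait()
```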

What is the resolution of the incoming jpegs (width x height)?

Can you check what h264 encoders you have available in ffmpeg and post the output?
ffmpeg -encoders|grep h264

ffmpeg version 4.3.4-0+deb11u1+rpt3 Copyright (c) 2000-2021 the FFmpeg developers
  built with gcc 10 (Debian 10.2.1-6)
  configuration: --prefix=/usr --extra-version=0+deb11u1+rpt3 --toolchain=hardened --incdir=/usr/include/aarch64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libdav1d --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --disable-mmal --enable-neon --enable-v4l2-request --enable-libudev --enable-epoxy --enable-sand --libdir=/usr/lib/aarch64-linux-gnu --arch=arm64 --enable-pocketsphinx --enable-libdc1394 --enable-libdrm --enable-vout-drm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
  libavutil      56. 51.100 / 56. 51.100
  libavcodec     58. 91.100 / 58. 91.100
  libavformat    58. 45.100 / 58. 45.100
  libavdevice    58. 10.100 / 58. 10.100
  libavfilter     7. 85.100 /  7. 85.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  7.100 /  5.  7.100
  libswresample   3.  7.100 /  3.  7.100
  libpostproc    55.  7.100 / 55.  7.100
 V..... libx264              libx264 H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 (codec h264)
 V..... libx264rgb           libx264 H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 RGB (codec h264)
 V..... h264_omx             OpenMAX IL H.264 video encoder (codec h264)
 V..... h264_v4l2m2m         V4L2 mem2mem H.264 encoder wrapper (codec h264)
 V..... h264_vaapi           H.264/AVC (VAAPI) (codec h264)

I just played a bit more with the Python way. I have created a small video demo on my Google Drive that demonstrates the usage. I also updated the script so it accepts command line params. The quality of the .avi seems to be fine; no transcoding is needed in that case

#!/usr/bin/python

# Example usage:
# python3 buffer_to_video.py -wd 704 -ht 576 -f 60 -fr 1 -cn clip1.avi -brt topic

# import the necessary packages
import paho.mqtt.client as mqtt
import numpy as np
import cv2
import argparse
from threading import Event, Thread


def mqttConnector(mqtt_thread_Event, client, args):
    print (args)
    fourcc = cv2.VideoWriter_fourcc(*'MJPG')
    result = cv2.VideoWriter(args['cname'], fourcc, int(args['fps']), (int(args['width']), int(args['height'])))
    global mqttBroker
    global th_abort

    def on_connect(client, userdata, flags, rc):
        print("Connected to broker:", rc)
        client.subscribe(args['brtopic'], 0) 
    
    
    def on_subscribe(client, userdata, mid, granted_qos):
        print ('Subscribed to topic:', userdata, mid, granted_qos)


    def on_message(client, userdata, msg):
        global th_abort
        global frame

        if(len(msg.payload)>1000):
            img = msg.payload
            start = img[0:2]
            end = img[-2:]
            
            if start == b'\xff\xd8' and end == b'\xff\xd9':
#                print (start, end)
                # we have a complete jpeg
                # convert string of image data to uint8
                nparr = np.frombuffer(img, np.uint8)
                # decode image
                img = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
                (H, W) = img.shape[:2]
                frame += 1
                print (frame, W, H)
                result.write(img)
                if frame >= int(args['frames']):
                    print ('clip finalized')
                    th_abort = True
         

    client.on_connect = on_connect
    client.on_message = on_message
    client.on_subscribe = on_subscribe
    resp = client.connect(mqttBroker, 1883, 60)
    client.loop_start()
    while not th_abort:
        mqtt_thread_Event.wait(0.1)
    client.loop_stop()
    client.disconnect()
    result.release()  # finalize the .avi so the file is playable
    print ("mqttConnector: stopped...")
    exit()


# inits
th_abort = False
mqttBroker = '127.0.0.1' 
frame = 0

ap = argparse.ArgumentParser()
ap.add_argument(
    "-wd",
    "--width", 
    required=True, 
    help="image width"
)
ap.add_argument(
    "-ht",
    "--height", 
    required=True, 
    help="image height"
)
ap.add_argument(
    "-f",
    "--frames", 
    required=True, 
    help="frames to capture"
)
ap.add_argument(
    "-fr",
    "--fps", 
    required=True, 
    help="framerate"
)
ap.add_argument(
    "-cn",
    "--cname", 
    required=True, 
    help="name and path for video clip"
)
ap.add_argument(
    "-brt",
    "--brtopic", 
    required=True, 
    help="mqtt broker topic"
)
args = vars(ap.parse_args())

client = mqtt.Client()  # paho-mqtt client (the old Mosquitto alias is long deprecated)

mqtt_thread_Event = Event()
mqtt_thread = Thread(target=mqttConnector, args=(mqtt_thread_Event,client,args,))
mqtt_thread.start()  # start the child thread

Walter, can you include your new flow example?

Oh yes, sorry

Some words about it: you need to install some nodes you might be missing

  • node-red-contrib-multipart-stream-decoder
  • node-red-contrib-image-output

In the exec node configurations in the flow you can see I use uxterm as the terminal on my RPi. You can use the standard lxterminal instead

I did put the Python script into my /home/pi folder

If the examples don't work straight away, you may need to double-check the size of the images from your camera and then adjust the params accordingly
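One way to check that size without extra tools is a small pure-Python sketch that scans a jpeg buffer (for instance a single MQTT payload) for its start-of-frame marker and reports the dimensions to use for -wd and -ht (the file path in the comment is a placeholder):

```python
import struct

def jpeg_size(data):
    # Walk the jpeg segment markers looking for a start-of-frame (SOF)
    # segment, which carries the image height and width.
    i = 2  # skip the SOI marker (FF D8)
    while i + 9 <= len(data):
        if data[i] != 0xFF:
            i += 1
            continue
        marker = data[i + 1]
        # SOF0..SOF15 hold the frame size (C4/C8/CC are other segment types)
        if 0xC0 <= marker <= 0xCF and marker not in (0xC4, 0xC8, 0xCC):
            h, w = struct.unpack(">HH", data[i + 5:i + 9])
            return w, h
        seg_len = struct.unpack(">H", data[i + 2:i + 4])[0]
        i += 2 + seg_len
    return None

# with open("frame.jpg", "rb") as f:   # path is a placeholder
#     print(jpeg_size(f.read()))       # -> (width, height)
```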

Best regards, Walter

[
    {
        "id": "ecc6c6fb8c950e4f",
        "type": "inject",
        "z": "aea09193ec9a7e2a",
        "name": "Stop stream",
        "props": [
            {
                "p": "stop",
                "v": "true",
                "vt": "bool"
            }
        ],
        "repeat": "",
        "crontab": "",
        "once": false,
        "onceDelay": "",
        "topic": "",
        "x": 130,
        "y": 110,
        "wires": [
            [
                "098062df0900fe5e"
            ]
        ]
    },
    {
        "id": "098062df0900fe5e",
        "type": "multipart-decoder",
        "z": "aea09193ec9a7e2a",
        "name": "",
        "ret": "bin",
        "url": "",
        "tls": "",
        "authentication": "none",
        "delay": 0,
        "maximum": "1700000",
        "blockSize": "1",
        "enableLog": "off",
        "x": 370,
        "y": 60,
        "wires": [
            [
                "efc5178ac411e16a"
            ],
            []
        ]
    },
    {
        "id": "7243d4f8d03405de",
        "type": "inject",
        "z": "aea09193ec9a7e2a",
        "name": "Working stream 1",
        "props": [
            {
                "p": "url",
                "v": "http://cam1.rauris.net/axis-cgi/mjpg/video.cgi",
                "vt": "str"
            }
        ],
        "repeat": "",
        "crontab": "",
        "once": false,
        "onceDelay": 0.1,
        "topic": "",
        "x": 140,
        "y": 60,
        "wires": [
            [
                "098062df0900fe5e"
            ]
        ]
    },
    {
        "id": "efc5178ac411e16a",
        "type": "mqtt out",
        "z": "aea09193ec9a7e2a",
        "name": "",
        "topic": "videoclip_test1",
        "qos": "0",
        "retain": "",
        "respTopic": "",
        "contentType": "",
        "userProps": "",
        "correl": "",
        "expiry": "",
        "broker": "d25677b9.097f68",
        "x": 590,
        "y": 60,
        "wires": []
    },
    {
        "id": "f89aeeedf8f5e46b",
        "type": "mqtt in",
        "z": "aea09193ec9a7e2a",
        "name": "",
        "topic": "videoclip_test1",
        "qos": "2",
        "datatype": "auto",
        "broker": "d25677b9.097f68",
        "nl": false,
        "rap": false,
        "inputs": 0,
        "x": 140,
        "y": 320,
        "wires": [
            [
                "9f742e8c39ddc986"
            ]
        ]
    },
    {
        "id": "9f742e8c39ddc986",
        "type": "image",
        "z": "aea09193ec9a7e2a",
        "name": "",
        "width": 160,
        "data": "payload",
        "dataType": "msg",
        "thumbnail": false,
        "active": true,
        "pass": false,
        "outputs": 0,
        "x": 370,
        "y": 320,
        "wires": []
    },
    {
        "id": "694882bcdf8e3d8c",
        "type": "inject",
        "z": "aea09193ec9a7e2a",
        "name": "",
        "props": [
            {
                "p": "payload"
            },
            {
                "p": "topic",
                "vt": "str"
            }
        ],
        "repeat": "",
        "crontab": "",
        "once": false,
        "onceDelay": 0.1,
        "topic": "",
        "payload": "true",
        "payloadType": "bool",
        "x": 110,
        "y": 600,
        "wires": [
            [
                "70057cfec9b88995"
            ]
        ]
    },
    {
        "id": "70057cfec9b88995",
        "type": "exec",
        "z": "aea09193ec9a7e2a",
        "command": "export DISPLAY=:0 && uxterm -e python3 buffer_to_video.py -wd 704 -ht 576 -f 60 -fr 1 -cn /home/pi/clip1.avi -brt videoclip_test1 ",
        "addpay": "",
        "append": "",
        "useSpawn": "false",
        "timer": "",
        "winHide": false,
        "oldrc": false,
        "name": "",
        "x": 720,
        "y": 600,
        "wires": [
            [],
            [],
            []
        ]
    },
    {
        "id": "746ee5747065054a",
        "type": "inject",
        "z": "aea09193ec9a7e2a",
        "name": "Stop stream",
        "props": [
            {
                "p": "stop",
                "v": "true",
                "vt": "bool"
            }
        ],
        "repeat": "",
        "crontab": "",
        "once": false,
        "onceDelay": "",
        "topic": "",
        "x": 130,
        "y": 230,
        "wires": [
            [
                "f3f4481f8fa16397"
            ]
        ]
    },
    {
        "id": "f3f4481f8fa16397",
        "type": "multipart-decoder",
        "z": "aea09193ec9a7e2a",
        "name": "",
        "ret": "bin",
        "url": "",
        "tls": "",
        "authentication": "none",
        "delay": 0,
        "maximum": "1700000",
        "blockSize": "1",
        "enableLog": "off",
        "x": 370,
        "y": 180,
        "wires": [
            [
                "ca34ed126ab14fef"
            ],
            []
        ]
    },
    {
        "id": "d97b1790c0382bd6",
        "type": "inject",
        "z": "aea09193ec9a7e2a",
        "name": "Working stream 2",
        "props": [
            {
                "p": "url",
                "v": "http://192.168.0.235:8081",
                "vt": "str"
            }
        ],
        "repeat": "",
        "crontab": "",
        "once": false,
        "onceDelay": 0.1,
        "topic": "",
        "x": 140,
        "y": 180,
        "wires": [
            [
                "f3f4481f8fa16397"
            ]
        ]
    },
    {
        "id": "ca34ed126ab14fef",
        "type": "mqtt out",
        "z": "aea09193ec9a7e2a",
        "name": "",
        "topic": "videoclip_test2",
        "qos": "0",
        "retain": "",
        "respTopic": "",
        "contentType": "",
        "userProps": "",
        "correl": "",
        "expiry": "",
        "broker": "d25677b9.097f68",
        "x": 590,
        "y": 180,
        "wires": []
    },
    {
        "id": "76ad315691065e65",
        "type": "inject",
        "z": "aea09193ec9a7e2a",
        "name": "",
        "props": [
            {
                "p": "payload"
            },
            {
                "p": "topic",
                "vt": "str"
            }
        ],
        "repeat": "",
        "crontab": "",
        "once": false,
        "onceDelay": 0.1,
        "topic": "",
        "payload": "true",
        "payloadType": "bool",
        "x": 110,
        "y": 660,
        "wires": [
            [
                "0a811f8124664908"
            ]
        ]
    },
    {
        "id": "0a811f8124664908",
        "type": "exec",
        "z": "aea09193ec9a7e2a",
        "command": "export DISPLAY=:0 && uxterm -e python3 buffer_to_video.py -wd 1024 -ht 768 -f 15 -fr 1 -cn /home/pi/clip2.avi -brt videoclip_test2 ",
        "addpay": "",
        "append": "",
        "useSpawn": "false",
        "timer": "",
        "winHide": false,
        "oldrc": false,
        "name": "",
        "x": 720,
        "y": 660,
        "wires": [
            [],
            [],
            []
        ]
    },
    {
        "id": "8afd3ccf4d243154",
        "type": "mqtt in",
        "z": "aea09193ec9a7e2a",
        "name": "",
        "topic": "videoclip_test2",
        "qos": "2",
        "datatype": "auto",
        "broker": "d25677b9.097f68",
        "nl": false,
        "rap": false,
        "inputs": 0,
        "x": 590,
        "y": 320,
        "wires": [
            [
                "4b5ae3139c843352"
            ]
        ]
    },
    {
        "id": "4b5ae3139c843352",
        "type": "image",
        "z": "aea09193ec9a7e2a",
        "name": "",
        "width": 160,
        "data": "payload",
        "dataType": "msg",
        "thumbnail": false,
        "active": true,
        "pass": false,
        "outputs": 0,
        "x": 820,
        "y": 320,
        "wires": []
    },
    {
        "id": "d25677b9.097f68",
        "type": "mqtt-broker",
        "name": "",
        "broker": "127.0.0.1",
        "port": "1883",
        "clientid": "",
        "usetls": false,
        "protocolVersion": "4",
        "keepalive": "60",
        "cleansession": true,
        "birthTopic": "",
        "birthQos": "0",
        "birthPayload": "",
        "birthMsg": {},
        "closeTopic": "",
        "closeQos": "0",
        "closePayload": "",
        "closeMsg": {},
        "willTopic": "",
        "willQos": "0",
        "willPayload": "",
        "willMsg": {},
        "sessionExpiry": ""
    }
]

Hi guys,

Thank you so much for your replies! Sorry for the late response though, I've been really busy with some other projects lately.

@krambriw I actually like the Python approach a lot, although I don't know how fast it is compared to the ffmpeg nodes. For now there are 30 nest boxes, but we would like to scale this to 200 in the near future. Therefore I'd like to build a flow that is scalable, and I can't really judge whether the Python script is scalable or not. Do you have any insights on this?

@kevinGodell For now I'm running a test server on a Raspberry Pi; we will be running it in a Docker container eventually. The following encoders are available to me right now:

lennard@raspberrypi:~ $ ffmpeg -encoders | grep h264
ffmpeg version 4.3.6-0+deb11u1+rpt1 Copyright (c) 2000-2023 the FFmpeg developers
  built with gcc 10 (Raspbian 10.2.1-6+rpi1)
  configuration: --prefix=/usr --extra-version=0+deb11u1+rpt1 --toolchain=hardened --incdir=/usr/include/arm-linux-gnueabihf --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libdav1d --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-mmal --enable-neon --enable-rpi --enable-v4l2-request --enable-libudev --enable-epoxy --enable-pocketsphinx --enable-libdc1394 --enable-libdrm --enable-vout-drm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared --libdir=/usr/lib/arm-linux-gnueabihf --cpu=arm1176jzf-s --arch=arm
  WARNING: library configuration mismatch
  avutil      configuration: --prefix=/usr --extra-version=0+deb11u1+rpt1 --toolchain=hardened --incdir=/usr/include/arm-linux-gnueabihf --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libdav1d --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-mmal --enable-neon --enable-rpi --enable-v4l2-request --enable-libudev --enable-epoxy --enable-pocketsphinx --enable-libdc1394 --enable-libdrm --enable-vout-drm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --libdir=/usr/lib/arm-linux-gnueabihf/neon/vfp --cpu=cortex-a7 --arch=armv6t2 --disable-thumb --enable-shared --disable-doc --disable-programs
  avcodec     configuration: --prefix=/usr --extra-version=0+deb11u1+rpt1 --toolchain=hardened --incdir=/usr/include/arm-linux-gnueabihf --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libdav1d --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-mmal --enable-neon --enable-rpi --enable-v4l2-request --enable-libudev --enable-epoxy --enable-pocketsphinx --enable-libdc1394 --enable-libdrm --enable-vout-drm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --libdir=/usr/lib/arm-linux-gnueabihf/neon/vfp --cpu=cortex-a7 --arch=armv6t2 --disable-thumb --enable-shared --disable-doc --disable-programs
  libavutil      56. 51.100 / 56. 51.100
  libavcodec     58. 91.100 / 58. 91.100
  libavformat    58. 45.100 / 58. 45.100
  libavdevice    58. 10.100 / 58. 10.100
  libavfilter     7. 85.100 /  7. 85.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  7.100 /  5.  7.100
  libswresample   3.  7.100 /  3.  7.100
  libpostproc    55.  7.100 / 55.  7.100
 V..... libx264              libx264 H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 (codec h264)
 V..... libx264rgb           libx264 H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 RGB (codec h264)
 V..... h264_omx             OpenMAX IL H.264 video encoder (codec h264)
 V..... h264_v4l2m2m         V4L2 mem2mem H.264 encoder wrapper (codec h264)
 V..... h264_vaapi           H.264/AVC (VAAPI) (codec h264)

Regards,
Lennard

What do you define as scalable? My example shows how the flow is scaled up for two sources. It is the same script, taking parameters that can be camera-specific (like resolution and fps). It is just one single "generic" Python script that can be started many times; the instances then run in parallel in separate processes

If you need more cameras, you can simply copy & paste & edit to extend according to your needs. To make things more "intelligent", you can of course spend more time to write a more "clever" flow and script. But is it worth the effort?

What you always need to do anyway is let the cameras stream images, collect them in the proper order for each camera, and then run the script accordingly for each movie file you assemble

Hi @krambriw,

Thanks for your reply! I know that it's just one single Python script that is started multiple times, but that's the thing I'm a bit concerned about: scripts doing video conversion usually have a pretty high CPU load, right? And now there'll be 200 of them. So I'm a bit worried this will overload the system, more so because Python isn't as efficient a language as Node.js.

But if you think that won't be too much of a problem, I'll indeed make the flows a bit more intelligent, so that one or two flows can read all cameras; that's a bit cleaner than copying everything 200 times. Probably by passing a device id or something; I'll figure that out.

Hopefully I clarified my thoughts a bit!

Kind regards,
Lennard

I'm not so convinced about that. Python supports multi-processing and multi-threading, so you can run several "things" in parallel. Python also uses many libraries written in C and C++ to get good performance on certain tasks

If you only want to use one instance of the running script, you have to "feed it" the frames in the proper order, serially, for each camera. So somewhere you need to buffer the images before they are processed into video clips
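To make the buffering idea concrete, here is a minimal Python sketch (not krambriw's actual code) of keeping one FIFO buffer per camera so frames stay in arrival order; the `camera_id` naming is purely illustrative:

```python
# Minimal sketch: one FIFO buffer per camera so frames keep arrival order.
# The camera_id naming is illustrative only.
from collections import defaultdict, deque

buffers = defaultdict(deque)

def on_frame(camera_id, jpeg_bytes):
    """Called from the MQTT message callback; just enqueue the frame."""
    buffers[camera_id].append(jpeg_bytes)

def drain(camera_id):
    """Hand over all buffered frames for one camera, oldest first."""
    frames = list(buffers[camera_id])
    buffers[camera_id].clear()
    return frames

on_frame("nest42", b"frame1")
on_frame("nest42", b"frame2")
assert drain("nest42") == [b"frame1", b"frame2"]
```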

Thanks for your quick reply and your reassuring words :wink:.
I'll give it a go and keep you guys posted!

Just an additional comment. The Python script I published above is really just an example. When you start to buffer frames from multiple cameras, a re-design is needed. You have to put the actual avi creation function in a separate thread due to the load it causes. I originally had it in the same thread as the mqtt client, and that thread was then blocked until the avi was finished. During that time, the script was not able to receive any additional mqtt messages

So now that I have separated them, the avi creation runs in a separate thread, and new images can arrive via mqtt and be put into queues for upcoming avi creations. How you would do this in single-threaded javascript is, I guess, a nice challenge
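For reference, a minimal sketch of that separation in Python, with a stubbed-out `make_clip` standing in for the real avi creation (the names are illustrative, not from the actual script):

```python
# Sketch: the MQTT side only enqueues work; a worker thread does the slow
# encoding, so message reception is never blocked. make_clip is a stub.
import queue
import threading

jobs = queue.Queue()
results = []

def make_clip(frames):
    results.append(len(frames))   # placeholder for the real avi creation

def encode_worker():
    while True:
        frames = jobs.get()
        if frames is None:        # sentinel to shut the worker down
            break
        make_clip(frames)
        jobs.task_done()

threading.Thread(target=encode_worker, daemon=True).start()

# An MQTT callback would just do jobs.put(...) and return immediately:
jobs.put([b"f1", b"f2"])
jobs.join()                       # wait for the worker (demo only)
assert results == [2]
```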

I added the avi creation feature to my existing Python AI script (using YOLO for people detection) and the result is really fun to watch. While my home alarm system is armed, my video cameras capture images at fps=10 when motion is detected and send them to the YOLO analyser. All images are put into a Python queue. When the alarm system is unarmed, the avi creation starts. Playing the resulting "video" shows a mix of images from my cameras, with and without people-detection notifications, in sequential order "as they happened". This morning a video was created without any problems from a total of 2505 stored images (1024x768). Playing it gives a nice summary of what happened around our house during the night

Next step: I wanted a more convenient way of playing the recordings directly from the browser, in my case Chrome, instead of opening them in VLC. So I decided to convert the .avi files to .mp4 & thumbnails and to use the very nice feature "Single File PHP Gallery 4.11.0" by Kenny Svalgaard

In principle, the stream of images ending up in .mp4 recordings looks like this:

cameras > images > analyze & buffering in Jetson Nano > .avi stored on my server > .mp4 created

I guess everything could be done on one single computer, but in my case, being a bit lazy, I distributed tasks to multiple computers where I already had the proper software installed, letting them communicate the actions via MQTT. The .mp4 files are finally converted on a RPi4 in order to utilize the GPU

The result

The folder of the video files (just a few for the time being)

The Python script, avi2mp4.py, on the RPi4 that converts the .avi files (in my case I let the script delete the .avi once the .mp4 was created). All my .avi, .mp4 and .mp4.jpg files are stored in a mounted network SSD folder, so you will have to modify workdir according to your specific needs and setup. Please also note that to be able to use the GPU on the RPi, ffmpeg has to be built with h264_omx support (the script will also work fine if you replace h264_omx with libx264 in the command, but then the CPU will be used)

# example: python3 avi2mp4.py

import os
import subprocess

workdir = "/home/pi/DRIVE/Avis"


def makeMP4():
    lst = os.listdir(workdir)
    for f in lst:
        path = os.path.join(workdir, f)
        path2 = os.path.splitext(path)[0]

        # Only process regular .avi/.mp4 files (the parentheses matter here;
        # without them "and" binds tighter than "or")
        if os.path.isfile(path) and ('.avi' in f or '.mp4' in f):
            # Create the .mp4 if it does not exist yet
            if f.split('.')[0] + '.mp4' not in lst:
                print(f)
                command = 'ffmpeg -hide_banner -loglevel error -i ' + path + ' -c:v h264_omx -vb 20M -r 10 -c:a copy ' + path2 + '.mp4'
                process = subprocess.Popen(command, shell=True)
                process.wait()
                print(process.returncode)
                # Uncomment to delete the .avi after conversion:
                # os.remove(path)

            # Create a thumbnail (first frame) if it does not exist yet
            if f.split('.')[0] + '.mp4.jpg' not in lst:
                command = 'ffmpeg -hide_banner -loglevel error -i ' + path2 + '.mp4 -f image2 -vframes 1 ' + path2 + '.mp4.jpg'
                process = subprocess.Popen(command, shell=True)
                process.wait()
                print(process.returncode)


makeMP4()

Hi @krambriw,

That's really cool! It seems a bit complex though; wouldn't it be possible to convert the jpegs to mp4 immediately, without converting to avi first? I got the script working btw, and I'm well on my way to adapting it to my flows. The more I play around with it, the more I'm loving it! :raised_hands:

One more thing I was wondering about is why you don't run the Python script directly. The node-red flow now uses xterm: export DISPLAY=:0 && uxterm -e python3 (...); what's the advantage of that?

I think I did it like that because the first .avi file was created with OpenCV (Python cv2), and I read somewhere that it is a problem to use OpenCV to make an .mp4 file

But I do not know, have not tried

Maybe it works ok just to change the line to

    result = cv2.VideoWriter("clip.mp4", fourcc, 1.0, (704, 576))

Sure, that is just to make it run on the desktop so you can study the progress. Otherwise, if you know everything is working and there is no need to watch, you are right, you can run it directly

Is it because there are two scripts involved? If so, you can merge them together; it's easy. So you could have a single script that does it all, but as I wrote, I did the final step (.avi -> .mp4) on a separate computer to utilize its GPU; that was the only reason I made two scripts

Unfortunately, using OpenCV, the combination of codec and file extension seems tricky. You can read about it in the article below. If you can figure out how to eliminate cv2 and let ffmpeg also do the initial step, i.e. building the video from the image frames stored in the buffer, then we would not need to save to .avi first and convert to .mp4. Creating the thumbnails will be needed regardless, if you'd like to have them
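As one possible direction for that, a hedged sketch: ffmpeg's `image2pipe` demuxer can read a stream of jpegs from stdin, so the buffered frames could be written straight to an ffmpeg process with no .avi intermediate. This assumes ffmpeg is on the PATH; the function names, fps, and output path are illustrative:

```python
# Sketch: pipe buffered jpegs straight into ffmpeg's stdin with the
# image2pipe demuxer, producing .mp4 without an .avi step.
# Assumes ffmpeg is installed; names, fps and paths are illustrative.
import subprocess

def ffmpeg_pipe_cmd(out_path, fps=10):
    return [
        "ffmpeg", "-hide_banner", "-loglevel", "error",
        "-f", "image2pipe",        # read a stream of images from stdin
        "-c:v", "mjpeg",           # the incoming frames are jpegs
        "-framerate", str(fps),
        "-i", "-",                 # "-" means stdin
        "-c:v", "libx264",
        "-movflags", "+faststart",
        out_path,
    ]

def frames_to_mp4(frames, out_path, fps=10):
    proc = subprocess.Popen(ffmpeg_pipe_cmd(out_path, fps),
                            stdin=subprocess.PIPE)
    for jpeg in frames:            # each item: one jpeg as bytes
        proc.stdin.write(jpeg)
    proc.stdin.close()
    return proc.wait()             # 0 on success
```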

Actually, it seems that if you also change this line to the following, it works!!

                fourcc = cv2.VideoWriter_fourcc(*'mp4v')

I do get .mp4 files directly. So the only thing needed is to add the routine that creates thumbnails

EDIT: But it doesn't play in my browser :frowning:, only in VLC. The .mp4 files I generated with ffmpeg play well both in VLC and the browser; don't know why this is so

EDIT: After changing to this:

fourcc = cv2.VideoWriter_fourcc(*'avc1')

the resulting .mp4 also plays fine in the browser (Chrome)

EDIT: If you want a thumbnail, you can also use cv2.imwrite() to capture the first image in the buffer. Like

cv2.imwrite('name.mp4.jpg', img)


I was working on a flow that successfully receives jpegs and outputs an mp4 video in realtime, but then realized you wrote:

My current flow would not be very scalable because it would need 1 flow setup per nest. This could ultimately lead to the node-red editor taking a long time to load, since it would have far too much extra data to consume to render all of the nodes on screen. The most scalable option seems to be a batch mode such as you are using: save all the jpegs, then later scan the directories where they are stored and run ffmpeg to create mp4 videos from them.

ffmpeg docs about image2 demuxer

That could be achieved using ffmpeg with the following:

[
    "-hide_banner",
    "-nostats",
    "-loglevel",
    "+level+error",
    "-framerate",
    15,
    "-f",
    "image2",
    "-c:v",
    "mjpeg",
    "-pattern_type",
    "glob",
    "-i",
    "/home/cctv/front_corner_sub/*.jpeg",
    "-an",
    "-c:v",
    "libx264",
    "-f",
    "mp4",
    "-movflags",
    "+faststart",
    "-g",
    "15",
    "-crf",
    "20",
    "-framerate",
    "15",
    "-vf",
    "fps=fps=15",
    "-frames:v",
    150,
    "/home/cctv/front_corner_sub/out.mp4"
]

This assumes that all of the jpegs are in the same directory and end with .jpeg, and it puts out.mp4 into the same directory. I have tried this and it works, but the results don't seem that great when I play it back; that might be expected, though, I'm not sure. Maybe you will have satisfactory results.

The quality can be adjusted with the crf value. 0 (best) to 51 (worst).

The frames:v 150 might not be strictly necessary; it just limits the output, making ffmpeg exit automatically once 150 frames have been written.

The vf fps 15 and framerate 15 are not both necessary, but you will need at least one of these on the output. Even though they seem like they would do the same thing, the results for me seemed different. Maybe you could try each separately to see which one generates a smoother video.
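If you want to experiment with those variants outside node-red, the same argument list can be run from a small Python wrapper; note that every element must be a string when subprocess is used without a shell (the bare 15/150 in the JSON above presumably rely on the node coercing them). Paths are the examples from the post and will need adjusting:

```python
# The same argument list as above, runnable outside node-red for testing.
# Every element must be a string when subprocess runs without a shell;
# paths are the examples from the post and will need adjusting.
import subprocess

args = [
    "-hide_banner", "-nostats", "-loglevel", "+level+error",
    "-framerate", "15",
    "-f", "image2", "-c:v", "mjpeg",
    "-pattern_type", "glob",
    "-i", "/home/cctv/front_corner_sub/*.jpeg",
    "-an", "-c:v", "libx264",
    "-f", "mp4", "-movflags", "+faststart",
    "-g", "15", "-crf", "20",
    "-framerate", "15",
    "-vf", "fps=fps=15",
    "-frames:v", "150",
    "/home/cctv/front_corner_sub/out.mp4",
]

def run_ffmpeg(arg_list):
    # str() guards against stray numbers slipping into the list.
    return subprocess.run(["ffmpeg"] + [str(a) for a in arg_list]).returncode
```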

Perhaps you saved all of the jpegs in sequential order, e.g. image0000.jpeg, image0001.jpeg, and so on.
As an alternative to the glob pattern, you could also use sequence and include a start number if you wanted to start later than image0000.jpeg:

[
    "-hide_banner",
    "-nostats",
    "-loglevel",
    "+level+error",
    "-framerate",
    15,
    "-f",
    "image2",
    "-c:v",
    "mjpeg",
    "-pattern_type",
    "sequence",
    "-start_number",
    123,
    "-i",
    "/home/cctv/front_corner_sub/image%04d.jpeg",
    "-an",
    "-c:v",
    "libx264",
    "-f",
    "mp4",
    "-movflags",
    "+faststart",
    "-g",
    "15",
    "-crf",
    "20",
    "-framerate",
    "15",
    "-vf",
    "fps=fps=15",
    "-frames:v",
    150,
    "/home/cctv/front_corner_sub/out10.mp4"
]