By resizing the images with ffmpeg's -s WxH
option, it looks like 65K is the maximum chunk size. Images below this seem to arrive correctly, while larger images seem truncated.
The beach images are like 14K and they work wonderfully for me too. I'm going to compare some jpeg image frames with the frames I get reading the rtsp stream with OpenCV. Unfortunately I won't get very far until Tuesday, as this is the Memorial Day holiday weekend in the US.
It's nice to expose these limitations even if nothing can be done to work around them, though it means I may have to give up on this approach for an rtsp-stream-to-MQTT buffer stream converter. The Node-RED solution would be very elegant if it could be made to work with more than "smallish" output images.
Resizing my HD security system's mp4 image frames to under 65K loses a lot of quality -- it makes little difference to the AI person detection, but matters a lot for the "human in the loop" friend-or-foe determination.
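One thought, for what it's worth: instead of a fixed resize, each frame could be re-encoded at decreasing JPEG quality until it fits under the observed limit, so quality is only sacrificed when a frame actually needs it. This is just a sketch under my assumptions: the 64 000-byte limit and the quality ladder are arbitrary choices, and `encode` is a hypothetical stand-in for whatever encoder you use.

```python
def fit_under_limit(encode, limit=64_000, qualities=(90, 75, 60, 45, 30)):
    """Return (payload, quality) for the first quality whose encoding fits.

    encode(q) must return the frame encoded as bytes at JPEG quality q.
    Returns (None, None) if even the lowest quality is still too large.
    """
    for q in qualities:
        payload = encode(q)
        if len(payload) <= limit:
            return payload, q
    return None, None


# Hypothetical stand-in encoder whose output shrinks with quality; with
# OpenCV you would use something like:
#   lambda q: cv2.imencode('.jpg', frame,
#                          [cv2.IMWRITE_JPEG_QUALITY, q])[1].tobytes()
fake_encode = lambda q: b'\xff' * (q * 1000)
payload, q = fit_under_limit(fake_encode)
print(q, len(payload))  # -> 60 60000
```

The resulting `payload` bytes would then be what gets published to MQTT in place of the full-size frame.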
Thanks for the information here. I have got Motion up and running and I can browse to my camera via the Pi's IP address and the port I have supplied in the config file. Can you tell me how you got the image on your Node-RED dashboard please?
I was just wondering, can you help me get a notification on movement?
I have an IP camera in my dash and I want to get a notification in my dash if it identifies any movement (not at the DNN analyzer level yet). I have a high-sensitivity setup in the Motion app.
Hi,
In Motion there are several possibilities to achieve this. One that I use myself is the configuration parameter "on_picture_save". Another is "on_motion_detected" and this is the one I use in this simple example
In Motion configuration, select the camera and list the configuration parameters. You will/should find this depending on your Motion version
I have entered the full path to a simple Python script that I have in my RPi's /home/pi directory
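In my motion.conf the relevant line looks something like this (the path is just my example; also make sure the script is executable, e.g. with chmod +x):

```
on_motion_detected /home/pi/on_motion_detected.py
```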
The Python script is just an example and can be made much more advanced. When Motion is set to start detection and senses motion above the configured "threshold", it executes the script, which simply connects to your MQTT broker and publishes the event. The event is captured by NR, where all the other processing can be done.
With this example you will most likely get many events when Motion triggers, and it will require you to configure filters and a mask to avoid too many. You can also filter in NR, for instance using the RBE node. As stated, this example is just a starter to make it happen...
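As a simple alternative (or complement) to filtering in NR, the script itself could rate-limit what it publishes. A minimal sketch; the 10-second interval is an arbitrary choice of mine, not something that comes from Motion:

```python
import time


class RateLimit:
    """Fire at most once per min_interval seconds; drop events in between."""

    def __init__(self, min_interval=10.0):
        self.min_interval = min_interval
        self._last = float('-inf')  # so the very first event always passes

    def should_fire(self, now=None):
        now = time.monotonic() if now is None else now
        if now - self._last >= self.min_interval:
            self._last = now
            return True
        return False


rl = RateLimit(min_interval=10.0)
print(rl.should_fire(now=0.0))   # -> True  (first event always passes)
print(rl.should_fire(now=3.0))   # -> False (too soon after the last one)
print(rl.should_fire(now=12.0))  # -> True  (10 s elapsed since last fired)
```

In the script below you would simply wrap the publish call in `if rl.should_fire():`.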
Here is the Python script. You will have to install mosquitto as broker and paho-mqtt for Python if you do not have them already. You also have to change the IP & port to fit your local setup.
#!/usr/bin/python
# USAGE
# python on_motion_detected.py

import time

import paho.mqtt.client as mqtt


def send_mqtt_message(msg):
    client = mqtt.Client()
    client.connect(mqtt_host, port)
    client.loop_start()                       # handle network traffic in the background
    info = client.publish(topic_out, msg, 0)  # QoS 0
    info.wait_for_publish()                   # let the message leave before disconnecting
    client.loop_stop()
    client.disconnect()

# Main starts here -----------------------------------------------------------

# MQTT settings
mqtt_host = '192.168.0.241'
port = 1883
topic_out = 'motion'

send_mqtt_message('Motion detected!!!')
I already tried your flow.
I was wondering what setup you have in Motion.
I have basic authentication on my stream URL, and localhost is on.
Even though I tried changing it, it still didn't work. I get a "broken file" icon. Any thoughts?
stream_localhost needs to be off to allow viewing cameras from other computers. Have you tried to view a camera directly in a browser from a computer on your network? Like http://192.168.0.236:8081, but with an IP that fits your settings?
Now I have two other problems:
1- When I open the dash on my iPhone (using Safari) I can't see the livestream;
2- My setup has almost a one-minute delay (with motion detection off!).
Any thoughts?
1- I have a VPN set up on my Rasp. I can view the dash, but can't load the stream image... Does your flow display images on iOS?
2- I'm gonna try it ASAP
Yes, works fine on my iOS devices, both with Safari & Chrome. Working fine with wifi directly to my home network or via 4G and VPN (I use OpenVPN on my iPhone)
Eventually, if you have an IP camera, you may have to reduce the frame rate in the camera setup. With a high frame rate it will take a long time until all frames have been processed, since they are all placed in a FIFO queue.
1- Awkward... I'm using the same node as you are... I read in a post that iOS Safari couldn't load HTML images bigger than a certain size... What's your camera size config in Motion?
Gonna try to solve this...
Just 640x480. Also tried with 1024x768 and iPhone via VPN, still works fine in Safari & Chrome. On my iPhone, iPad, MacBook, no problems
When you configure camera resolution:
What you configure in the camera is what the camera sends
What you configure in the dashboard template decides the scaling
So for instance, you configure the camera to 1024x768 but in the dashboard template you have width: 405px;
Then the 1024x768 frame will be scaled to fit into the element with a width of 405 pixels
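To put numbers on that scaling (assuming the browser preserves the aspect ratio, which an img element does by default when only the width is constrained):

```python
def scaled_height(src_w, src_h, target_w):
    # The browser scales height by the same factor as width,
    # preserving the aspect ratio of the source frame.
    return round(src_h * target_w / src_w)


print(scaled_height(1024, 768, 405))  # -> 304 (displayed height in px)
```

So the 1024x768 frame ends up rendered at roughly 405x304 pixels on the dashboard.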