MQTT buffering?

I apologize that this is not strictly a Node-RED issue, but I know there are some MQTT experts here, and I got no response when I asked on the MQTT Google group last week.

It appears MQTT is buffering or queuing QoS 0 messages. I'm sending JPEG images as MQTT QoS 0 messages.

Running on my i7 systems everything works just fine: I'm able to process 50-60 images per second from 15 cameras (75 fps would be every frame from every camera). But when I go to an IoT-class system (Pi4B), I can get ~20 fps from 5 Onvif snapshot cameras, yet this drops to ~9 fps when processing the MQTT buffers, and latencies grow to 30-40 seconds. Clearly the QoS 0 messages are not being dropped but are being queued instead. I thought QoS 0 was "fire and forget", but it seems the messages are being buffered. Is there any way to turn off this buffering? I see various queue settings in the mosquitto config file, but according to the documentation I've found they only apply to QoS > 0 messages.

I can see the buffering: if I walk in front of the cameras, I see myself doing it about 30 seconds later, and it's as "smooth" as I'd expect from the net frame rate (9 fps for 4 cameras is about 2 fps per camera).

I've also noticed this when using the dashboard to display "live" images for camera setup.
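One way I could pin the latency down, rather than eyeballing my own walk-past, is to prepend a send timestamp to each JPEG on the publisher and measure the age on the subscriber, dropping anything stale. A rough sketch, assuming the paho-mqtt 1.x Python client, made-up topic/threshold values, and clocks that are in sync (same machine or NTP):

```python
import struct
import time

import paho.mqtt.client as mqtt  # assumes paho-mqtt 1.x

# --- publisher side: tag each frame with its send time ---------------------
def publish_frame(client, topic, jpeg_bytes):
    stamp = struct.pack("!d", time.time())          # 8-byte big-endian float
    client.publish(topic, stamp + jpeg_bytes, qos=0)

# --- subscriber side: measure the age and drop stale frames ----------------
MAX_AGE = 1.0                                        # seconds; just a guess

def on_message(client, userdata, msg):
    sent = struct.unpack("!d", msg.payload[:8])[0]
    age = time.time() - sent
    if age > MAX_AGE:
        return                                       # stale, discard it
    jpeg = msg.payload[8:]
    print(f"{msg.topic}: {age * 1000:.0f} ms behind")
    # ... hand jpeg to the detection code here ...
```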

Are the publisher, subscriber and server all running on the same machine?
[Edit] Also, where is the browser running?

I'm kind of interested in the same questions as well. This article may be of interest:
https://www.hivemq.com/blog/are-your-mqtt-applications-resilient-enough/

" It’s important to note, that some MQTT libraries already provide some of the client-side counter-measures, so you should check if your library already supports that."

In this case, with everything online, I believe the buffering could be in the MQTT message queuing, in your network, or in your applications. Maybe in all of those "layers".

I have noticed myself that using a high frame rate for an IP camera gives much higher latency than a lower frame rate, and this is without any MQTT involvement.

EDIT: this answer states there is no message queue in an MQTT broker.

And this bit could explain the queuing you're seeing:

(screenshot of the relevant passage from that answer)

QoS 0 fire and forget is only at the application level. The underlying network connection is still TCP, which acks every packet to handle in-order reassembly of the frames, etc. So it may indeed be backed up in the network stack.
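As a throwaway illustration of that point (plain sockets, not MQTT-specific, port number invented): TCP never discards application data, so when the reader stalls, the sender's bytes pile up in the kernel buffers until the send call itself backs up, which is exactly what a slow subscriber does to a stream of QoS 0 publishes.

```python
import socket
import threading
import time

PORT = 51883              # arbitrary local port for the demo

def slow_receiver():
    srv = socket.socket()
    srv.bind(("127.0.0.1", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    conn.recv(1)          # read one byte, then stall like an overloaded subscriber
    time.sleep(10)
    conn.close()
    srv.close()

threading.Thread(target=slow_receiver, daemon=True).start()
time.sleep(0.2)           # give the listener time to start

s = socket.socket()
s.connect(("127.0.0.1", PORT))
s.settimeout(2)           # so the blocked send eventually gives up for the demo
chunk = b"x" * 65536
sent = 0
try:
    while True:
        s.sendall(chunk)  # fills the receive + send kernel buffers, then blocks
        sent += len(chunk)
except socket.timeout:
    print(f"send blocked after ~{sent} bytes were buffered in the TCP stack")
s.close()
```

Nothing is ever dropped; the backlog just moves into the sender's side of the connection.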

Or somewhere else in Node-RED or the browser.

Broker message queues require persistent sessions and QoS 1. By default, Node-RED creates a clean session on first connect, if that makes a difference.
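For what it's worth, the distinction looks roughly like this in the Python paho-mqtt 1.x client (client id, broker host and topic are invented): the broker only holds messages for a subscriber that connected with a persistent session and subscribed at QoS 1; a clean-session QoS 0 subscriber has nothing queued for it.

```python
import paho.mqtt.client as mqtt  # assumes paho-mqtt 1.x

# Persistent session: clean_session=False plus a QoS 1 subscription means the
# broker will queue messages that arrive while this client is disconnected.
client = mqtt.Client(client_id="cam-consumer", clean_session=False)
client.connect("localhost", 1883)
client.subscribe("cameras/+/image", qos=1)   # with qos=0 the broker queues nothing
client.loop_forever()
```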

Just to be clear, Node-RED uses the MQTT.js client and not the Paho JavaScript client

Thanks. I think network TCP buffering is the most likely cause of what I'm observing. Now I need a plan B.

My design is "best effort": process images as fast as possible and discard the rest. MQTT seemed perfect and worked great on strong hardware, but it falls over where I need it, on Pi, Odroid, etc. class machines that can't decode multiple RTSP streams.

I've used a headless i5 "miniPC" to make an RTSP-to-MQTT converter. It's close to pushing out the full 75 fps of 15 cameras at 5 fps each. I plan to drop my security DVR to 3 fps for longer retention (currently only about 15 days), but I've kept it as it is for "stress testing".

I suspect I'll have to make it work more like Onvif snapshots, where it sends the newest JPEG image in response to an HTTP request. The Pi4 can do ~20 fps for 4 or 5 Onvif cameras. It's just that I've virtually no experience with HTTP/HTML programming, so I was looking to avoid this learning curve; MQTT seemed a really slick solution.
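What I have in mind is something like the sketch below: a tiny snapshot endpoint that hands back whatever the newest JPEG is when asked, so a slow client simply gets fewer frames instead of a backlog. Only the Python standard library; the path, port and update_frame() hook are made up for illustration.

```python
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

latest_jpeg = b""                      # updated by the capture side
lock = threading.Lock()

def update_frame(jpeg_bytes):
    """Called by whatever grabs frames (RTSP decoder, camera poller, ...)."""
    global latest_jpeg
    with lock:
        latest_jpeg = jpeg_bytes

class SnapshotHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/snapshot.jpg":
            self.send_error(404)
            return
        with lock:
            frame = latest_jpeg        # always the newest frame, nothing queued
        self.send_response(200)
        self.send_header("Content-Type", "image/jpeg")
        self.send_header("Content-Length", str(len(frame)))
        self.end_headers()
        self.wfile.write(frame)

if __name__ == "__main__":
    ThreadingHTTPServer(("0.0.0.0", 8080), SnapshotHandler).serve_forever()
```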

All on the same machine works fine on the i7-6700K, as does splitting things up across different machines, until the subscriber is Pi/Odroid class. The browser is not really relevant here as it's only for debug/monitoring, but I generally run it on the machine running the broker to confine its MQTT subscriptions to localhost.

If this matters, my design is fundamentally flawed, but thanks to this discussion I suspect TCP buffering is the root cause.

I've tried another middleman process running on the Pi/Odroid in an attempt to swallow the images; again it works fine on the i7 and fails on the Pi/Odroid class machines. Basically it subscribes to the remote broker and publishes (echoes) to localhost.
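Roughly the shape of that echo process, as a sketch with the Python paho-mqtt 1.x client (the remote host name and topic are placeholders):

```python
import paho.mqtt.client as mqtt  # assumes paho-mqtt 1.x

# local broker that the Pi-side consumers subscribe to
local = mqtt.Client()
local.connect("localhost", 1883)
local.loop_start()

def on_message(client, userdata, msg):
    # echo every image straight onto the local broker, still QoS 0
    local.publish(msg.topic, msg.payload, qos=0)

# remote broker fed by the RTSP-to-MQTT converter box
remote = mqtt.Client()
remote.on_message = on_message
remote.connect("minipc.local", 1883)
remote.subscribe("cameras/+/image", qos=0)
remote.loop_forever()
```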

So far, basically anything that can process the MQTT image streams adequately has also been capable of simply decoding the RTSP streams directly.

It would be if you were running it on the Pi, as it would use additional resources, and the browser itself might be a significant load on the machine.

So the publisher and the MQTT broker are still running on the powerful machine, which rules out a problem getting the images to the broker. So the queuing is happening in the Pi. I suspect that the issue may not be in the network stack, and that the MQTT In node in the Pi is getting all the messages and passing them on. The issue may be that the next nodes along the chain cannot process them fast enough, so the queue builds up in Node-RED.

You could put a Delay node in rate-limit mode, dropping intermediate messages, after the MQTT In node and experiment to find the best rate. That would not be automatic, though. Possibly you could use a variable rate (I think there was a thread about this a little while ago) and monitor the CPU load, adjusting the rate to keep the processor busy but not clogged up.