How to display CCTV camera in dashboard (RTSP)

Try reading from an mp4 file rather than the stream, to see whether it is a problem with the stream or a more fundamental issue.

Hey Colin (@Colin),

Thanks for the tip. Again a 'bit' closer to the truth ...

I have downloaded two Big Buck Bunny videos from their download site:


  • The .mov file gives me the same error as with the RTSP stream, although it is h264 (as stated in the file name). So I assume the hardware decoder cannot handle my RTSP stream for a similar reason, but I still don't know why this h264 doesn't work ...

  • The .mp4 file works fine. I get a nice speed of 115 frames per second on average in two separate tests: one test with hardware acceleration (i.e. parameter '-vcodec h264_mmal' added) and one without hardware acceleration (without that parameter). But I don't really see a difference in overall CPU usage between the two tests:

    Perhaps Ffmpeg always uses hardware acceleration by default. Or perhaps it never uses the hardware decoder, although I'm not sure whether my Raspberry can decode 115 fps (resolution 320x180) in software. I will have to do the same performance test on another Pi (for which I haven't paid for hardware acceleration) to find out.
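    For reference, the two variants I compared look roughly like 'ffmpeg -vcodec h264_mmal -i BigBuckBunny_320x180.mp4 -f image2pipe pipe:1' (hardware decoder forced) versus the same command without '-vcodec h264_mmal' (default software decoder); the exact command in my exec node may differ slightly.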

As far as I know you do not need to buy the MPEG2 license as long as you do not use MPEG2 videos. MPEG4 - which covers h264 - should work out of the box.
When I was playing around with videos on a Raspberry Pi I came to the conclusion that VLC does not use hardware acceleration but omxplayer does. It uses the OpenMAX library.
Your ffmpeg compile option "--enable-omx" may also point to OpenMAX.
But here my knowledge ends :frowning:


Hi @Jaxn, I have tested the same flow on a Raspberry Pi where I don't have a license key, and the performance is equal. So let's summarize:

  • Like you say, the license key is only required for MPEG2. I should have seen that when I ordered it from their site...

  • For playing h264 with hardware acceleration, ffmpeg needs to be built with the appropriate parameters.

  • All tutorials (like e.g. this one) say you need to specify -vcodec h264_mmal to instruct ffmpeg to use hardware acceleration. That is also explained in the official docs for external decoders. However, I don't see any difference in performance when I specify that parameter. And without that parameter I can also play the .mov file without problems ...

  • And then there is the "-hwaccel" parameter for internal decoders, but I can't get that one working for some reason:

    ffmpeg -hwaccel -i "/home/pi/.node-red/BigBuckBunny_320x180.mp4" -f image2pipe pipe:1
    

    Results in "Option hwaccel (use HW accelerated decoding) cannot be applied to output url /home/pi/.node-red/BigBuckBunny_320x180.mp4 -- you are trying to apply an input option to an output file" ...
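    Presumably the problem is that -hwaccel expects a value: as written it swallows the following -i, so the file name gets parsed as an output url. Something like 'ffmpeg -hwaccel auto -i "/home/pi/.node-red/BigBuckBunny_320x180.mp4" -f image2pipe pipe:1' should at least parse, assuming my build supports -hwaccel auto. You can also check what a given build offers via 'ffmpeg -hwaccels' and 'ffmpeg -decoders | grep mmal'.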

So I still don't know whether it is playing via hardware or software.

P.S. For those using VLC: via Tools --> Messages you can set the verbosity level to 'debug', and then you can see whether avcodec (i.e. ffmpeg) underneath uses hardware acceleration. Example on my laptop:
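(From a terminal, 'vlc -vv <url>' gives the same debug-level output.)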


Replying to an old post in this thread, but I think it's relevant.

What rtsp sources are you capturing from?

With opencv rtsp stream capture (cv 3.3.0 and 3.4.2) I'm finding issues with many rtsp sources, with no difference between the cv versions (on three different Linux systems).

errors/warnings like:

Invalid UE golomb code
[h264 @ 0x30f6460] mb_type 43 in P slice too large at 37 68
[h264 @ 0x30f6460] error while decoding MB 37 68

Running your command on one of my Onvif cameras on my i7 Ubuntu 16.04 desktop:

avconv -i rtsp://admin:admin@192.168.2.221/media/video1 -frames 1 -qscale 1 -f image2 /home/wally/image.jpg

ffmpeg version 2.8.15-0ubuntu0.16.04.1 Copyright (c) 2000-2018 the FFmpeg developers
built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.10) 20160609
configuration: --prefix=/usr --extra-version=0ubuntu0.16.04.1 --build-suffix=-ffmpeg --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --cc=cc --cxx=g++ --enable-gpl --enable-shared --disable-stripping --disable-decoder=libopenjpeg --disable-decoder=libschroedinger --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librtmp --enable-libschroedinger --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzvbi --enable-openal --enable-opengl --enable-x11grab --enable-libdc1394 --enable-libiec61883 --enable-libzmq --enable-frei0r --enable-libx264 --enable-libopencv
WARNING: library configuration mismatch
avcodec configuration: --prefix=/usr --extra-version=0ubuntu0.16.04.1 --build-suffix=-ffmpeg --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --cc=cc --cxx=g++ --enable-gpl --enable-shared --disable-stripping --disable-decoder=libopenjpeg --disable-decoder=libschroedinger --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librtmp --enable-libschroedinger --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzvbi --enable-openal --enable-opengl --enable-x11grab --enable-libdc1394 --enable-libiec61883 --enable-libzmq --enable-frei0r --enable-libx264 --enable-libopencv --enable-version3 --disable-doc --disable-programs --disable-avdevice --disable-avfilter --disable-avformat --disable-avresample --disable-postproc --disable-swscale --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libvo_aacenc --enable-libvo_amrwbenc
libavutil 54. 31.100 / 54. 31.100
libavcodec 56. 60.100 / 56. 60.100
libavformat 56. 40.101 / 56. 40.101
libavdevice 56. 4.100 / 56. 4.100
libavfilter 5. 40.101 / 5. 40.101
libavresample 2. 1. 0 / 2. 1. 0
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 2.101 / 1. 2.101
libpostproc 53. 3.100 / 53. 3.100
Invalid UE golomb code
Last message repeated 1 times
Input #0, rtsp, from 'rtsp://admin:admin@192.168.2.221/media/video1':
Metadata:
title : VCP IPC Realtime stream
Duration: N/A, start: 0.000000, bitrate: N/A
Stream #0:0: Video: h264 (High), yuv420p, 1280x720, 30 fps, 30 tbr, 90k tbn, 60 tbc
Stream #0:1: Data: none
Please use -q:a or -q:v, -qscale is ambiguous
[swscaler @ 0x17c7980] deprecated pixel format used, make sure you did set range correctly
Output #0, image2, to '/home/wally/image.jpg':
Metadata:
title : VCP IPC Realtime stream
encoder : Lavf56.40.101
Stream #0:0: Video: mjpeg, yuvj420p(pc), 1280x720, q=2-31, 200 kb/s, 30 fps, 30 tbn, 30 tbc
Metadata:
encoder : Lavc56.60.100 mjpeg
Stream mapping:
Stream #0:0 -> #0:0 (h264 (native) -> mjpeg (native))
Press [q] to stop, [?] for help
Invalid UE golomb code
frame= 1 fps=0.0 q=1.0 Lsize=N/A time=00:00:00.03 bitrate=N/A dup=1 drop=1
video:237kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown

But it did capture an image.

To capture "snapshots" for my Python AI, I'm finding that directly grabbing snapshots from the Onvif "snapshot http URL" works best for Onvif cameras.

For my Lorex DVR I'm finding pulling frames with openCV from rtsp streams works great, although the latency is several seconds. A net DVR I tried (Geovision SNVR0811) grabbing from its rtsp URL (rtsp://admin:admin@192.168.2.222:554/ch1) worked great until it detected motion, at which point it locked up my code with one of those h264 errors. Using the camera directly (rtsp://admin:admin@192.168.2.221/media/video1) doesn't lock up my code, but the latency is horrible and some images have mosaic break-ups, especially with a lot of motion in the scene.

Right now I have three "front ends" for my AI: ftp snapshots from a camera or DVR, image grabs from rtsp streams with openCV, and http access from the Onvif snapshot URL (http://192.168.2.221:85/images/snapshot.jpg).
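The http snapshot grab is about as simple as it gets; a minimal sketch using the Python requests module (URL as above; the 5 second timeout is my choice):

    import requests

    # Onvif snapshot URL from above; the port and path vary per camera
    SNAP_URL = 'http://192.168.2.221:85/images/snapshot.jpg'

    resp = requests.get(SNAP_URL, timeout=5)
    if resp.status_code == 200:
        jpg = resp.content  # a complete JPEG image, kept in memory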

Which method works "best" seems to depend on the camera source, and trying seems the only way to know :frowning:

I'm going through this thread again as I'm getting serious about getting some kind of web interface for my AI. The dashboard is working fine for mode setting etc., and for monitoring the cameras for setup (the priority) or viewing (mostly useful for demos).

Playing your public rtsp stream in my opencv code I get ~25 fps, but is there a link with a more reasonable resolution like D1?

When I imported your flow into node-red it worked fine after I installed the image-output node. Is there some way to get the image-output node to display on a different page or is it confined to displaying in the node-red editor?

Pasting in the URL for one of my Lorex DVR channels it showed the same "image caching" issue I mentioned in this thread: dashboard caching

It seems to choke on the HD image, although I can see flashes of smooth movement as a car happened to be driving by. This may be the UDP buffer size issue that has been mentioned earlier. I need to investigate this image-output node more thoroughly.

Changing the rtsp URL to use a substream from my Lorex DVR, it plays fine but is still "blocky" compared to when viewed in my openCV test code. I think it's 352x240. The Lorex streams this to my phone fine with their "app", but the resolution is not good enough to focus the cameras.

Nope, it is purely designed to display in the Node-RED flow editor. It just adds some svg content to the svg of the flow editor.

I'm going to create something similar for the dashboard, to have some PTZ controls on top of the camera view. I already started a repository last weekend, but it contains nothing useful yet!!!! Just need some free time to spend on it :woozy_face:

Correct me if I'm mistaken, but it seems to me that this command writes all images to a temporary file? If so, I would suggest using the stdout stream (via image2pipe) to keep everything in memory only.
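For example, calling ffmpeg from Python, something like this keeps every frame in memory (a sketch only: handle_frame is a placeholder for whatever consumes the jpegs, and your ffmpeg build needs the mjpeg encoder):

    import subprocess

    url = 'rtsp://admin:admin@192.168.2.221/media/video1'  # example camera URL from earlier

    # decode the stream and write JPEG frames to stdout, so nothing touches the disk
    cmd = ['ffmpeg', '-i', url, '-f', 'image2pipe', '-vcodec', 'mjpeg', 'pipe:1']
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)

    buf = b''
    while True:
        chunk = proc.stdout.read(4096)
        if not chunk:
            break
        buf += chunk
        # JPEG frames start with FF D8 and end with FF D9; split on those markers
        start = buf.find(b'\xff\xd8')
        end = buf.find(b'\xff\xd9', start + 2)
        if start != -1 and end != -1:
            handle_frame(buf[start:end + 2])  # placeholder: e.g. feed the AI or MQTT
            buf = buf[end + 2:]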

If you look at the code of the image-output node, you will see that it uses RED.comms.publish to push the image via a websocket channel to the flow editor. I have very bad experiences with pushing lots of data through websockets. I have never had a look at that UDP buffer size, but if you can find a solution, please be my guest !!!

I try to avoid pushing image streams over websockets, since that freezes my user interface entirely. I have described this here, and in the section after that you can see how I avoid websockets by using my node-red-contrib-multipart-stream-encoder node ...

Is there any reason why you are using OpenCv to stream RTSP? From this explanation, I understand that OpenCv uses FFmpeg underneath to accomplish that. I would expect it to be faster to call FFmpeg directly from Node-RED, but don't hesitate to correct me if I'm wrong!

The reason here is simple. It's done in the Python code that sends the images to the AI, which uses the Movidius NCS Python API, so using opencv-python features is natural. For the ftp front end, node-red implements the server and passes the file as a buffer to the Python code via MQTT. For the http jpeg snapshots I use the Python requests module.

I'm willing to bet there is not a "simpler" way to view multiple rtsp streams than this openCV Python snippet:

import cv2
from imutils.video import FPS  # fps counter used below (imutils assumed)

rtspURL = ['rtsp://admin:admin@192.168.2.221/media/video1']  # your stream URLs go here
Nrtsp = len(rtspURL)
fps = FPS().start()

windowName = list()
for i in range(Nrtsp):
    windowName.append('rtsp_' + str(i))
    # flags remove toolbar and status bar from the window;
    # this will fail for CV built without QT for highgui
    cv2.namedWindow(windowName[i], flags=cv2.WINDOW_GUI_NORMAL + cv2.WINDOW_AUTOSIZE)

Rcap = list()
for i in range(Nrtsp):
    Rcap.append(cv2.VideoCapture(rtspURL[i]))
    # doesn't throw an error or warning in python3, but not sure it is actually honored
    Rcap[i].set(cv2.CAP_PROP_BUFFERSIZE, 2)

while True:
    try:
        for i in range(Nrtsp):
            ret, frame = Rcap[i].read()
            ##cv2.imshow(windowName[i], cv2.resize(frame, (640, 360)))
            cv2.imshow(windowName[i], frame)
            fps.update()  # update the FPS counter
        key = cv2.waitKey(1) & 0xFF
        if key == ord("q"):  # if the q key was pressed, break from the loop
            break
    except KeyboardInterrupt:
        break
    except Exception as e:
        print('EXCEPTION! ' + str(e))
        break

It handles 15 1920x1080 rtsp streams resized to 640x360 for live viewing on my i7 Desktop while I'm doing everything as normal -- about 30 browser tabs open on six virtual desktops, email, Pandora, etc.

As to the avconv command, I just copied it from the much earlier post in this thread and changed the URL to one of my 720p Onvif cameras, to illustrate that there be daemons in rtsp-land. No problems on one system with one (brand of) camera is a long way from being a general solution.

VLC is the closest we have to a general solution -- it's about the only "3rd party" rtsp viewer most of the network DVR vendors will offer any support for, but most use some lame unsigned ActiveX control for their web viewers, limiting you to Internet Explorer. I'd love to find a way to use the "guts" of VLC, but last time I looked they made ffmpeg look like the king of clear documentation!

The speed looks impressive ...

It's even more so when you see it in person, with all the other activity on the system. It's a second-from-the-top-of-the-line i7 computer purchased when I retired in July 2013, maxed out with 64GB RAM, along with the Lorex DVR system.

I naively expected to get "motion images" from the DVR and "AND" them with PIR motion sensor outputs to reduce the false alarm rate. Not even close, so I started looking into openCV face-detection stuff -- Haar cascades, HOG, etc. -- all of which failed badly on security camera images and found false faces in the shadows and bushes.

Then I discovered AI. Darknet YOLO was great but took 18 seconds to process a frame on this i7 (I don't have CUDA installed, which it highly recommends). Eventually this led me to the Movidius NCS and MobileNet-SSD AI, which has worked very well.

@wb666greene: I'm very interested to verify whether my above flow (based on FFmpeg) is a decent solution for RTSP streaming, or why not. It is not really clear to me why you say it is "a long way from being a general solution". Do you think it cannot handle your 15 1920x1080 rtsp streams at a similar speed (on the same hardware)? Because if OpenCv uses the same FFmpeg library under the hood, I would expect it to run at a similar speed. Or are there other specific reasons why you don't like the above flow?

I'm a bit confused here. If you are talking about the flow in the image I quoted (the one using the Big Buck Bunny public link with the ffmpeg command in an exec node), it worked fine, but that is a very small image. It seemed to work with my Lorex rtsp stream, although the HD image choked the image-output node.

My statement about not being a general solution is based on the errors/warnings thrown by the various rtsp stream reading methods.

I've come to the conclusion that this is an interaction between the stream sources (cameras, DVRs etc.) and the stream URL reader. It seems that if you can view the stream for a few minutes in some version of VLC, the straight-from-China camera makers deem that "it works".

I will try two of my Onvif rtsp URLs in this flow; I'll put a debug node on the exec stderr, see what happens, and follow up. Both are HD (720p, 1080p) so I know the image output will "choke". I can try duplicating it and running both streams at once.

Why do you think I don't like the flow? It's nice to have options. I'm here mainly to get a good display from node red in a web page (dashboard or whatever); learning about other input possibilities is welcome for potential future projects, but the openCV Python bindings are pretty compelling for me at the moment.

If node red is running a single-threaded event loop I'd expect problems with multiple streams, but if each exec node spawns a separate thread/process, performance could be similar. Until we find a node-red image viewer with performance similar to openCV's highgui module it might be impossible to verify.
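For what it's worth, my Python code gives each rtsp stream its own capture thread, roughly like the sketch below (the URLs are placeholders and the queue size is arbitrary):

    import threading
    import queue

    import cv2

    def rtsp_reader(url, frame_q):
        # one VideoCapture per thread, so each stream decodes independently,
        # much as each exec node spawns its own ffmpeg process
        cap = cv2.VideoCapture(url)
        while True:
            ret, frame = cap.read()
            if not ret:
                break
            if frame_q.full():  # drop the oldest frame rather than fall behind
                try:
                    frame_q.get_nowait()
                except queue.Empty:
                    pass
            frame_q.put(frame)

    urls = ['rtsp://cam1/stream', 'rtsp://cam2/stream']  # placeholder URLs
    queues = [queue.Queue(maxsize=2) for _ in urls]
    for url, q in zip(urls, queues):
        threading.Thread(target=rtsp_reader, args=(url, q), daemon=True).start()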

I didn't know about your Big Buck Bunny public stream -- it's great for these discussions/experiments, and I had no luck with Google. Do you know of a public URL with higher resolution -- at least D1 (704x480)?

Since the AI input is resized to ~300x300 (depending on the model), I've found D1 or 720p to be about the optimum. The 1080p images miss people walking by in the upper third of the frame (not really a problem for my purposes), but the 1080p images also offer the option to crop the image, which can keep me off ladders adjusting the cameras (which is why PTZ would rock, if those cameras weren't 4X+ more expensive than what I have now; PT is of course acceptable for fixed-lens cameras).
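In openCV that crop is just numpy slicing, e.g. 'crop = frame[y0:y1, x0:x1]' with the window chosen to cover the area of interest.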

Seems I didn't interpret your explanation correctly. Don't forget I'm not a native English speaker ...

That would be very nice, to get a better idea whether it is useful in practice!!!

I don't think we will ever be able to do something similar, i.e. create a browser viewer that is as fast as a C++ gui. But you never know ...

Unfortunately I didn't find anything. On their higher resolutions page, I don't see any rtsp stream ... That would indeed have been nice, to compare different setups (opencv/ffmpeg/motion/...). But my Raspberry Pi model 3 would certainly start melting with your experiments :wink:


I'm now playing a bit with your ffmpeg exec flow with multiple streams. The main problem is that my Lorex DVR sub-streams are CIF (352x240), only a bit bigger than your Big Buck Bunny public rtsp stream. The HD streams "choke" the Image Output node, resulting in displaying mostly garbage.

I took a cell phone video of my i7 Desktop running 9 HD rtsp streams into my Python AI and my node-red "controller" dashboard that displays a single selected channel for testing and/or camera positioning.

Quality is not great, but watching the time-stamped file names update in the node-red UI, and the pool cleaner moving in the 3rd window from the top (adjacent to the browser window), gives a good idea of how it runs. The AI was processing an aggregate of ~28.3 fps using 1 Movidius NCS thread + 1 CPU AI thread + 9 threads to process each rtsp stream, plus the main thread to draw the displays and send the MQTT messages to Node-Red.

Python AI and Node-red Dashboard Controller in action.

I need to get one of those "tripod adapters" for my phone -- the results, while quick and dirty but useful, would be a lot better without the camera movement and resulting focus jitter.

I opened a "ticket" with Lorex tech support asking why the D1 sub-stream settings revert to CIF shortly after applying. They've never bothered to reply to any of my questions before, so I don't expect any solution.

D1 video is perfectly fine for the AI. Actually I expect CIF would work too, since the AI input is 300x300 pixels, but a CIF image is inadequate for viewing in Email to make a decision about the AI notification (UPS guy or criminal at my door) -- that's where the 720p and 1080p really have the advantage.

I'll try to setup 8 of my CIF streams and your BBB stream and post another short video clip.

This morning I set up a fan-less 12V-powered dual core i7 "NUC-like" computer; the AI code on it with 9 HD rtsp threads and the same 1 NCS + 1 CPU thread is giving ~18.4 fps. I've also got an i5 version of it I plan to try soon. The i3 version of it gave ~12.6 fps with 6 HD rtsp streams -- my feeling is fps/Ncameras ~ 2 is about ideal, give or take a few cameras :slight_smile:

Next priority is moving the code to OpenVINO so I can compare NCS and NCS2 sticks.


Hi @nygma2004, thanks for linking me to your video. It was really good. I tried to run the above in the exec node, and I could see the image served up by Node-RED. However, unlike yours, my exec node didn't finish running. Also, when I triggered it again, it failed. When I delete the file manually, it re-creates it with a new image. Any ideas?

This reminded me I never did post a short video clip of the 8 Lorex DVR CIF rtsp streams playing along with the "public" BigBuckBunny rtsp stream.
9 node-red rtsp streams

Sorry it's night, but you can see from the passing headlights that it plays well. I think my CIF rtsp streams from the Lorex DVR are 5 FPS.

Morning @wb666greene,
thanks for the nice example!

I was still thinking you had choking images in the output node, which seemed natural to me since all the data has to travel through that single poor websocket channel ...
But in your youtube video it all 'seems' to run smoothly. Did you do anything special to solve it?

And a second question: can I conclude from your test that rtsp via ffmpeg in Node-RED is 'usable'?

Very nice & smooth looking!
Just a thought: wouldn't it be great to have a dashboard node that could do the same as the "Image output" node does, but as a dashboard component? That's the only missing thing to make it really simple to create a nice dashboard with multiple video streams.

Hey Walter (@krambriw), I'm developing something at the moment. But I have some layout issues that I need to figure out...

No, I just cloned your BigBuckBunny flow and changed the URL to my Lorex DVR sub-streams. The main "trick" is to have enough CPU, as each exec/spawn creates a new process (aka heavyweight thread). What "fixes" the ImageOutput is most likely that the sub-streams are CIF (352x240) in size, barely larger than the BigBuckBunny stream.

I certainly agree with Walter (@krambriw) that having something like ImageOutput displaying on the dashboard would be useful.

I'm using this (derived from other samples/examples on this forum):

[{"id":"d5d6c456.ea1f48","type":"template","z":"17a1aaab.ebd6c5","name":"","field":"payload","fieldType":"msg","format":"handlebars","syntax":"mustache","template":"<img width=\"640px\" height=\"360px\" src=\"data:image/jpg;base64,{{{payload}}}\">","output":"str","x":865,"y":760,"wires":[["dcaef4d7.2e7b98"]]},{"id":"dcaef4d7.2e7b98","type":"ui_template","z":"17a1aaab.ebd6c5","group":"655d6147.d4f78","name":"Viewer","order":2,"width":"13","height":"7","format":"<div ng-bind-html=\"msg.payload\"></div>","storeOutMessages":false,"fwdInMessages":true,"templateScope":"local","x":1005,"y":760,"wires":[["329b55df.ab1b4a"]]},{"id":"d1fd275f.885ab8","type":"base64","z":"17a1aaab.ebd6c5","name":"","action":"str","property":"payload","x":725,"y":760,"wires":[["d5d6c456.ea1f48"]]},{"id":"52ff5600.979d7c","type":"change","z":"17a1aaab.ebd6c5","name":"","rules":[{"t":"move","p":"payload","pt":"msg","to":"msg.filename","tot":"msg"}],"action":"","property":"","from":"","to":"","reg":false,"x":425,"y":760,"wires":[["2bee7152.ac964e","38973a95.ddd966"]]},{"id":"2bee7152.ac964e","type":"file in","z":"17a1aaab.ebd6c5","name":"","filename":"","format":"","chunk":false,"sendError":false,"x":595,"y":760,"wires":[["d1fd275f.885ab8"]]},{"id":"f61b911e.1ef6d","type":"mqtt in","z":"17a1aaab.ebd6c5","name":"Idle Image","topic":"IdleImage","qos":"0","broker":"70fa8cd9.0a1224","x":100,"y":760,"wires":[["89cc3f0c.e7e75","52ff5600.979d7c"]]},{"id":"329b55df.ab1b4a","type":"file","z":"17a1aaab.ebd6c5","name":"","filename":"","appendNewline":true,"createDir":false,"overwriteFile":"delete","x":1135,"y":760,"wires":[[]]},{"id":"89cc3f0c.e7e75","type":"ui_button","z":"17a1aaab.ebd6c5","name":"filename","group":"655d6147.d4f78","order":1,"width":"12","height":"1","passthru":true,"label":"{{msg.payload}}","tooltip":"","color":"#101010","bgcolor":"#83ed7b","icon":"","payload":"","payloadType":"str","topic":"","x":260,"y":815,"wires":[[]]},{"id":"38973a95.ddd966","type":"debug","z":"17a1aaab.ebd6c5","name":"","active":false,"tosidebar":true,"console":false,"tostatus":false,"complete":"false","x":605,"y":700,"wires":[]},{"id":"a55ff06e.293d","type":"ui_dropdown","z":"17a1aaab.ebd6c5","name":"Set UI View","label":"","tooltip":"Enable/Disable Camera Live View","place":"Camera Viewing","group":"b63aa4a5.08af18","order":1,"width":0,"height":0,"passthru":true,"options":[{"label":"Enable","value":"1","type":"str"},{"label":"Disable ","value":"0","type":"str"}],"payload":"","topic":"Alarm/UImode","x":110,"y":300,"wires":[["52fbe3fe.50182c"]]},{"id":"52fbe3fe.50182c","type":"mqtt out","z":"17a1aaab.ebd6c5","name":"Set UI Mode","topic":"Alarm/UImode","qos":"2","retain":"true","broker":"70fa8cd9.0a1224","x":290,"y":300,"wires":[]},{"id":"416ad6ce.a927f8","type":"ui_dropdown","z":"17a1aaab.ebd6c5","name":"Select Camera","label":"","tooltip":"Enable/Disable Camera Live View","place":"Select Camera","group":"b63aa4a5.08af18","order":1,"width":0,"height":0,"passthru":true,"options":[{"label":"Camera 0","value":"0","type":"str"},{"label":"Camera 1","value":"1","type":"str"},{"label":"Camera 2","value":"2","type":"str"},{"label":"Camera 3","value":"3","type":"str"},{"label":"Camera 4","value":"4","type":"str"},{"label":"Camera 5","value":"5","type":"str"},{"label":"Camera 6","value":"6","type":"str"},{"label":"Camera7","value":"7","type":"str"}],"payload":"","topic":"ViewCamera","x":120,"y":375,"wires":[["9f8dc948.7ab3f8"]]},{"id":"9f8dc948.7ab3f8","type":"mqtt out","z":"17a1aaab.ebd6c5","name":"Set Camera","topic":"Alarm/ViewCamera","qos":"2","retain":"true","broker":"70fa8cd9.0a1224","x":290,"y":375,"wires":[]},{"id":"655d6147.d4f78","type":"ui_group","z":"","name":"Camera Viewer","tab":"6e5fd6b6.518e98","order":2,"disp":true,"width":"13","collapse":false},{"id":"70fa8cd9.0a1224","type":"mqtt-broker","z":null,"name":"localhost:1883","broker":"localhost","port":"1883","clientid":"","usetls":false,"compatmode":true,"keepalive":"60","cleansession":true,"birthTopic":"","birthQos":"0","birthRetain":"false","birthPayload":"","closeTopic":"","closePayload":"","willTopic":"","willQos":"0","willRetain":"false","willPayload":""},{"id":"b63aa4a5.08af18","type":"ui_group","z":"","name":" AI Mode","tab":"6e5fd6b6.518e98","order":1,"disp":true,"width":"4","collapse":false},{"id":"6e5fd6b6.518e98","type":"ui_tab","z":"","name":"AI Controller","icon":"dashboard","disabled":false,"hidden":false}]

to select one of 8 cameras to be displayed on the dashboard. It works well for limited usage, but if left displaying long enough the connection drops or the entire webpage becomes very sluggish. Initially I made a version that shows all 8 cameras (4x2), but this choked very quickly, so I made it display only one selected camera.

I'm using files instead of buffers because my initial design runs node-red on the AI host with a localhost MQTT broker, using MQTT from another broker to set the Idle/audio/notify modes of operation.

Once again Walter (@krambriw) has given me the key bit of information, so I can pass buffers via MQTT from my Python AI instead of just filenames, allowing a more "distributed" system. I'm currently working on doing this.
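The gist is just encoding each frame in memory and publishing the raw bytes; a minimal sketch assuming the paho-mqtt client (the 'IdleImage' topic matches the mqtt-in node in the flow above):

    import cv2
    import paho.mqtt.client as mqtt

    client = mqtt.Client()
    client.connect('localhost', 1883)  # the same localhost broker the flow uses
    client.loop_start()

    def publish_frame(frame, topic='IdleImage'):
        # JPEG-encode in memory and publish the raw bytes instead of a filename;
        # on the Node-RED side msg.payload then arrives as a Buffer
        ok, jpg = cv2.imencode('.jpg', frame)
        if ok:
            client.publish(topic, jpg.tobytes(), qos=0)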