I have recently started building a UI in Dashboard 2, but I cannot for the life of me work out how to display a jpg.
I am experimenting with using a node called HikvisionUltimate to pull a jpg image from a camera, which arrives as msg.payload. I know it works because the node config allows you to pull a test image to confirm you've set it up correctly, and that's displayed in the config window.
Rather than write it to a file (which I learned about from searching yay!), I'd like to simply display it on the Dashboard. That feels like it should be an easier thing to do, since the data is already, like, there.
I've been searching and searching but it's difficult because so many search results are related to Dashboard 1. I get the sense that it's something to do with the Template widget thing, but I'm not yet at a skill level where it's obvious what I should do.
I'm the sort of learner who can look at a working example and extrapolate from there. Is there a thread or a tutorial anywhere, specific to Dashboard 2, that describes how to configure a template node in a very basic way to display a jpg?
The use case for me (and this is not what the question is about), is that if all the available physical sensor inputs indicate a human presence, pull a jpg and send it to the ai thingy to see if it's a human and not an animal. If it's a human, whack the jpg on screen, and wake me up with a voice prompt.
We're on a farm, and have been burgled four times this year. Security dudes take 20 minutes to drive out here, so I'm trying everything I can to get an early warning thing happening because I'm tired of this. I don't have money for new beams or hardware (farmer), so I'm trying to be smarter about using the technology I already have, hence Node-Red.
If it is a Buffer object, you will probably want to convert it to base64 before using either of the two linked solutions (there is a contrib node for this, or you can do it in a function node).
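A minimal sketch of the function-node route, assuming msg.payload arrives as a raw JPEG Buffer:

```
// Convert a raw JPEG Buffer into a base64 string
// (leaves the payload alone if it is already a string)
if (Buffer.isBuffer(msg.payload)) {
    msg.payload = msg.payload.toString("base64");
}
return msg;
```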
If you have an old 8th generation (circa 2016) i3 or better with integrated graphics (many Celerons have the right version of the iGPU, but it is harder to tell), you can dedicate it to running my system.
It runs a MobilenetSSD-V2 AI "person detector", then zooms in on that person and reruns the inference with a YOLOv8 model, virtually eliminating false alarms. It is written in Python 3 and uses Node-RED as the user interface. You can see a YouTube video of the previous version of it in action here: https://youtu.be/XZYyE_WsRLI
The previous version supports a much greater range of hardware but is frankly a nightmare to install; a link to it is on the new site. The new version only supports Ubuntu 20.04 or 22.04 and an Intel "Haswell" (I believe) or better, or a system with an NVIDIA GPU (GTX 950 or better). Installation on a suitable Intel processor is significantly easier, since the NVIDIA route involves the somewhat difficult installation of CUDA.
I'm still working on the installation instructions for the GitHub repo, but if you are interested, raise an issue on GitHub and I'll send you the text file documenting the installation steps and try to help you get things going. The Node-RED and Python code should work on Windows, but I've not tested it, and you'll lose some of the housekeeping functions that are done with bash shell scripts.
For just a quick look at an image from a camera, take a look at: https://flows.nodered.org/node/node-red-contrib-image-output and search this forum; there was a lot of discussion about various ways to do this here a few years ago, when I was just starting out with the Node-RED Dashboard. Bart Bertners (apologies if I got the spelling wrong) was one of the major contributors, along with Steve-Mcl.
Edit: Note that there are many used, refurbished "business class" Intel Core laptops (HP ProBooks, EliteBooks, Dells, etc.) available starting at $120 or so, which has made me lose interest in supporting the IoT-class devices, which quickly hit this price point by the time you get everything you need to use one.
Hi @Steve-Mcl, thank you for your help. I didn't know about the dashboard-2 tag, that's a huge help.
The very first link you pointed me to solved my problem.
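In case it helps other beginners, the general idea in the template node looks something like this (a sketch only; the exact markup depends on the linked example, and it assumes msg.payload has already been converted to a base64 string upstream, e.g. with a function node):

```
<template>
    <!-- render the latest camera snapshot once a message arrives -->
    <img v-if="msg?.payload" :src="'data:image/jpeg;base64,' + msg.payload" style="width: 100%;">
</template>
```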
As I now understand the template-ui node, the intent is for you to put your own formatting code into it, allowing it to basically display anything. Is that correct?
If so, the trickiness for an Old like me is then having to learn formatting as well. I can and will do that in time, but I wonder if the node itself wouldn't benefit from having some default presets in a dropdown? I imagine it'd speed up adoption something radical, as well as demonstrating the versatility of the node.
That is a great suggestion. I love it. It's not without hurdles, mind (like where the examples would be stored), but it's something I think we should explore. @joepavitt
I will create a new topic in "Share your projects" once I've finished the README.md on GitHub; I don't like Markdown in general and the GitHub editor in particular.
Not a lot of changes in my Node-RED; mostly I just had to add an MQTT message to terminate the Python code, as a recent Ubuntu upgrade has somehow broken the pkill -2 -f "AI2.py" command I use from an exec node for a normal exit, although pkill -9 -f "AI2.py" still seems to work. I took the easy way out with MQTT rather than troubleshoot Ubuntu.
And I got some great help here to add dynamic labels to the dropdown that selects the camera for live viewing.