Hi
I am quite new to Node-RED. I'm a Code Club volunteer with an interest in Making and Computational Thinking.
I am working on a project to introduce Node-RED to Code Club audiences (ages 9 to 13). I have built a physical enclosure containing a Raspberry Pi, a webcam/microphone, a Bluetooth speaker, LED lights, a couple of buttons and a servo motor. I am using Node-RED as the platform, with cloud API integrations such as IBM Watson.
The idea is that users can explore physical computing, IoT and AI through the console. However, once I had built the hardware, I realised that my workshop participants are going to interact with it through the Node-RED editor and dashboard in a browser on their own laptops. There are good reasons for this, the most important being how slow and frustrating the Node-RED editor is when run in Chromium directly on the Raspberry Pi.
The consequences only became apparent to me after I started coding. Of course, the dashboard widgets run in the browser, so they use the laptop's webcam, microphone and speakers, not the ones I spent hours integrating into my enclosure!
So far, I haven't found a single node or dashboard UI element that will use the Raspberry Pi's webcam, microphone and speakers when the flows are driven from a browser on a laptop. They all use the laptop hardware. I can see the logic, I think: anything that captures or plays media through the dashboard does it in the browser, and the browser only sees the laptop's devices. But it's not what I want.
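The only workaround I've come up with so far is on the output side. As I understand it, the exec node runs its command on the machine hosting Node-RED (the Pi), not in the browser, so something like this Function node feeding an exec node (blank command, "append msg.payload" ticked) might push sound out of the Pi itself. This is just a sketch: the WAV path is a made-up placeholder, and I'm assuming aplay (alsa-utils) is installed and the Bluetooth speaker is set up as the default ALSA output.

```javascript
// Function node: build a shell command for an exec node wired to its output.
// The exec node runs on the Pi that hosts Node-RED, so the sound should come
// out of the Pi's speaker regardless of which laptop's browser clicked the
// dashboard button that triggered this flow.
// Assumptions: aplay (from alsa-utils) is installed, the Bluetooth speaker is
// the default ALSA device, and the WAV file path below is just a placeholder.
msg.payload = "aplay /home/pi/sounds/hello.wav";
return msg;
```

If that reasoning is right, the same pattern (exec running something like fswebcam for webcam stills, or arecord for audio capture) might cover the camera and microphone inputs too, but I haven't tested any of this, so corrections are welcome.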
I'm gutted, because I really wanted the console to stand alone as a complete "thing", including both the data inputs and outputs.
I'd appreciate input from the community: correct any false assumptions I've made, help me reconceptualise the project, or show me an approach that lets me proceed as I'd hoped (control from the laptop, with input and output on the Pi).
And everything has to work with Watson (which I haven't even started integrating yet).
Thanks
Brendon