Need help thinking about my project


I am quite new to Node-RED. I am a Code Club volunteer, with an interest in Making and Computational Thinking.

I am working on a project to introduce Node-RED to Code Club audiences (9 - 13 years old). I have created a physical enclosure containing a Raspberry Pi, webcam/microphone, Bluetooth speaker, LED lights, a couple of buttons and a servo motor. I am using Node-RED as the platform, with API integrations such as Watson.

The idea is that users can explore physical computing, IoT and AI using the console. However, once I built the hardware, I realised that my workshop participants are going to interact with the console using a Node-RED web interface on their own laptops. There are lots of good motivations for this, with the most important being how slow and frustrating the Node-RED web interface is in Chromium directly on the Raspberry Pi.

The consequences only became apparent to me after I started the coding. Of course, the Node-RED dashboard and flows in general use the laptop webcam, microphone and speakers, not those that I spent hours integrating into my enclosure!

So far, I haven't found a single node or dashboard UI element that will use the Raspberry Pi's webcam, microphone and speakers when controlled via the Node-RED web interface on a laptop. They all use only the laptop's hardware. I suppose I can see the logic of this... or maybe not, actually...

I'm gutted, because I really wanted the console to stand alone as a complete "thing", including both the data inputs and outputs.

I'd appreciate inputs from the community to either correct any false views I have, or help me to reconceptualise my project, or provide code that allows me to happily proceed as I wish (control via laptop, with input and output via Pi).

And everything has to work with Watson (which I haven't even started integrating yet).


You are finding the limitations of the Pi as a platform there. It simply doesn't have enough RAM (unless you are using a Pi 4 with more RAM) to act both as a server and a desktop client at the same time. Though doubtless you could optimise things quite a bit by changing to a simpler desktop environment and stripping out everything that isn't needed.

One of the biggest pitfalls with the Pi is if it starts using swap. Swap is disk-based, of course, and SD card I/O is extremely slow.
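As an aside, you can check whether the Pi is actually dipping into swap straight from `/proc/meminfo`. A minimal sketch (the `dphys-swapfile` commands are Raspberry Pi OS specifics and are an assumption, not from this thread):

```shell
# Check whether the Pi is using swap (reading /proc/meminfo is safe on any Linux box).
# If SwapTotal minus SwapFree is consistently large, the Pi is swapping to SD card.
awk '/^SwapTotal|^SwapFree/ {print $1, $2, $3}' /proc/meminfo

# On Raspberry Pi OS, swap can be turned off entirely (assumed commands, run on the Pi):
#   sudo dphys-swapfile swapoff
#   sudo systemctl disable dphys-swapfile
```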

In terms of using a laptop as the interface, I believe that there are plenty of nodes that will work with the laptop's webcam and microphone. But remember, that is client-side not server-side. The most commonly used client-side service for Node-RED is the Dashboard. So the nodes you are looking for are Dashboard nodes. They extend the use of the Dashboard.

Thanks for responding. I am clear on the RPi limitations, and the client-side nature of the cam and microphone nodes.

I suspect my desire to control from the laptop (client side) but do input and output from the Raspberry Pi (server side) is too unusual to be supported?

If so, then my build can be simplified a lot - no need for webcam, microphone and speaker on the Pi?


Can I make sure I understand what you mean here? Do you mean you want to use the laptop running the browser as the HMI, but use the peripherals (microphone, speakers etc.) on the Pi? I am surprised there are not nodes for that.
In fact, looking on the Flows site, the first one I found when searching for "microphone" is node-red-contrib-mic, which looks to me as if it uses the server-side mic. Though it is ancient and may no longer work.

There is also a flow for using the Pi camera, and some other nodes, with several more examples on the Flows site.

Yes, that is exactly what I want to achieve.
I'll take a look at the links you and the other posters have supplied.



I have image capture working as intended (thanks to the links provided).
On the laptop, I press the inject node, an image is captured from the webcam attached to the Raspberry Pi and then displayed on the dashboard.

The details are:

  1. Install node-red-contrib-rpi-imagecapture
  2. Install node-red-node-base64
  3. Install the fswebcam package
  4. Create a fswebcam config file at /home/pi/fswebcam.conf
  5. Insert and adjust the config settings from the example found in the node-red-contrib-rpi-imagecapture page
  6. Apply the code fix
  7. Create a flow based on the one listed above (ending in 0d40c), replacing raspistill (which only works with the Raspberry Pi camera module) with imagecapture (which works with a USB webcam)
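Steps 1-5 above can be sketched roughly as follows. The config values are illustrative assumptions (adjust the device and resolution for your webcam); the fswebcam option names come from its standard long options, and the install commands would need to run on the Pi itself:

```shell
# Sketch of the install and config steps; run on the Pi (commented out here):
#   sudo apt-get install fswebcam
#   cd ~/.node-red && npm install node-red-contrib-rpi-imagecapture node-red-node-base64

# Create the fswebcam config file ($HOME is /home/pi for the default pi user).
# All values below are example settings, not the exact ones used in this thread.
cat > "$HOME/fswebcam.conf" <<'EOF'
device /dev/video0
resolution 640x480
no-banner
jpeg 90
skip 2
save /home/pi/image.jpg
EOF

# Sanity check on the Pi (assumed usage):
#   fswebcam --config /home/pi/fswebcam.conf
```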

Now I need to work on flows for video capture, audio capture, and playback!
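For the audio side, one hedged starting point (device names and file paths here are my assumptions, not from this thread) is to drive the standard ALSA tools from Node-RED exec nodes, which run server-side on the Pi:

```shell
# Sketch: commands an exec node could run for server-side audio on the Pi.
# plughw:1,0 is a guess at the USB mic's ALSA device; check yours with `arecord -l`.
REC_CMD='arecord -D plughw:1,0 -f cd -d 5 /tmp/clip.wav'   # capture 5 s of CD-quality audio
PLAY_CMD='aplay /tmp/clip.wav'                             # play it back through the Pi

echo "record exec node command: $REC_CMD"
echo "play exec node command:   $PLAY_CMD"
```

Playback through the Bluetooth speaker would additionally need the Pi's ALSA/Bluetooth plumbing (e.g. a bluealsa setup) pointed at the right output device.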


