I would like to connect a mic and speaker to my Raspberry Pi and be able to have a live chat with another person on their cellphone, connecting to my dashboard live page... Does anyone know which nodes I should use to accomplish this?
An almost complete list of nodes is available at flows.nodered.org, where you can search by capability.
Hi ukmoose, I have gone through those, but often enough they don't come with examples, so it doesn't help.
You don't need Node-RED for this. Just install Skype or similar.
Hi Nassim (@nfarfhat),
We tried to do something similar in this discussion some time ago. The problem is that the dashboard's audio-out node can only play a single audio buffer of finite length, for example an mp3 file. What we wanted was to play an infinite audio stream, like a microphone.
However, web audio (which is used by the audio-out node) currently doesn't support (infinite) audio streaming, so I did a long series of experiments to work around that. In the end I had a solution that worked fine on desktop browsers: good quality audio streaming in the Node-RED dashboard. Unfortunately, on mobile browsers (like Chrome on Android) there were a lot of annoying distortions between the audio chunks. I couldn't get rid of them, so I eventually gave up.
Remark: a few days ago I published a beta version of my node-red-contrib-wav-beta node, which I had originally developed for all the above experiments. Last week I noticed by accident that this node contained a bug: the length of the chunks was calculated incorrectly, so 'perhaps' that is why the audio chunks were not appended seamlessly by my Chrome on Android (which would cause distortions). But that is only a wild guess, and I have no time at the moment to test that theory...
Thank you so much Bart. This is a really interesting node, can't wait to try it out.
Do you think it's possible to use it in a bi-directional way, e.g.:
Direction 1: using the Raspberry Pi mic (where the Node-RED server runs), acquire sound and send it to the UI dashboard on the other side (mobile or desktop).
Direction 2: using the mic on the device accessing the UI dashboard, send the sound stream to the Raspberry Pi where the Node-RED server runs and output it to the speaker.
I think you should start by searching for existing information on this forum. But even then it will be a challenge, because I don't think anyone has ever created a full setup. If only I could buy some free time somewhere...
You can find my experiment to capture audio from a USB microphone on a Raspberry Pi here. Summarized:
- Use the Microphone node from the node-red-contrib-micropi suite. In that node you have to specify the number of channels, bits per sample, and so on. This node then injects chunks of raw audio samples into Node-RED.
- You can send those audio chunks across the wires in your Node-RED flow.
- Since other nodes only see huge chunks of bytes, you have to tell them what those audio samples represent. For that you can use my node-red-contrib-wav node, which adds a header to the raw audio. In my node you again have to specify the same configuration (number of channels, etc.) as you entered in the micropi node above.
- If you send those WAV chunks to the audio-out node (i.e. to the dashboard), that won't work, since the audio-out node doesn't support streaming, as I already explained in my post above.
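For reference, the header step above boils down to prepending a standard 44-byte RIFF/WAV header to the raw PCM bytes. Below is a generic sketch of that format (this is the WAV spec, not the actual code of node-red-contrib-wav, whose internals may differ):

```javascript
// Build a standard 44-byte WAV (RIFF) header for raw PCM samples.
// Generic sketch of the WAV format; node-red-contrib-wav may differ in details.
function wavHeader(dataLength, channels, sampleRate, bitsPerSample) {
    const byteRate = sampleRate * channels * bitsPerSample / 8;
    const blockAlign = channels * bitsPerSample / 8;
    const header = Buffer.alloc(44);
    header.write('RIFF', 0);
    header.writeUInt32LE(36 + dataLength, 4);   // total file size minus 8
    header.write('WAVE', 8);
    header.write('fmt ', 12);
    header.writeUInt32LE(16, 16);               // fmt chunk size (PCM)
    header.writeUInt16LE(1, 20);                // audio format 1 = PCM
    header.writeUInt16LE(channels, 22);
    header.writeUInt32LE(sampleRate, 24);
    header.writeUInt32LE(byteRate, 28);
    header.writeUInt16LE(blockAlign, 32);
    header.writeUInt16LE(bitsPerSample, 34);
    header.write('data', 36);
    header.writeUInt32LE(dataLength, 40);
    return header;
}

// Wrap a chunk of raw mono 44.1 kHz 16-bit samples (0.1 s of silence here):
const raw = Buffer.alloc(8820);
const wav = Buffer.concat([wavHeader(raw.length, 1, 44100, 16), raw]);
```

Note that the channel count, sample rate and bit depth passed here must match what the microphone node was configured with, which is exactly why the same settings have to be entered twice in the flow.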
That is another issue, which I haven't tried yet. But from this discussion you see that this is again a black hole in the dashboard.
The Web Audio API is quite a hell to get running smoothly. The smallest gap in the audio results in very annoying distortions. But the MediaSource API is now supported by all major browsers.
And that API has all kinds of audio streaming features available out of the box. It would be great to have a node-red-contrib-ui-mediasource node to offer this kind of functionality in the dashboard. Unless somebody volunteers to build such a node, you will have to wait until I have time. But I have already promised lots of stuff to other users, and I can't keep them waiting...
[EDIT]: even when you have achieved only a small success in your flows, please post here what you have achieved, so we can all learn from it...
[EDIT]: in this article you can see what I mean. When you have a gap in your audio (i.e. no samples, because the next sample arrived a little too late in your browser), you will get a small period of 'silence'. But the transition from sound to silence is too steep (which you will hear very well!):
But the MediaSource API will make those transitions smoother automatically ...
Thank you Bart, you are a scholar and a gentleman.
I will definitely post my progress in this thread when I get to work on it. It's a long-term goal, as I have a one-year-old and two other bigger kids to take care of, so I use whatever free time I can find after 8pm.
My project is an interesting one, and I have developed a few pieces so far. I designed a UI dashboard controlling a Raspberry Pi connected to a robot on wheels. On the dashboard I offer 4 direction arrows to control the direction of the robot, plus a live streaming video feed from my Pi camera. My motors are hooked to optical encoders, so I can control both motors in a feedback gain loop to hold an exact desired speed using an H-bridge PWM signal. I can also know their location at any given time.
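As an aside, that kind of encoder-feedback speed loop can be sketched in a few lines. This is a hypothetical proportional-integral controller with made-up gains and a 0-255 PWM range; the real robot's gains, update rate and PWM resolution will differ:

```javascript
// Minimal proportional-integral speed controller sketch: adjusts a PWM
// duty cycle (0..255) so the measured encoder speed tracks a setpoint.
// Gains kp/ki, the PWM range and the tick units are example values only.
function createSpeedController(kp, ki) {
    let integral = 0;
    return function update(targetTicksPerSec, measuredTicksPerSec, dt) {
        const error = targetTicksPerSec - measuredTicksPerSec;
        integral += error * dt;
        const pwm = kp * error + ki * integral;
        return Math.max(0, Math.min(255, Math.round(pwm))); // clamp to H-bridge range
    };
}

// Example: one control step at 10 Hz (dt = 0.1 s),
// wanting 200 encoder ticks/s while measuring 150.
const controller = createSpeedController(0.5, 2.0);
const pwm = controller(200, 150, 0.1);
```

In a Node-RED flow this would typically live in a Function node fed by the encoder tick counts, with its output wired to the PWM output node.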
All I'm missing is the bidirectional voice communication between the robot and the Dashboard UI controller.
The idea is to have some of my family members who live far away be able to control the robot from a distance and speak to us as if they were here themselves: video, audio and motion all in one, giving them perfect freedom to move around the house through a robot proxy.
I also would like to use it myself as a home monitoring tool while I'm away on vacation, and eventually build it a charging docking station and guide it in there myself every time I leave the dashboard. This could eventually become automatic: by capturing the encoder ticks, I could know at all times where the robot is and have a button to make it return home at any time.
There should also be an autonomous mode that would let it browse anywhere in the house and gather information into an unsupervised neural network through reinforcement learning techniques. Basically a smart AI robot. There are specialized AI cameras that take care of much of the AI in hardware, like facial tracking or smile tracking... This would be the evolution of my Pi camera and of the robot itself.
I also have the ability to add emotions to it through a mini LCD screen where I can control smiles and such....
As I said... a very, very long-term plan considering how busy I am... But it keeps me active, and for sure, when I get any breakthrough, I'll post it for all.
Thank you and we'll stay in touch.
Hi Nassim (@nfarfhat),
I was very curious whether the MediaSource API could be of any help to us in this use case. So I have quickly implemented a beta version of a new contribution: node-red-contrib-ui-media-source.
You can install it like this:
npm install bartbutenaers/node-red-contrib-ui-media-source
Currently this node suite only contains a single node, the Recorder node. This node allows you to capture audio from your microphone in the Node-RED dashboard. I have tested this on Windows via Chrome, Edge and Firefox.
So now it is up to you to do some test work, while I do other things... You will end up with lots of data in Node-RED: you get a massive amount of data chunks, which are buffers containing the raw audio. I have no time left to experiment with the raw audio samples themselves in Node-RED, but I assume you could record N seconds and append all the chunks into a file, and then try to play that file somehow (e.g. with some third-party tool), so we at least know whether it is real audio...
Have fun with it !!
A lot of information to go through. Will let you guys know if any advancement.
I thank you for your time
It is a bit early to test recording audio from your microphone via the Node-RED dashboard ...
I have added some extra options to the Recorder node:
So now you can choose the audio encoding:
- Raw audio samples
- MP3 chunks
The raw audio samples use a lot of bandwidth when transferred from the dashboard to the Node-RED flow. So you can now convert them to MP3 chunks in the dashboard (on the client side) before you send them across the network...
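As a rough sanity check on that bandwidth saving (assuming mono 44.1 kHz 16-bit raw audio versus a 192 kbps MP3 stream; the node's actual encoder bitrate is an assumption here):

```javascript
// Rough bandwidth comparison: raw PCM vs MP3 (example figures only).
const sampleRate = 44100;          // Hz
const bitsPerSample = 16;
const channels = 1;

const rawBytesPerSec = sampleRate * channels * bitsPerSample / 8;  // 88200 B/s
const mp3BitsPerSec = 192000;      // assumed encoder bitrate
const mp3BytesPerSec = mp3BitsPerSec / 8;                          // 24000 B/s

const ratio = rawBytesPerSec / mp3BytesPerSec;   // roughly 3.7x smaller
```

Which lines up with the observation later in this thread that the MP3 chunks come out at about a quarter of the raw size.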
And I have added some basic audio manipulation:
- Echo cancellation
- Noise suppression
- Automatic gain control
At the start I had a visual button in my Recorder UI widget, but I have removed it: my widget currently has NO visual components. This way everybody can choose which other off-the-shelf dashboard widget they want to use to enable/disable the microphone recording. For example, I use a normal Switch (that just sends a true/false to my Recorder node) to toggle the recording:
Wow Bart, that's amazing. I was still waiting for the mic to arrive from Banggood, so there's no rush. I'm really happy you added the ambient noise cancellation and compression. This seems to be shaping up quite nicely.
So let me understand. The node you created captures the mic on the dashboard client side, compresses the audio to MP3 at a certain bitrate, and transmits it to the server side... I'm guessing this is still early, since the transferred data might still be raw?
How would you get it to be received on the Raspberry server side and sent to the speaker? Maybe we need another node to capture the raw data wrapped in a header, which could be unwrapped, decompressed and sent out to the speaker... From your earlier post I think you mentioned that this already exists.
Did you push your latest Recorder node to GitHub?
I ask because I'm not seeing the options you mentioned (buffer length, encoder, the automatic options, etc.).
I also tried the sample that came with the Recorder node (tested on both desktop and mobile UI, speaking into the microphone of my mobile phone or laptop) and have not been able to see any messages in the debug sidebar of the Node-RED editor.
If messages had shown up in the debug window, though, could I have output them to the speaker of the Raspberry Pi using the "audio out" node, directed not to the client speaker but to the server speakers on the Raspberry side?
Ah, could you please share a link? I ordered one from AliExpress, but it was awful quality...
That is correct. I will add your summary on the readme page.
Because I'm sending raw audio or MP3 to the server, but for both encodings I haven't been able to test yet whether the data contains (good) audio. Perhaps the data is useless. About the MP3: I only know it is compressed because the size is about 4x smaller...
If you mean the speaker of the machine running the dashboard, then the answer is: no, this doesn't exist yet. On Saturday I tried to create a Player node (also based on MediaSource, like the Recorder node), but it seems that MediaSource doesn't support raw audio. Very weird. That was my trigger to switch to MP3... I would have to do an end-to-end test between two dashboards when I have time...
It is on GitHub, but not on npm yet. You can install it from GitHub like this:
npm install bartbutenaers/node-red-contrib-ui-media-source
Mobile could be an issue, because I haven't tested it on mobile yet, and the MediaSource API is very platform dependent. But on desktop too? I see your first node has status 'OFF'. Have you switched the microphone switch ON on your dashboard? On Chrome I got a popup asking to allow the application to access the microphone...
The following is the link to the mic I ordered:
I have no idea if the quality is great or not; it hasn't even shipped yet and it's been 3 days, so I'm starting to become a little skeptical. In any case, I only saw your response (about the AliExpress one you received) after buying it myself; it might be of equivalent quality.
I meant in the direction from the client (desktop UI application) to the server where the Raspberry runs Node-RED. I think I should expand your example and add an audio node after your Recorder node:
This would take the MP3 samples collected by the Recorder node on the client device (mobile or desktop), decompress the MP3 into WAV, and send it to the Raspberry Pi speaker. But I was just curious: how exactly does the Node-RED server make the distinction between the speaker on the client side and the speaker on the server side (itself)?
Thanks, I'll try that out. I originally installed it as described on your GitHub page: npm install email@example.com
The OFF is simply the snapshot state I took it in, but yes, I did turn the switch ON, lol! Good remark though, you never know. In any case, my test consisted of speaking into my laptop's built-in microphone (next to the camera) on the client side, with my Raspberry connected through HDMI to the TV on the server side. I expected sound to come out of the TV, but I suspect the recorder did not sample from that microphone. I see now that there are 2 issues:
1- I should wait for my USB microphone to arrive from China, talk into it (on the client side), and find out whether the debugger can at least output some messages while I talk.
2- Next I should find a way to decompress the MP3 stream and output it to the server speaker (HDMI or USB, to be seen).
Will be testing some more when I get free time.
Once more... Thanks for everything Bart.
Thanks for the link! It looks 100% identical to mine, which was awfully bad: I had to scream into it to get a very weak sound signal. But if you get it working somehow, please share how you managed to do it...
If that works fine, I will eat up my shoes...
Here you can find the explanation, if you are interested in some evidence.
So since web audio (behind the audio-out node) doesn't support audio streaming (i.e. it can only play a single mp3, for example), you cannot do it like this:
Unless we could find out why my streaming implementation for the audio-out node (see the same discussion in the above link) has nasty distortions between the audio chunks on mobile Chrome.
Perhaps the MediaSource API (which I also use for my Recorder node) offers better audio streaming capabilities than Web Audio. I have already created a Player node (based on MediaSource), but I haven't tested it yet. Then we could do something like this:
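On the browser side, the usual MediaSource pattern is that a SourceBuffer can only accept one appendBuffer() call at a time, so incoming audio chunks have to be queued while it is 'updating'. Below is a sketch of just that queueing logic, with the browser-specific SourceBuffer abstracted behind a parameter so the idea is visible outside a browser (this is generic MediaSource usage, not the actual code of the Player node):

```javascript
// Sketch of the chunk-queueing pattern typically used with the MediaSource
// API: queue incoming chunks and only append the next one when the
// SourceBuffer has finished its previous append. The real browser
// SourceBuffer is abstracted behind `buffer` here.
function createChunkQueue(buffer) {
    const queue = [];
    function flush() {
        if (!buffer.updating && queue.length > 0) {
            buffer.appendBuffer(queue.shift());
        }
    }
    return {
        push(chunk) { queue.push(chunk); flush(); },
        onUpdateEnd() { flush(); },   // wire to the SourceBuffer 'updateend' event
        pending() { return queue.length; }
    };
}

// Example with a fake SourceBuffer that is busy after every append:
const appended = [];
const fake = {
    updating: false,
    appendBuffer(c) { appended.push(c); this.updating = true; }
};
const q = createChunkQueue(fake);
q.push('chunk1');                  // appended immediately
q.push('chunk2');                  // queued: buffer is still busy
fake.updating = false;
q.onUpdateEnd();                   // 'updateend' fired: chunk2 gets appended
```

The appeal of this approach is exactly what was missing from Web Audio: the browser's own decoder handles the transitions between chunks, instead of the flow having to align raw samples perfectly itself.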
That is weird. When your dashboard is displayed on your laptop and you switch the recorder button ON (via that same dashboard), the audio chunks should arrive in your Node-RED flow. Which browser are you using?
Hi Bart, I could finally come back to this. I hooked the Recorder node to a debug node to see some chunks of data, but I get nothing. I do have a mic configured PROPERLY on my Win10 machine, the one that comes with the webcam on top of the monitor (as all machines do), but no sound data chunks are coming out... I did connect the debug node to the enable-microphone switch and got true and false payloads, so I know the Recorder node is receiving the trigger. But what it does with it, I don't know... At what interval it shoots out a payload of sound data, I don't know either...
I am using Chrome to accomplish this.
Apologies for the delay, but I have lost a lot of spare time over the last few weeks due to my daily job...
Not sure about that. Will need to investigate that further in the near future.
I will need to add some extra logging ...
I have done some updates on GitHub, but I'm not sure if that is going to solve the issue.
I still don't have any sound in my Speaker node
So for troubleshooting I have added a canvas to the Microphone node. Now you can at least display the recorded waveform in your dashboard as soon as you speak into your microphone:
So now at least we know this part is already working...
I wanted to add an identical waveform visualizer to the Speaker node, but at the moment I have no clue how to do that, because it is based on the Media Source Extensions API and I can't find any link to the Web Audio API.
If I could implement that somehow, we could compare whether the microphone waveform from dashboard 1 arrives identically in the speaker on dashboard 2.