Play Sound from USB Microphone

Hi Glenn,

I have very little time at the moment, but I would like to experiment with audio in Node-RED in a couple of weeks. It would be nice if you could give some extra information about what you have achieved so far (and please export your flow and share it here).

  • I also don't have a microphone on my Raspberry Pi. Which one are you using?

  • I had a quick look at node-red-contrib-micropi and I see that it supports multiple nodes (?):

    Can you also use the MicroPi node?

  • It seems both nodes return a raw audio stream. I assume that is a stream of infinite length?

  • You say "it doesn't play the recorded piece until I press stop". Do I interpret this correctly: the microphone node doesn't send any data on the wire (to the audio-out node) until you stop the stream, and then it sends a WAV file containing the whole recorded audio. Is that correct? Or does it send data continuously to the audio-out node, which buffers the data and only starts playing after you stop the stream? In summary: which of the two nodes buffers the data?

  • Can you give some more info about the data on the wire?

  • ...

Thanks!
Bart

Evening Dave,

I just had a very quick look at both nodes used by Glenn. Looking at the code, I would expect that streaming is already implemented everywhere? It would be nice if you could indicate in which area you expect the problem to be. I have no microphone available, so I cannot see what happens ...

  1. The node-red-contrib-micropi node sends data chunks on its output:

    audioStream.on('data', (data) => {
        if (!audioStream.isSilenced()) {
            node.send({status: 'data', payload: data});
        }
    });
    

    Unless there is a bug here, I would expect to see a continuous stream of data chunks over the wire...

  2. The server-side Audio-Out node receives those buffers, and sends them to the client-side:

    this.on('input', function(msg) {
        if (Buffer.isBuffer(msg.payload)) {
            ui.emit('ui-audio', { audio:msg.payload, tabname:node.tabname, always:node.always });
        }
    });
  3. The client-side Audio-Out node receives this buffer, decodes it and plays it:

    events.on('ui-audio', function(msg) {
        ...
        audiocontext = audiocontext || new AudioContext();
        var source = audiocontext.createBufferSource();
        var buffer = new Uint8Array(msg.audio);
        audiocontext.decodeAudioData(buffer.buffer, function(buffer) {
            source.buffer = buffer;
            source.connect(audiocontext.destination);
            source.start(0);
        });
    });

Thanks !
Bart

Must admit I don’t know. Don’t have a mic either so not tried that node. May just be a config problem he has.

No problem. I have ordered a microphone for the Raspberry Pi from AliExpress. That allows me to do an end-to-end test and debug the whole scenario...

Two reasons why I bought it on AliExpress:

  1. Good price.
  2. But more importantly: delivery takes 3 weeks, so at least I have some extra time to finish the blockly node first :wink:

Hi Bart,

Apologies I have parked this for a while.

Flow is below.

Basically, if you hit record it records to a file and then serves it up when you hit playback. It's the best I have achieved to date. The top node output (live raw data) does nothing.

I use the webcam microphone and I have also tried https://www.amazon.co.uk/gp/product/B0757JT9S7/ref=oh_aui_search_detailpage?ie=UTF8&psc=1

[{"id":"38f4fce9.a7a8c4","type":"microPi","z":"3604bb55.40a244","name":"microPi","filename":"/home/pi/audio/sound.wav","domain":"http://localhost:8989/getAudio","rate":"16000","bitwidth":"16","endian":"little","encoding":"signed-integer","channels":"1","silence":"5","debug":"true","mode":"666","x":380,"y":140,"wires":[[],["b83ead5b.13deb"],[]]},{"id":"14b5ec.4925ca15","type":"inject","z":"3604bb55.40a244","name":"record","topic":"","payload":"true","payloadType":"bool","repeat":"","crontab":"","once":false,"onceDelay":"","x":110,"y":120,"wires":[["38f4fce9.a7a8c4"]]},{"id":"7fb8dc82.8def04","type":"inject","z":"3604bb55.40a244","name":"stop","topic":"","payload":"false","payloadType":"bool","repeat":"","crontab":"","once":false,"x":110,"y":160,"wires":[["38f4fce9.a7a8c4"]]},{"id":"eb48aa20.6a6b28","type":"ui_template","z":"3604bb55.40a244","group":"f19ce9d9.8e1b38","name":"record","order":28,"width":"1","height":"1","format":"\n<md-button class=\"vibrate filled touched smallfont rounded\" style=\"background-color:#34495e\" ng-click=\"send({payload: 'Hello World'})\"> \n Start\n</md-button> \n\n","storeOutMessages":true,"fwdInMessages":true,"templateScope":"local","x":90,"y":80,"wires":[["7323cdff.622174"]]},{"id":"7323cdff.622174","type":"change","z":"3604bb55.40a244","name":"","rules":[{"t":"set","p":"payload","pt":"msg","to":"true","tot":"bool"}],"action":"","property":"","from":"","to":"","reg":false,"x":240,"y":80,"wires":[["38f4fce9.a7a8c4"]]},{"id":"aaf3e9b6.1d3cd8","type":"ui_template","z":"3604bb55.40a244","group":"f19ce9d9.8e1b38","name":"stop","order":28,"width":"1","height":"1","format":"\n<md-button class=\"vibrate filled touched smallfont rounded\" style=\"background-color:#34495e\" ng-click=\"send({payload: 'Hello World'})\"> \n Stop\n</md-button> \n\n","storeOutMessages":true,"fwdInMessages":true,"templateScope":"local","x":90,"y":200,"wires":[["2dda1822.24cb58"]]},{"id":"2dda1822.24cb58","type":"change","z":"3604bb55.40a244","name":"","rules":[{"t":"set","p":"payload","pt":"msg","to":"false","tot":"bool"}],"action":"","property":"","from":"","to":"","reg":false,"x":240,"y":200,"wires":[["38f4fce9.a7a8c4"]]},{"id":"b83ead5b.13deb","type":"change","z":"3604bb55.40a244","name":"","rules":[{"t":"change","p":"payload","pt":"msg","from":"metadata","fromt":"re","to":"","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":560,"y":140,"wires":[["3311e37f.8bbb2c"]]},{"id":"3311e37f.8bbb2c","type":"ui_audio","z":"3604bb55.40a244","name":"","group":"f19ce9d9.8e1b38","voice":"","always":true,"x":740,"y":140,"wires":[]},{"id":"f19ce9d9.8e1b38","type":"ui_group","z":"","name":"Devices","tab":"54f69592.154adc","order":2,"disp":false,"width":"6","collapse":false},{"id":"54f69592.154adc","type":"ui_tab","z":"","name":"Ratby Road","icon":"dashboard"}]

Hi @GChapo, @dceejay,

I have been doing a lot of experiments, and I finally have a solution that seems to be working. Since these were my first steps into the world of audio signals, don't hesitate to let me know if something is not correct!


[{"id":"2852077b.af8aa8","type":"microphone","z":"279b8956.27dfe6","name":"microphone","endian":"little","bitwidth":"16","encoding":"signed-integer","channels":"1","rate":"22050","silence":"60","debug":false,"active":true,"x":950,"y":1220,"wires":[["c61c107a.970a2"]]},{"id":"c61c107a.970a2","type":"wav-headers","z":"279b8956.27dfe6","name":"","channels":1,"samplerate":22050,"bitwidth":16,"x":1136,"y":1220,"wires":[["7597ba6e.5adbb4"]]},{"id":"7597ba6e.5adbb4","type":"ui_audio","z":"279b8956.27dfe6","name":"","group":"180d570c.93b059","voice":"0","always":false,"x":1320,"y":1220,"wires":[]},{"id":"5482134.a222dec","type":"inject","z":"279b8956.27dfe6","name":"record","topic":"","payload":"true","payloadType":"bool","repeat":"","crontab":"","once":false,"onceDelay":"","x":770,"y":1200,"wires":[["2852077b.af8aa8"]]},{"id":"363b5caf.bf44c4","type":"inject","z":"279b8956.27dfe6","name":"stop","topic":"","payload":"false","payloadType":"bool","repeat":"","crontab":"","once":false,"x":770,"y":1240,"wires":[["2852077b.af8aa8"]]},{"id":"180d570c.93b059","type":"ui_group","z":"","name":"Devices","tab":"493cf398.76af9c","order":2,"disp":false,"width":"6"},{"id":"493cf398.76af9c","type":"ui_tab","z":"","name":"Ratby Road","icon":"dashboard"}]

I lost two evenings figuring out why I got no sound at all. In the end I read some reviews of my USB microphone: I had to scream loudly from 2 inches away to be able to hear anything. So I wouldn't recommend anybody buy such a microphone :rage:

  1. I use the 'microphone' node (from node-red-contrib-micropi) to get audio samples from the USB microphone on my Raspberry Pi. That node produces a stream of data chunks containing raw audio samples (PCM). I don't use the MicroPi node (from the same node set), since that node - as Glenn already discovered - only starts playing the data when you send a stop message!

    Remark: those nodes keep streaming audio when you (re)deploy your flow, which is annoying since you end up with a mix of multiple streams. I created a pull request to stop the streams automatically at deploy.

  2. However, when you send such raw audio chunks to the audio-out node in the dashboard, the browser cannot decode the raw audio samples. It just doesn't know what all the bytes mean, so I had to develop node-red-contrib-wav-headers, which adds WAV headers to the raw audio data.

    Remark: I haven't published this node on NPM yet, so you can install it currently like this:
    npm install bartbutenaers/node-red-contrib-wav-headers

    Remark: the readme file of this node contains an introduction to PCM, WAV, ... to get you started.

    Remark: the disadvantage is that you have to enter the same settings in both the microphone node and the wav node. It would have been better if the microphone node could produce a WAV stream itself....
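
To make concrete what such a header contains, here is a minimal sketch of a 44-byte PCM WAV header in Node.js. This is only my own illustration (the `wavHeader` function and its values are example assumptions), not the actual code of the wav-headers node:

```javascript
// Minimal sketch of a 44-byte PCM WAV header in Node.js.
// Illustration only - not the actual node-red-contrib-wav-headers code.
function wavHeader(dataLength, channels, sampleRate, bitWidth) {
    var byteRate = sampleRate * channels * bitWidth / 8;
    var blockAlign = channels * bitWidth / 8;
    var header = Buffer.alloc(44);
    header.write('RIFF', 0);                    // RIFF chunk id
    header.writeUInt32LE(36 + dataLength, 4);   // total chunk size
    header.write('WAVE', 8);                    // RIFF type
    header.write('fmt ', 12);                   // format sub-chunk id
    header.writeUInt32LE(16, 16);               // format sub-chunk size (PCM)
    header.writeUInt16LE(1, 20);                // audio format: 1 = uncompressed PCM
    header.writeUInt16LE(channels, 22);         // e.g. 1 = mono
    header.writeUInt32LE(sampleRate, 24);       // e.g. 22050 Hz
    header.writeUInt32LE(byteRate, 28);         // bytes per second
    header.writeUInt16LE(blockAlign, 32);       // bytes per sample frame
    header.writeUInt16LE(bitWidth, 34);         // e.g. 16 bits per sample
    header.write('data', 36);                   // data sub-chunk id
    header.writeUInt32LE(dataLength, 40);       // raw PCM byte count
    return header;
}

// Prepend the header to a chunk of raw PCM samples (here: 1024 bytes of silence):
var pcm = Buffer.alloc(1024);
var wav = Buffer.concat([wavHeader(pcm.length, 1, 22050, 16), pcm]);
console.log(wav.length); // 1068 (44-byte header + 1024 bytes of samples)
```

Note that for an endless stream the total data length is not known in advance, so the two length fields can only be approximate - one more reason why adding headers to a live stream is a bit of a workaround.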

  3. We now have WAV audio chunks which can be decoded by the browser. But the audio-out node can currently only play a single audio file:

    try {
        audiocontext = audiocontext || new AudioContext();
        var source = audiocontext.createBufferSource();
        var buffer = new Uint8Array(msg.audio);
        audiocontext.decodeAudioData(buffer.buffer, function(buffer) {
            source.buffer = buffer;
            source.connect(audiocontext.destination);
            source.start(0);
        });
    }
    catch(e) { alert("Error playing audio: "+e); }

    Indeed, decodeAudioData does not seem to be designed for chunked audio streams, i.e. you need to feed it a complete audio file of finite length. But I found a workaround, and I changed the dashboard code like this:

    try {
        audiocontext = audiocontext || new AudioContext();
        var buffer = new Uint8Array(msg.audio);
        audiocontext.decodeAudioData(buffer.buffer, function(audioBuffer) {
            audioStack.push(audioBuffer);
            while (audioStack.length) {
                var chunkBuffer = audioStack.shift();
                var source = audiocontext.createBufferSource();
                source.buffer = chunkBuffer;
                source.connect(audiocontext.destination);
                if (nextTime == 0) {
                    nextTime = audiocontext.currentTime + 0.01;
                }
                source.start(nextTime);
                // Make the next buffer wait the length of the last buffer before being played
                nextTime += source.buffer.duration;
            }
        }, function(e) {
            console.log("Error decoding audio: " + e);
        });
    }
    catch(e) { console.log("Error playing audio: " + e); }

    Remark: the audioStack and nextTime variables are declared here:

    app.controller('MainController', ['$mdSidenav', '$window', 'UiEvents', '$location', '$document', '$mdToast', '$mdDialog', '$rootScope', '$sce', '$timeout', '$scope',
     function ($mdSidenav, $window, events, $location, $document, $mdToast, $mdDialog, $rootScope, $sce, $timeout, $scope) {
         var audioStack = [];
         var nextTime = 0;

    Remark: I had to change the 'alert' in the catch to 'console.log', otherwise - in case the stream cannot be decoded - an error popup would be displayed for every chunk!

    Remark: the inventor of this mechanism added a 50ms latency 'to work well across systems'. I assume a constant value is fine, but otherwise a number input field should be added to the audio-out node's config screen?

This was only a proof of concept, but - due to lack of time - there are still some TODOs:

  • Some extra tests need to be done. It would be nice if somebody else could do this ... E.g. I did a basic test to check that the audio-out node can still play an audio file as in the original version, but perhaps other tests are required.
  • I haven't done any performance tests yet.
  • Some extra nodes should be developed, e.g. to convert the PCM chunks to MP3 chunks, because currently a lot of data is transferred to the browser !! I have already developed some stuff, but I don't have enough time to finish it all in a short period ...
  • And the most difficult challenge: convince Dave to update the audio-out node. Still have to figure out how to accomplish that :thinking:
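
To give an idea of the amount of data involved, a back-of-the-envelope calculation (my own numbers, using the 22050 Hz / 16-bit / mono settings from the flow above):

```javascript
// Rough estimate of the raw PCM data rate for the settings used in the
// flow above (22050 Hz sample rate, 16-bit samples, 1 channel).
var sampleRate = 22050;
var bitWidth = 16;
var channels = 1;

var bytesPerSecond = sampleRate * (bitWidth / 8) * channels;
console.log(bytesPerSecond);                  // 44100 bytes/s, i.e. ~43 kB/s

// Streaming for one hour moves roughly 159 MB to the browser,
// which is why converting the chunks to e.g. MP3 would help a lot.
var megabytesPerHour = bytesPerSecond * 3600 / (1000 * 1000);
console.log(megabytesPerHour.toFixed(1));     // "158.8"
```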

Shoot !!!
Bart


Well as long as it doesn’t break existing users then happy to look at and try a PR.

Thanks guys,

I'll do some tests and come back to you as soon as I can.

Glenn

Would a USB webcam work instead? I have one or two of those lying around & I've meant to try them out, just not got round to it.

Yes a USB webcam will work.


Hi Bart,

Struggling to load the wav-headers node as shown. Please advise?

Glenn

Hi Bart,

All good now and loaded.

Great work! The sound plays through a USB Camera. I have yet to test with the separate microphone and I am yet to test latency.

My comment so far is that it sounds a little jagged at times, as if bits of data are being missed. I have also had to reboot once because it stopped completely, but that may be a separate issue, I'm not sure. I will keep running trials and report back.

Thanks again

Glenn

Glenn,
You are welcome!

It would be nice if you could do some tests, because I have lots of other Node-RED developments on my todo list.

That was something I was worried about. Will need to investigate it...

Do you think the wav-headers node is good enough to publish on NPM?

And another question. How have you implemented my change of the dashboard audio-out node?

I have made no implementation change. All I have done is load the wav node, import your flow, stick two dashboard buttons on for playback and stop, and deploy.

Ah, that might explain the missing samples. In the discussion above you will see that I changed the dashboard code. Currently it plays one sample at a time; in my code the chunks of samples are continuously appended. I will do a pull request this evening so Dave can load my changes. Hopefully it works more smoothly for you then.

Look forward to it. Thanks

Hi Dave (@dceejay),
I would like to pass my pimped dashboard version to Glenn, so he can test whether the audio is streamed smoothly with my fix. Once all tests are completed, I will create a pull request.
What is the easiest way to do this? Can I pass him the content of my /dist folder?

Thanks !
Bart

That should do it - though I'm happy to look at a PR as soon as you are ready.
