Sorry - I don't have a USB mic, so I can't help at that end of the pipe.
I expect that is the same way that the system knows to use a wifi adaptor for wifi whichever port you plug it into. It says, oh here is a wifi adaptor, I will use it for wifi. I assume that in the case of a mic it does a similar thing.
If you run, in a terminal
tail -f /var/log/syslog
and plug the mic in you should see it recognising it as a mic.
I am still not sure what is working and what isn't: are you able to record from the mic to a file? I suggest getting that working, then getting playback of the file working; after that you should be able to take the file out of the loop and pass the audio straight through, I believe.
Which version of Node-RED and Node.js are you using? Restart Node-RED and then, in a terminal, run node-red-log and copy/paste here what you see from the point where Node-RED starts.
Thanks Colin,
I'll park it for a couple of days and run all the mic tests as you suggest. It seems that if the setup is solid then the Node-RED bit should be fairly straightforward. If I have problems I'll come back to you.
In the meantime thank you all very much for your assistance.
I'm still really struggling with this. I have played the demo.wav file successfully several times, but for the life of me I cannot serve it up to the dashboard UI on my phone. I have the following configuration:
I've not used the microPi node so I'm just guessing from the words you have in the debug... I would guess you need to take the buffer part of the second output and feed that into the audio out node - so you would need to pass it through a change node to move only the buffer part to the payload (i.e. removing the metadata parts).
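Just guessing at the message shape, something like this in a function node might also do it (the property names here are only an illustration - check a debug node for what the micropi output really looks like):
// hypothetical example: assumes the second output sends a payload like
// { buffer: <raw audio Buffer>, metadata: {...} } - adjust the property
// names to whatever the debug node actually shows
msg.payload = msg.payload.buffer;
return msg;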
Hi dceejay,
Thank you for your assistance.
On your recommendation I am now able to receive sound through the dashboard UI. However, as I have a record and a stop button, it doesn't play the recorded piece until I press stop. I am unsure how to make it start playing as soon as I hit record, and stop the sound when I hit stop.
Thanks again
Glenn
Is it possible to play a live audio stream in Node-RED? I can now record and play back, but I don't seem to be able to play live.
Hi Glenn,
I have also been asking a rather similar question in the past, on the old forum. My question never got an answer... In my case the audio was a stream arriving from an IP camera, but I assume my audio fragments should travel through the Node-RED flow the same way as your audio?
So if somebody can point us in the right direction, that would be very appreciated !!
Bart
Node-RED by default works on chunks/packets of data. There are a few nodes that handle streams, but most don't. There is no reason that more couldn't be made to work with streams - including the audio out one - but that's just the way they are right now. I look forward to your pull request.
Thanks Bart & Dceejay. Yes I did follow your previous dead end search Bart, so thank you for confirming my suspicions.
I have put a feature request in and look forward to its deployment.
Glenn
For the moment a way to achieve it would be to work out how to do it from the command line then run that from an exec node.
Hi Colin,
That would indeed be enough, if somebody could let us know how to do it with third-party stuff. Then at least I would know in which direction I have to search, to integrate it better into Node-RED afterwards.
But I'm a complete noob when it comes to audio. I assume we should start with some kind of segmentation/fragmentation... (e.g. with ffmpeg?). So all feedback from anybody is welcome. At the moment I don't even know which keywords I should google for.
Bart
I don't know either but I would start by googling for
raspberry pi stream audio from microphone
That seems to give a good number of hits.
Hi Glenn,
I have 'very' little time at the moment, but I would like to experiment with audio in Node-RED in a couple of weeks. It would be nice if you could give some extra information about what you have currently achieved (and please export your flow and share it here).
-
I also don't have a microphone on my Raspberry. Which one are you using?
-
Had a quick look at node-red-contrib-micropi and I see that it supplies multiple nodes (?):
Can you also use the MicroPi node?
-
Seems both nodes return a raw audio stream. I assume that is a stream of infinite length?
-
You say "it doesn't play the recorded piece until I press stop". Do I interpret this correctly: the microphone node doesn't send any data on the wire (to the audio out node), until you stop the stream. Then it sends a wave file containing the whole recorded audio. Is that correct? Or does it sends continiously data to the audio out node, which buffers the data and starts playing only after you stop the stream? Summarized: which of both nodes buffers the data?
-
Can you give some more info about the data on the wire?
-
...
Thanks!
Bart
Evening Dave,
Had just a very quick look at both nodes used by Glenn. Looking at the code, I would expect that streaming is already implemented everywhere? It would be nice if you could indicate in which area you would expect the problem to be. I have no microphone available, so I cannot see what happens ...
-
The node-red-contrib-micropi node sends data chunks on its output:
audioStream.on('data', (data) => {
    if (!audioStream.isSilenced()) {
        node.send({status: 'data', payload: data});
    }
});
Unless there is a bug here, I would expect seeing here a continuous stream of data chunks over the wire...
-
The server-side Audio-Out node receives those buffers, and sends them to the client-side:
this.on('input', function(msg) {
    if (Buffer.isBuffer(msg.payload)) {
        ui.emit('ui-audio', { audio: msg.payload, tabname: node.tabname, always: node.always });
    }
});
-
The client-side Audio-Out node receives this buffer, decodes it and plays it:
events.on('ui-audio', function(msg) {
    ...
    audiocontext = audiocontext || new AudioContext();
    var source = audiocontext.createBufferSource();
    var buffer = new Uint8Array(msg.audio);
    audiocontext.decodeAudioData(buffer.buffer, function(buffer) {
        source.buffer = buffer;
        source.connect(audiocontext.destination);
        source.start(0);
    });
});
Thanks !
Bart
Must admit I don't know. Don't have a mic either so not tried that node. May just be a config problem he has.
No problem. I have ordered a microphone for raspberry from Aliexpress. That allows me to do an end-to-end test, and debug the whole scenario...
Two reasons why I bought it on Aliexpress:
- Good price.
- But more importantly: they only deliver in 3 weeks, so then at least I have some extra time to finish the Blockly node first
Hi Bart,
Apologies I have parked this for a while.
Flow is below.
Basically, if you hit record it records to a file, and then serves it up when you hit playback. It's the best I have achieved to date. The top node output (live raw data) does nothing.
I use the webcam microphone and I have also tried https://www.amazon.co.uk/gp/product/B0757JT9S7/ref=oh_aui_search_detailpage?ie=UTF8&psc=1
[{"id":"38f4fce9.a7a8c4","type":"microPi","z":"3604bb55.40a244","name":"microPi","filename":"/home/pi/audio/sound.wav","domain":"http://localhost:8989/getAudio","rate":"16000","bitwidth":"16","endian":"little","encoding":"signed-integer","channels":"1","silence":"5","debug":"true","mode":"666","x":380,"y":140,"wires":[[],["b83ead5b.13deb"],[]]},{"id":"14b5ec.4925ca15","type":"inject","z":"3604bb55.40a244","name":"record","topic":"","payload":"true","payloadType":"bool","repeat":"","crontab":"","once":false,"onceDelay":"","x":110,"y":120,"wires":[["38f4fce9.a7a8c4"]]},{"id":"7fb8dc82.8def04","type":"inject","z":"3604bb55.40a244","name":"stop","topic":"","payload":"false","payloadType":"bool","repeat":"","crontab":"","once":false,"x":110,"y":160,"wires":[["38f4fce9.a7a8c4"]]},{"id":"eb48aa20.6a6b28","type":"ui_template","z":"3604bb55.40a244","group":"f19ce9d9.8e1b38","name":"record","order":28,"width":"1","height":"1","format":"\n<md-button class=\"vibrate filled touched smallfont rounded\" style=\"background-color:#34495e\" ng-click=\"send({payload: 'Hello World'})\"> \n Start\n</md-button> \n\n","storeOutMessages":true,"fwdInMessages":true,"templateScope":"local","x":90,"y":80,"wires":[["7323cdff.622174"]]},{"id":"7323cdff.622174","type":"change","z":"3604bb55.40a244","name":"","rules":[{"t":"set","p":"payload","pt":"msg","to":"true","tot":"bool"}],"action":"","property":"","from":"","to":"","reg":false,"x":240,"y":80,"wires":[["38f4fce9.a7a8c4"]]},{"id":"aaf3e9b6.1d3cd8","type":"ui_template","z":"3604bb55.40a244","group":"f19ce9d9.8e1b38","name":"stop","order":28,"width":"1","height":"1","format":"\n<md-button class=\"vibrate filled touched smallfont rounded\" style=\"background-color:#34495e\" ng-click=\"send({payload: 'Hello World'})\"> \n Stop\n</md-button> \n\n","storeOutMessages":true,"fwdInMessages":true,"templateScope":"local","x":90,"y":200,"wires":[["2dda1822.24cb58"]]},{"id":"2dda1822.24cb58","type":"change","z":"3604bb55.40a244","name":"","rules":[{"t":"set","p":"payload","pt":"msg","to":"false","tot":"bool"}],"action":"","property":"","from":"","to":"","reg":false,"x":240,"y":200,"wires":[["38f4fce9.a7a8c4"]]},{"id":"b83ead5b.13deb","type":"change","z":"3604bb55.40a244","name":"","rules":[{"t":"change","p":"payload","pt":"msg","from":"metadata","fromt":"re","to":"","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":560,"y":140,"wires":[["3311e37f.8bbb2c"]]},{"id":"3311e37f.8bbb2c","type":"ui_audio","z":"3604bb55.40a244","name":"","group":"f19ce9d9.8e1b38","voice":"","always":true,"x":740,"y":140,"wires":[]},{"id":"f19ce9d9.8e1b38","type":"ui_group","z":"","name":"Devices","tab":"54f69592.154adc","order":2,"disp":false,"width":"6","collapse":false},{"id":"54f69592.154adc","type":"ui_tab","z":"","name":"Ratby Road","icon":"dashboard"}]
I have been doing a lot of experiments, and finally I have now a solution that seems to be working. Since these were my first steps into the world of audio signals, don't hesitate to let me know if something is not correct !
[{"id":"2852077b.af8aa8","type":"microphone","z":"279b8956.27dfe6","name":"microphone","endian":"little","bitwidth":"16","encoding":"signed-integer","channels":"1","rate":"22050","silence":"60","debug":false,"active":true,"x":950,"y":1220,"wires":[["c61c107a.970a2"]]},{"id":"c61c107a.970a2","type":"wav-headers","z":"279b8956.27dfe6","name":"","channels":1,"samplerate":22050,"bitwidth":16,"x":1136,"y":1220,"wires":[["7597ba6e.5adbb4"]]},{"id":"7597ba6e.5adbb4","type":"ui_audio","z":"279b8956.27dfe6","name":"","group":"180d570c.93b059","voice":"0","always":false,"x":1320,"y":1220,"wires":[]},{"id":"5482134.a222dec","type":"inject","z":"279b8956.27dfe6","name":"record","topic":"","payload":"true","payloadType":"bool","repeat":"","crontab":"","once":false,"onceDelay":"","x":770,"y":1200,"wires":[["2852077b.af8aa8"]]},{"id":"363b5caf.bf44c4","type":"inject","z":"279b8956.27dfe6","name":"stop","topic":"","payload":"false","payloadType":"bool","repeat":"","crontab":"","once":false,"x":770,"y":1240,"wires":[["2852077b.af8aa8"]]},{"id":"180d570c.93b059","type":"ui_group","z":"","name":"Devices","tab":"493cf398.76af9c","order":2,"disp":false,"width":"6"},{"id":"493cf398.76af9c","type":"ui_tab","z":"","name":"Ratby Road","icon":"dashboard"}]
I lost two evenings figuring out why I got no sound at all. In the end I read some reviews of my USB microphone: I had to scream loudly at a 2 inch distance from the microphone to be able to hear anything. So I wouldn't suggest anybody buy such a microphone
-
I use the 'microphone' node (from node-red-contrib-micropi) to get audio samples from the USB microphone on my Raspberry. That node gives a stream of data chunks containing raw audio samples (PCM). I don't use the MicroPi node (from the same node set), since that node - as Glenn already discovered - only delivers its data when you send a stop message!
Remark: those nodes keep on streaming audio when you (re)deploy your flow, which is annoying since you will end up with a mix of multiple streams. I created a pull request to stop the streams automatically at deploys.
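The idea behind that pull request is simply to stop the capture when the node is closed, which is what happens at every redeploy. Roughly something like this - a sketch of the idea only (it assumes the capture stream exposes a stop() method), not the literal code of the PR:
node.on('close', function(done) {
    // called by Node-RED when the flow is redeployed or the node is deleted
    if (audioStream) {
        audioStream.stop();   // assumption: the capture stream has a stop() method
        audioStream = null;
    }
    done();
});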
-
However, when you send such raw audio chunks to the audio-out node in the dashboard, the browser cannot decode the raw audio samples. It just doesn't know what all the bytes mean, so I had to develop node-red-contrib-wav-headers, which adds WAV headers to the raw audio data.
Remark: I haven't published this node on NPM yet, so you can install it currently like this:
npm install bartbutenaers/node-red-contrib-wav-headers
Remark: the readme file of this node contains an introduction to PCM, WAV, ... to get you started.
Remark: the disadvantage is that you have to enter the same settings in both the microphone node and the wav node. It would have been better if the microphone node were able to produce a WAV stream itself...
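For those wondering what 'adding WAV headers' boils down to: each chunk of raw PCM gets a 44 byte RIFF header in front of it describing the samples, so the browser knows how to interpret the bytes. A rough sketch of the idea (not the actual code of node-red-contrib-wav-headers), here for 16 bit mono audio at 22050 Hz:
// Sketch only: prepend a standard 44 byte WAV (RIFF) header to a Buffer of raw PCM
function addWavHeader(pcm, sampleRate, channels, bitsPerSample) {
    var byteRate   = sampleRate * channels * bitsPerSample / 8;
    var blockAlign = channels * bitsPerSample / 8;
    var header = Buffer.alloc(44);
    header.write('RIFF', 0);                      // RIFF chunk id
    header.writeUInt32LE(36 + pcm.length, 4);     // overall size minus 8
    header.write('WAVE', 8);
    header.write('fmt ', 12);                     // format sub-chunk
    header.writeUInt32LE(16, 16);                 // sub-chunk size (16 for PCM)
    header.writeUInt16LE(1, 20);                  // audio format 1 = PCM
    header.writeUInt16LE(channels, 22);
    header.writeUInt32LE(sampleRate, 24);
    header.writeUInt32LE(byteRate, 28);
    header.writeUInt16LE(blockAlign, 32);
    header.writeUInt16LE(bitsPerSample, 34);
    header.write('data', 36);                     // data sub-chunk
    header.writeUInt32LE(pcm.length, 40);         // number of PCM bytes that follow
    return Buffer.concat([header, pcm]);
}
// e.g. in a function node: msg.payload = addWavHeader(msg.payload, 22050, 1, 16); return msg;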
-
We now have WAV audio chunks which can be decoded by the browser. But the audio-out node currently can only play a single audio file:
try {
    audiocontext = audiocontext || new AudioContext();
    var source = audiocontext.createBufferSource();
    var buffer = new Uint8Array(msg.audio);
    audiocontext.decodeAudioData(buffer.buffer, function(buffer) {
        source.buffer = buffer;
        source.connect(audiocontext.destination);
        source.start(0);
    })
} catch(e) {
    alert("Error playing audio: "+e);
}
Indeed, decodeAudioData does not seem to be designed for a chunked audio stream, i.e. you need to feed it a complete audio file of finite length. But I found a workaround, and I changed the dashboard code like this:
try {
    audiocontext = audiocontext || new AudioContext();
    var source = audiocontext.createBufferSource();
    var buffer = new Uint8Array(msg.audio);
    audiocontext.decodeAudioData(buffer.buffer, function(audioBuffer) {
        audioStack.push(audioBuffer);
        while (audioStack.length) {
            var chunkBuffer = audioStack.shift();
            var source = audiocontext.createBufferSource();
            source.buffer = chunkBuffer;
            source.connect(audiocontext.destination);
            if (nextTime == 0) {
                nextTime = audiocontext.currentTime + 0.01;
            }
            source.start(nextTime);
            // Make the next buffer wait the length of the last buffer before being played
            nextTime += source.buffer.duration;
        }
    }, function(e) {
        console.log("Error decoding audio: "+e);
    });
} catch(e) {
    console.log("Error playing audio: "+e);
}
Remark: the audioStack variable is declared here:
app.controller('MainController', ['$mdSidenav', '$window', 'UiEvents', '$location', '$document', '$mdToast', '$mdDialog', '$rootScope', '$sce', '$timeout', '$scope',
    function ($mdSidenav, $window, events, $location, $document, $mdToast, $mdDialog, $rootScope, $sce, $timeout, $scope) {
        var audioStack = [];
Remark: I had to change the 'alert' in the catch to 'console.log', otherwise - in case the stream cannot be decoded - an error popup would be displayed for every chunk!
Remark: the inventor of this mechanism added a 50ms latency 'to work well across systems'. I assume a constant value is fine, but otherwise a number input field should be added to the audio-out node's config screen?
This was only a proof of concept, but - due to a lack of time - there are still some TODOs:
- Some extra tests would need to be done. It would be nice if somebody else could do this ... E.g. I did a basic test to check that the audio-out node can still play an audio file as in the original version, but perhaps other tests are required.
- I haven't done any performance tests yet.
- Some extra nodes should be developed, e.g. to convert the PCM chunks to MP3 chunks, because currently a lot of data is transferred to the browser!! I have already developed some stuff (one possible direction is sketched after this list), but I don't have enough time to finish it all in a short period of time ...
- And the most difficult challenge: convince Dave to update the audio-out node. Still have to figure out how to accomplish that
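For the MP3 conversion mentioned above, one possible direction - purely a sketch, assuming ffmpeg is installed on the Raspberry - would be to pipe the raw PCM chunks through ffmpeg and forward the MP3 chunks it produces:
const { spawn } = require('child_process');

// Sketch only: feed raw 16 bit little-endian mono PCM at 22050 Hz into ffmpeg
// and read MP3 data back from its stdout. The settings must match the microphone node.
const ffmpeg = spawn('ffmpeg', [
    '-f', 's16le', '-ar', '22050', '-ac', '1', '-i', 'pipe:0',   // raw PCM arrives on stdin
    '-f', 'mp3', 'pipe:1'                                         // MP3 leaves on stdout
]);

ffmpeg.stdout.on('data', (mp3Chunk) => {
    // in a custom node you would forward each chunk with node.send({payload: mp3Chunk})
    console.log('got ' + mp3Chunk.length + ' bytes of MP3');
});

// and write every incoming PCM chunk (msg.payload) to ffmpeg's stdin:
// ffmpeg.stdin.write(pcmChunk);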
Shoot !!!
Bart
Well as long as it doesn't break existing users then happy to look at and try a PR.