From this we can conclude that Python handles both parts very well: capturing the audio from the USB microphone and playing that audio on your speaker. So Python somehow reassembles the audio chunks fine. Is there perhaps a large delay? That would indicate that they use a buffer that is large enough; if not, they must be doing some clever processing of the chunks.
So we need to know which of those two parts (capture or playback) fails in Node-RED, via two extra experiments.
By doing this, we only replace the Python library (which captures the USB microphone audio) with the node-red-contrib-micropi node. If this works well, then we at least know that the micropi node is doing a good job.
By doing this we only replace the Python library (which plays the audio on the speaker) with the ui-audio-out node. I don't expect this to give good audio quality, due to the web-audio issue that I described above...
From your experiment (let's call it experiment 4) I conclude that the node-red-contrib-micropi node captures the audio very well, since it results in good audio after playback.
For some reason the dashboard audio-out ui node is able to play it rather well: I assume the chunks arrive in a fairly decent order, otherwise I cannot explain this. But that won't always be the case.
You could also do experiment 5:
Because it seems that MQTT introduces timing issues, which are just too much for the dashboard audio-out node to be able to play the stream correctly.
That could indeed be a quick workaround.
But if nobody digs into this problem, we will keep getting the same issue until eternity.
Would be nice if we could have a complete Node-RED solution...
Because there are enough use cases for this.
We need a player that can handle infinite streams of audio chunks, and can play them seamlessly in sequence. But it should also support smoothing of steep edges when the next chunk does not arrive in time.
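To illustrate what I mean by smoothing a steep edge, here is a tiny Python sketch (purely illustrative, not a real audio API; the sample values and ramp length are made up): when a late chunk arrives after the player has been outputting silence, a short linear fade-in at the chunk boundary avoids the sudden jump in amplitude that we hear as a click.

```python
# Sketch: soften the "steep edge" of a chunk that arrives after silence
# by applying a short linear fade-in at its start. Samples are plain ints
# here; a real player would work on PCM buffers.

FADE_SAMPLES = 64  # length of the fade ramp (at 44.1 kHz this is ~1.5 ms)

def fade_in(chunk):
    """Return a copy of chunk with a linear fade-in over its first samples."""
    faded = list(chunk)
    n = min(FADE_SAMPLES, len(faded))
    for i in range(n):
        faded[i] = int(faded[i] * i / n)  # ramp from 0 up to the real value
    return faded

# A chunk that would otherwise start with a steep edge after silence:
chunk = [10000] * 256
smoothed = fade_in(chunk)
print(smoothed[0], smoothed[32], smoothed[200])  # 0 5000 10000
```

The same idea applies symmetrically: a fade-out at the end of the last chunk before an underrun would soften the other edge.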
Because it takes too much time to develop something yourself. Some things I tried in the past:
Mostly they refer to solutions where you schedule the chunks via the Web Audio API. But I still got way too much distortion.
Tried a data source that continuously produced zero values, and then overwrote those zeros with real audio values when available. This was much better compared to solution 1, but still bad on Android.
I haven't tried smoothing the edges due to a lack of free time.
But I'm not sure if a library like e.g. pcm-player can solve our issue, because - when looking at the code - it only applies fading when being flushed (by default every second) but not in between (and steep edges can happen anywhere). But I haven't analyzed it in detail...
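The zero-fill idea from the second attempt above can be sketched like this in Python (my reconstruction of the concept, not the actual code; the chunk size and class are invented for the example): the player always pulls a chunk, and when no real audio has arrived in time it gets silence instead of stalling.

```python
from collections import deque

CHUNK = 4  # tiny chunk size, just for the example

class ZeroFillBuffer:
    """Buffer that hands out silence (zeros) whenever real audio is late."""

    def __init__(self):
        self.pending = deque()

    def push(self, chunk):
        # Real audio arriving from the network overwrites the silence.
        self.pending.append(chunk)

    def pull(self):
        # The player calls this at a fixed rate and always gets something.
        if self.pending:
            return self.pending.popleft()
        return [0] * CHUNK  # underrun: play silence instead of glitching

buf = ZeroFillBuffer()
buf.push([1, 2, 3, 4])
print(buf.pull())  # [1, 2, 3, 4]
print(buf.pull())  # [0, 0, 0, 0]  (no data arrived in time)
```

Note that this alone still produces steep edges at the silence/audio boundaries, which is exactly where the smoothing discussed above would be needed.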
I tested for this case.
I found an important thing which you said before: if I change the buffer size (frames_per_buffer) to the "proper value", the sound is smoother than with other values. And I got the same quality as in Experiment 1.
Maybe the issue here is the "proper" buffer size?
I changed that value (1024 bytes, 2048 bytes, 4096 bytes, ...) for Experiment 2, but I couldn't get clear sound...
I assume frames_per_buffer is a setting of the Python player? And what do you mean by "proper value"?
A larger player buffer means that in most cases all segments will already be available by the time the player starts playing them. So fewer steep edges...
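To put some numbers on that: frames_per_buffer is expressed in frames, so how much audio time one buffer represents depends on the sample rate. Assuming 44.1 kHz audio (an assumption; your stream may use a different rate), the trade-off is that a larger buffer adds latency but gives late chunks more margin:

```python
RATE = 44100  # samples per second (assumed; adjust to your stream)

def buffer_ms(frames_per_buffer, rate=RATE):
    """Duration of one buffer in milliseconds of audio."""
    return 1000.0 * frames_per_buffer / rate

for fpb in (1024, 2048, 4096):
    print(f"{fpb} frames -> {buffer_ms(fpb):.1f} ms of audio")
# 1024 frames -> 23.2 ms, 2048 -> 46.4 ms, 4096 -> 92.9 ms
```

So quadrupling the buffer buys roughly 70 ms of extra slack per buffer, at the cost of that much extra delay before playback starts.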
I'm trying to rewrite the Python code, especially the number_of_frame_per_buffer and the processing time on the Raspi Zero.
Maybe it takes too much time to capture and/or play the audio, so it can't keep up with the speed of the incoming data. Finally we lose audio data...
I will update the information if I have a better result.
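As a tiny illustration of that failure mode (the queue capacity and chunk ids below are made up for the example): when processing is slower than the network, received chunks pile up in a bounded queue, and once it is full the oldest ones are dropped.

```python
from collections import deque

MAX_QUEUED = 3                       # assumed queue capacity
incoming = deque(maxlen=MAX_QUEUED)  # a bounded deque drops the oldest item

# The network delivers 5 chunks while the slow player processes none:
for chunk_id in range(5):
    incoming.append(chunk_id)

print(list(incoming))  # [2, 3, 4] -> chunks 0 and 1 were lost
```

That is consistent with what you hear: the audio keeps going, but pieces of it are simply missing.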
Don't know if you have seen this discussion. @tve has been so kind to create a very basic pcm player node in Flexdash for us yesterday evening.
Flexdash is a new dashboard for Node-RED with a lot of potential. It is currently in alpha status, but it is already quite stable. You can run it in parallel to your old dashboard, so you can always give it a try if you have time. Constructive feedback is very welcome!!!
Did not have time yet to dig into the code of that player node, so perhaps it doesn't completely fit your use case...
I sincerely appreciate your kindness and support, and also for @tve's support too.
I will be sure to try it soon. I'm very new to Node-RED, and also to audio processing, so it takes me a lot of time to understand what you said...
You have been testing the audio-out node, which allows you to play an audio file in your dashboard (which you access via http(s)://your_hostname:1880/ui). But we know that this node is designed to play a single file, and can produce a lot of noise if you start pushing a live stream of audio chunks to it.
So would be nice if we could allow the dashboard to play such live streams.
However, the current official dashboard that you use is developed in AngularJS, which is not supported anymore (although of course the Node-RED dashboard itself is still nicely supported). Therefore Thorsten has developed a new dashboard (named Flexdash) based on VueJS. Moreover, Flexdash offers more features than the current dashboard.
Since I have no time to support two dashboards, I will now start focusing on Flexdash. So Thorsten has added (experimental) support for audio chunks in Flexdash.
So if you have time to install Flexdash, you could test that new feature...