Streaming live audio data via MQTT

Hi @krambriw ,

Thanks for your reply.

Exactly! And when I test like the example below (in the first post), the result is OK.

That means if I do everything in Python, or everything in NR JavaScript, it is OK.
But if, for example, I send audio with NR and receive it with Python, it is not OK.
That confuses me.

Thank you for explaining it so well!

Experiment 1

From this we can conclude that Python does it very well, both for capturing the audio from the USB microphone and for playing that audio on your speaker. So Python somehow reassembles the audio chunks fine. Is there perhaps a large delay? That would indicate they use a buffer that is large enough. If not, they must be doing some good processing of the chunks.
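The buffering idea above can be sketched in a few lines of Python. This is a hypothetical illustration (the `JitterBuffer` class and its parameters are mine, not taken from any script in this thread): playback only starts once a few chunks have been queued, so normal network jitter no longer causes immediate underruns.

```python
from collections import deque

class JitterBuffer:
    """Minimal pre-buffering queue: playback only starts once enough
    chunks have accumulated to absorb network timing jitter.
    Hypothetical sketch, not taken from any specific library."""

    def __init__(self, prebuffer_chunks=4, chunk_bytes=2048):
        self.prebuffer_chunks = prebuffer_chunks
        self.chunk_bytes = chunk_bytes
        self.queue = deque()
        self.started = False

    def push(self, chunk):
        """Add an incoming network chunk."""
        self.queue.append(chunk)
        if not self.started and len(self.queue) >= self.prebuffer_chunks:
            self.started = True

    def pull(self):
        """Return the next chunk to play, or silence (zeros) on underrun."""
        if self.started and self.queue:
            return self.queue.popleft()
        return b"\x00" * self.chunk_bytes
```

The larger `prebuffer_chunks` is, the more jitter it hides, but the more end-to-end delay you hear, which matches the "large delay" observation above.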

So we need to know which of those two parts (capture or playback) fails in Node-RED, via two extra experiments.

Experiment 2

Perhaps you did this experiment already:

By doing this, we only replace the Python library (which captures the USB microphone audio) with the node-red-contrib-micropi node. If this works well, then we at least know that the micropi node is doing a good job.

Experiment 3

By doing this we only replace the Python library (which plays the audio on the speaker) with the ui-audio-out node. I don't expect this to give good audio quality, due to the web-audio issue that I described above....


Hi Bart,
Thank you for your time on my post!

I didn't use the "Add headers" node for experiment 2. I send the raw audio data directly to mqtt-out.

I tested all the experiments above.
The quality order is Experiment 1 > Experiment 3 > Experiment 2.

I also tested for this case. The audio is very clear.

Does that tell us anything? Does it mean the NR nodes are good enough for capturing and playing audio?

From your experiment (let's call it experiment 4) I conclude that the node-red-contrib-micropi node captures the audio very well, since it results in good audio after playback.
For some reason the dashboard audio-out ui node is able to play it rather well: I assume the chunks arrive in a fairly decent order, otherwise I cannot explain this. But that won't always be the case.

You could also do experiment 5:


Because it seems that MQTT introduces timing issues, which are just too much for the dashboard audio-out node to be able to play the stream correctly.
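One way to verify whether MQTT really delivers the chunks late or out of order is to prepend a small sequence-number header to every payload (similar in spirit to the "Add headers" node mentioned earlier), and check for gaps and reordering on the Python side. A hypothetical sketch; the 4-byte header format and the function names are my own assumptions, not part of any flow in this thread:

```python
import struct

SEQ_HDR = struct.Struct(">I")  # 4-byte big-endian sequence number

def pack_chunk(seq, pcm):
    """Prepend a sequence number to a raw PCM chunk before publishing."""
    return SEQ_HDR.pack(seq) + pcm

def unpack_chunk(payload):
    """Split a received payload back into (sequence number, PCM bytes)."""
    (seq,) = SEQ_HDR.unpack_from(payload)
    return seq, payload[SEQ_HDR.size:]

def find_gaps(received_seqs):
    """Report missing sequence numbers between the lowest and highest seen."""
    s = set(received_seqs)
    lo, hi = min(s), max(s)
    return [n for n in range(lo, hi + 1) if n not in s]
```

If `find_gaps` reports losses, or the sequence numbers arrive out of order, that would point at the MQTT transport (or QoS setting) rather than at the capture or playback nodes.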


Experiment 1: only Python & MQTT involved, no distortion added, best quality nbr 1

Experiment 2: Input from mic captured by NR adding distortion, then transferred via MQTT, Python at receiving end not able to "repair" distorted sound, quality nbr 3

Experiment 3: Input from mic captured by Python, then transferred via MQTT, NR at receiving adding distortion, quality nbr 2

Maybe use Python on both ends before transferring to NR -> speaker


Hey Walter,
that could indeed be a quick workaround.
But if nobody digs into this problem, we will keep getting the same issue until eternity.
Would be nice if we could have a complete Node-RED solution...
Because there are enough use cases for this.


We need a player that can handle infinite streams of audio chunks and play them seamlessly in sequence. But it should also support smoothing of steep edges when the next chunk doesn't arrive in time.
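For the "smoothing of steep edges" part, one common trick is a short linear fade-in at the start of a chunk that follows a gap, so the waveform ramps up instead of jumping. A minimal Python sketch for signed 16-bit PCM; the function name and ramp length are illustrative assumptions, and the `array` byte order follows the host machine (little-endian on a Raspberry Pi):

```python
from array import array

def fade_in(pcm, ramp_samples=64):
    """Apply a linear fade-in to the first `ramp_samples` samples of a
    chunk of signed 16-bit PCM, softening the 'click' that a steep
    edge produces after a silent gap. Illustrative sketch only."""
    samples = array("h")            # host-order signed 16-bit samples
    samples.frombytes(pcm)
    n = min(ramp_samples, len(samples))
    for i in range(n):
        samples[i] = samples[i] * i // n   # ramp gain from 0 up to ~1
    return samples.tobytes()
```

The same idea applied in reverse (a fade-out on the last samples before a known gap) would smooth the other edge.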

Because it takes too much time to develop something like that yourself. Some things I tried in the past:

  1. Mostly they refer to solutions where you schedule the chunks via the Web Audio API. But I still got way too much distortion.
  2. I tried a data source that continuously produced zero values, and then overwrote those zeros with real audio values when available. This was much better compared to solution 1, but still bad on Android.
  3. I haven't tried smoothing the edges, due to a lack of free time.
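Option 2 in the list above (a source that keeps emitting zeros and substitutes real audio whenever it is available) can be modelled with a small generator. Purely illustrative; the names are my own:

```python
from collections import deque

def constant_rate_source(queue, chunk_bytes=2048):
    """Infinite chunk source: yields real audio while the queue has data,
    and silence (zero bytes) on underrun, so the player never starves.
    Illustrative sketch of the 'zeros overwritten by real audio' idea."""
    silence = b"\x00" * chunk_bytes
    while True:
        yield queue.popleft() if queue else silence
```

The player consumes this generator at a fixed rate; underruns then produce silence (and a steep edge at the boundary) rather than stalling the player, which matches the behaviour described above.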

But I'm not sure if a library like e.g. pcm-player can solve our issue, because - when looking at the code - it only applies fading when being flushed (by default every second) but not in between (and steep edges can happen anywhere). But I haven't analyzed it in detail...

I tested for this case.
I found an important thing which you mentioned before: if I change the buffer size (frames_per_buffer) to the "proper value", the sound is smoother than with other values. And I got the same quality as Experiment 1.

Maybe the issue here is the "proper" buffer size?
But I've tried changing that value (1024 bytes, 2048 bytes, 4096 bytes, ...) for Experiment 2, and I couldn't get clear sound...

I assume frames_per_buffer is a setting of the Python player? And what do you mean by "proper value"?
A larger player buffer means that most segments will already be available by the time the player starts playing them. So fewer steep edges...
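The trade-off behind the "proper value" can be made concrete: frames_per_buffer directly determines how many milliseconds of audio one buffer holds (in PyAudio, one frame is one sample per channel). Larger buffers absorb more timing jitter but add latency. A quick helper, assuming 16-bit mono at 44.1 kHz unless stated otherwise (the function names are mine):

```python
def buffer_latency_ms(frames_per_buffer, sample_rate=44100):
    """Milliseconds of audio held by one buffer of `frames_per_buffer` frames."""
    return 1000.0 * frames_per_buffer / sample_rate

def buffer_size_bytes(frames_per_buffer, sample_width=2, channels=1):
    """Bytes occupied by one buffer (16-bit mono by default)."""
    return frames_per_buffer * sample_width * channels
```

At 44.1 kHz, 1024 frames is only about 23 ms of audio, while 4096 frames is about 93 ms; if MQTT delivery jitters by more than the buffered duration, underruns (and hence steep edges) are unavoidable no matter which value you pick.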

Isn't the link to the Python audio player/capture script missing? I don't see it.

Do you mean the Google Drive link? I just checked it, and it still works.

I tested some values of frames_per_buffer (of the Python player) and there is a value which reproduces the best audio quality.


Yes, I guess the Python player you are referring to is a script?

I just shared the sample audio only.

Python program is based on this document

And this is my python code

The online demo of the pcm-player works very well on my Android phone. So I will try to integrate it into Node-RED...


Thank you so much!

I'm trying to rewrite the Python code, especially around the number_of_frame_per_buffer and the processing time on the Raspi Zero.
Maybe it takes too much time to capture and/or play the audio, so it can't keep up with the speed of the incoming data. Eventually we lose audio data....

I will post an update if I get better results.

I don't know if you have seen this discussion. @tve has been kind enough to create a very basic PCM player node in Flexdash for us yesterday evening.

Flexdash is a new dashboard for Node-RED with a lot of potential. It is currently in alpha status, but it is already quite stable. You can run it in parallel with your old dashboard, so you can always give it a try when you have time. Constructive feedback is very welcome!!!

I did not have time yet to dig into the code of that player node, so perhaps it doesn't completely fit your use case....


I sincerely appreciate your kindness and support, and also for @tve's support too.
I will be sure to try it soon. I'm very new to Node-RED, and also to audio processing, so it takes me a lot of time to understand what you said...

Will try to explain it a bit in more detail:

You have been testing the audio-out node, which allows you to play an audio file in your dashboard (which you access via http(s)://your_hostname:1880/ui). But we know that this node is designed to play a single file, and it can produce a lot of noise if you start pushing a live stream of audio chunks to it.

So it would be nice if we could allow the dashboard to play such live streams.

However, the current official dashboard that you use is developed in AngularJS, which is not supported anymore (although the Node-RED dashboard itself is of course still nicely supported). Therefore Thorsten has developed a new dashboard (named Flexdash) based on VueJS. Moreover, Flexdash offers more features than the current dashboard.

Since I have no time to support two dashboards, I will now start focusing on Flexdash. So Thorsten has added (experimental) support for audio chunks in Flexdash.

So if you have time to install Flexdash, you could test that new feature...

