Creating a node for sending / receiving data via sound card!

Hello fellow members, after a whole night of thinking, first I want to thank @cinhcet for explaining to me why Quiet-bundle won't work for this node!

Node idea:
The nodes should function exactly like this webpage!
Okay, so there should be two nodes (obviously): one for receiving and one for sending the audio signal.
They should have several settings, for example frequency, frame length, gain, etc.

I saw some Python examples; would it be okay to execute a Python script from the node itself?
If anyone has any idea please list it, I would be happy to discuss it!

Executing a Python script from Node-RED is certainly possible.
So if you have a Python script that does everything you want, the easiest solution is probably to just use that. Of course, depending on the data you want to transfer, you have to figure out how to pass the data to the Python script.

I will probably use arguments to pass certain data, and the Python script will pick them up.
Example:

python send.py "Message to send" "44100" "xxx" ...
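
In the node's runtime code, such a call could be made with Node.js's built-in child_process module. A minimal sketch, assuming a send.py taking the message text and sample rate as in the example above (everything else here is illustrative):

    // inside a custom node's input handler; names are illustrative
    const { spawn } = require('child_process');

    node.on('input', function(msg) {
        // pass the message text and the sample rate as command-line arguments
        const py = spawn('python', ['send.py', String(msg.payload), '44100']);

        py.stderr.on('data', (data) => node.warn(data.toString()));
        py.on('close', (code) => node.send({ payload: { exitCode: code } }));
    });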

If you want to see how working with a Node.js child process looks in a Node-RED contrib node that executes processes with audio output/input, and what an interface to those processes with all those audio settings in a node's UI could look like, have a look at my sox record and play node that is in beta right now, node here:

It should be pretty similar in regard to some strategies and UI elements, although it is of course made to work on an ALSA/SoX level and not with something like PyAudio.
It includes things like dynamically populated dropdowns for the available sound cards and sub-devices, and different output/input formats like file paths or buffer data in payloads.
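
For reference, the usual pattern for such dynamic dropdowns is an admin endpoint in the runtime plus a lookup in oneditprepare. This is just a sketch of the general approach, not the actual code of that node; the endpoint path and permission name are illustrative:

    // runtime (.js): expose the available devices to the editor
    const { execFile } = require('child_process');

    RED.httpAdmin.get('/my-audio-node/devices', RED.auth.needsPermission('flows.read'), function(req, res) {
        // `arecord -L` lists the ALSA PCM device names (the lines without leading spaces)
        execFile('arecord', ['-L'], (err, stdout) => {
            res.json(err ? [] : stdout.split('\n').filter((l) => l && !l.startsWith(' ')));
        });
    });

    // editor (.html): fill the dropdown when the edit dialog opens
    oneditprepare: function() {
        $.getJSON('my-audio-node/devices', (devices) => {
            devices.forEach((d) => {
                $('#node-input-device').append($('<option>').val(d).text(d));
            });
        });
    }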

Maybe there is some good inspiration there for you, Johannes


I am stuck, does anyone want to help? :smiley:

I'm not sure if I understood the intention of your post or not.
Are you suggesting this idea hoping that someone will develop the expected node, or are you volunteering to develop the node yourself and want the community to help you on this journey?
In case you want to develop it, I suggest you start with this tutorial and make your first nodes by implementing some simple functionality (like simple math functions). After that you can go for some integrations with external libraries and then finally try to integrate with Quiet.js.
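For reference, the skeleton from that tutorial boils down to something like this (here with a trivial "double the payload" node as the simple math example; the node name is illustrative):

    // double-payload.js, registered via the "node-red" section of package.json
    module.exports = function(RED) {
        function DoublePayloadNode(config) {
            RED.nodes.createNode(this, config);
            const node = this;
            node.on('input', function(msg) {
                msg.payload = Number(msg.payload) * 2;  // the simple math function
                node.send(msg);
            });
        }
        RED.nodes.registerType('double-payload', DoublePayloadNode);
    };
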
In case you have any questions or problems during the journey, don't worry, we will be here to support you.


Hi Tiago,
I have already been developing a node together with Vukmirovic, so I assume he will continue to become a hardcore Node-RED node developer :wink:

@lizzardguki: Can you please explain a bit more why you want to use Python? I see that there is no decent npm package available. But I see that quiet-js is a JavaScript wrapper around libquiet, which means the C/C++ has been compiled by Emscripten to asm.js or WebAssembly (wasm) to be able to run in a browser. Since Node.js version 8, wasm should be supported (without experimental command-line options). But from the quiet.js GitHub readme it is not clear to me at first sight whether they use asm.js or wasm...
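For what it's worth, if it turned out to be wasm, loading it in Node.js would look roughly like this (the file name is hypothetical, and the imports depend on what the module expects):

    const fs = require('fs');

    // WebAssembly has been available as a global in Node.js since version 8
    const bytes = fs.readFileSync('./quiet.wasm');  // hypothetical file name
    WebAssembly.instantiate(bytes, { /* imports expected by the module */ })
        .then(({ instance }) => {
            // instance.exports would then expose the compiled libquiet functions
        });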

Anyway, I would expect you to be able to do this directly via JavaScript, so I'm not sure why you would start with Python?
Bart


When looking at the quiet-emscripten.js file, it seems to be asm.js:

[screenshot of the quiet-emscripten.js source]

Since asm.js is a subset of JavaScript (which means it is plain JavaScript, just not with all features of the language), I assume you should be able to call it from your custom node on the server side in Node.js as well...

But if you look at the quiet.js wrapper code, it is pretty low-level stuff. For example:

var opt = Module.ccall('quiet_encoder_profile_str', 'pointer', ['array', 'array'], [c_profiles, c_profile]);

Personally I would not try to go that way, because you won't get much help on this forum anymore :wink:
So my advice is: use the high-level functions from the quiet.js wrapper itself ...
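For comparison, the high-level browser API looks roughly like this, going from memory of the quiet.js readme (verify the exact names there):

    // transmit and receive with the high-level quiet.js wrapper (names from memory)
    Quiet.addReadyCallback(function() {
        const transmit = Quiet.transmitter({ profile: 'audible' });
        transmit.transmit(Quiet.str2ab('Hello from Node-RED'));

        Quiet.receiver({
            profile: 'audible',
            onReceive: (payload) => console.log(Quiet.ab2str(payload))
        });
    });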

The problem is that quiet.js makes use of the audio API of a browser. Therefore, one cannot use the high-level functions from quiet.js directly. Since he said in the beginning that it is already working with Python, the easiest solution would be to just use that Python script.
The "clean" solution would be to port quiet.js to use an audio API that is available in Node.js (haven't you done something like that, Bart?)


Hi @cinhcet, thanks for the extra information! I have no clue how I would do that, so I'm pretty sure I have no existing code snippet available in my pocket :wink:

I had in mind that you had written a node that plays sound or records from a microphone, but I might be mixing that up :slight_smile:


I have developed some nodes, check out my GitHub.
But all of those nodes were quite simple to build: require an npm package and just call its functions.

On the other hand, quiet-js is different, so I think calling a Python version would be easier than editing the whole of quiet-js to output a sound buffer and then calling some audio API to play that buffer from within the node.
Don't get me started on how complex (at least for me) the listen node would be!

I am just a little nervous because the Python script hasn't been updated for several years now, and I can't get it to work on my machine.

Have you had a look at minimodem? As it is a command-line tool, it should be pretty easy to build a wrapper node around it, although it does the data transfer in the audible range.
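A wrapper could be as simple as piping the payload through minimodem's stdin/stdout; a rough sketch, with baudmode 300 as an example (flags as documented in the minimodem man page):

    const { spawn } = require('child_process');

    // transmit: `minimodem --tx {baudmode}` reads the text to send from stdin
    const tx = spawn('minimodem', ['--tx', '300']);
    tx.stdin.end(msg.payload);

    // receive: `minimodem --rx {baudmode}` writes decoded text to stdout
    const rx = spawn('minimodem', ['--rx', '300']);
    rx.stdout.on('data', (data) => node.send({ payload: data.toString() }));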


Yeah, I am currently testing it :smiley:


As it accepts a source/destination argument for ALSA, you could use something like the device dropdown I built for my sox node. The biggest disadvantage would be that you are limiting your nodes to Linux if you go down this road.
If you don't like the minimodem way I might have a go at it :see_no_evil: but you definitely have dibs :+1:


OK, I had a good play with two Raspberry Pis and minimodem, one with a speaker and the other with a microphone. I don't know what I would ever use it for, but it sure is fun to send data from a speaker across a room and receive it with a microphone on another Raspberry, all just with sound.
After a first evaluation, a minimal version of a node around it would have to support (a possible flag mapping is sketched after the list):

  • Baud Rate
  • Input/Output Device / optional write to / read from WAV file
  • Output Type: Raw Buffer / String
  • Confidence Threshold
  • Quiet / Verbose Mode
  • Volume
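
A possible mapping of those settings onto minimodem arguments, as a sketch (flag names as I remember them from the man page, so verify with minimodem --help):

    // build the minimodem argument list from a hypothetical node config object
    function buildArgs(config) {
        const args = [config.mode === 'tx' ? '--tx' : '--rx'];
        args.push('--confidence', String(config.confidence));  // confidence threshold
        args.push('--volume', String(config.volume));          // output amplitude
        if (config.quiet) args.push('--quiet');                // suppress stats output
        if (config.file)  args.push('--file', config.file);    // read/write an audio file instead of a device
        args.push(String(config.baud));                        // baudmode, e.g. 300 or 1200
        return args;
    }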

I was going to suggest checking out Chirp, as it could have been a nice alternative with support for Arduino as well, but sadly it seems to have been acquired by Sonos earlier this year: https://blog.adafruit.com/2020/02/13/chirp-has-been-acquired-by-sonos-chirp/

The SDK still seems to be on GitHub, but there's not much point if it's been discontinued. Damn, these corporate acquisitions are depressing every time I encounter them...


Interestingly, there seems to be a video demonstrating Arduino communication on the minimodem site, but the linked blog post doesn't exist anymore.


Yes, unfortunately all the projects that existed on Arduino for Bell-type-202 communication seem to be dead :slightly_frowning_face:
Yes, Sonos is high on my don't-buy list ever since they bought and dismantled Snips. Although the death of Snips sparked my interest in bootstrapping my own voice assistant with Node-RED, and eventually led to my cooperation with @BartButenaers and my first own nodes in development.
So maybe I should thank them :laughing:
