[Announce] node-red-contrib-voice2json (beta)

I can only spend a limited amount of time on getting it to run, so here is what I did and what my current state is:

  • I use Tasker to send my voice commands to the Raspberry Pi, using your example flow. The sox conversion to WAV works well... at least in my tests, where I wrote the file to the Raspberry Pi's SD card and played it back with VLC (http://puu.sh/G3e1I/d065b5d3a2.png). (A rough sketch of the conversion is below.)
  • I installed the DeepSpeech German profile package (following the official docs). "train-profile" worked, while "voice2json transcribe-stream" threw an "ALSA error", probably because my Raspberry Pi has no microphone (http://puu.sh/G3e2w/79a3b6a846.png). So I tried "voice2json transcribe-wav /home/pi/Downloads/test.wav", where test.wav is a WAV file I saved with the sox converter. This gives me the same error (http://puu.sh/G3e5q/55cf914844.png) as in my last post AND as when I use the sox-convert example, which I configured to use the DeepSpeech sentences.ini (the commands I ran are sketched below). I have to mention that I didn't add any slots in the slots tab... is that necessary at this stage of testing?
    I am very sorry if I made some dumb mistakes. I hope I'm getting closer to making it work :slight_smile:
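
For reference, this is roughly the conversion I mean. It is only a minimal sketch: I'm assuming Tasker delivers the recording as a file (the input name and format here are just placeholders) and that voice2json wants 16 kHz, 16-bit, mono WAV; the actual options in the example flow may differ:

```bash
# convert the recording sent by Tasker (placeholder name "input.wav") into
# the 16 kHz, 16-bit, mono WAV that voice2json expects
sox input.wav -r 16000 -b 16 -c 1 -e signed-integer /home/pi/Downloads/test.wav
```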
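
And these are the CLI steps I am testing with, again just a sketch of what I ran, assuming the German DeepSpeech profile is the default profile (otherwise add "--profile <path>"):

```bash
# re-train the profile after any change to sentences.ini or the slots
voice2json train-profile

# transcribe the WAV file saved from the sox conversion above;
# --debug should print more detail about where the error comes from
voice2json --debug transcribe-wav /home/pi/Downloads/test.wav
```

Regarding the slots question: as far as I understand, slot files are only needed once sentences.ini actually references them via $slot_name, so I assume an empty slots tab is fine for this kind of test, but please correct me if that's wrong.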