Ways to control Node-RED with speech commands

A lot has changed, and I think I have found the perfect speech companion to Node-RED: voice2json.


It is an open-source project by the same person who also makes the Rhasspy assistant (https://rhasspy.readthedocs.io/en/latest/).
It includes most of the core features of the latter, but in a stripped-down form as a command-line tool.
Installation couldn’t be any easier, as there are prebuilt .deb packages for download. There is already support for many languages, and all you have to do is download a profile for the language you want.
Many of the languages support Kaldi models from the great https://zamia.org/ project. Normally it is a huge pain to adapt a Kaldi model to your own domain-specific language model, but voice2json takes all of this away and makes it a really easy and straightforward process.
As Kaldi can achieve faster-than-realtime performance on a Raspberry Pi 4, especially if it is slightly overclocked, and is far more accurate than Pocketsphinx, this is awesome news.
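
For anyone curious, the whole setup looks roughly like this; the file names and the profile directory are placeholders on my side, the exact ones are in the voice2json docs for your language:

```bash
# Install the prebuilt .deb package (file name depends on version and architecture)
sudo apt install ./voice2json_<version>_<arch>.deb

# Unpack a downloaded language profile (here: an English Kaldi profile)
# into voice2json's profile directory
mkdir -p "$HOME/.config/voice2json"
tar -C "$HOME/.config/voice2json" -xzf en-us_kaldi-zamia-<version>.tar.gz

# Train the profile; re-run this whenever you change your intent templates
voice2json train-profile
```
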
You can create your intents by writing them in a very easy-to-understand template language based on the JSGF grammar format.
It took me less than two hours to move all the intents from my Pocketsphinx language model to this template language.
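
To give an idea, here is a small made-up example in that syntax (intent names and phrases are just illustrations): `[Intent]` headers group the sentences, `(a | b)` are alternatives, `[word]` is optional, `{tag}` marks the slot value to extract, and `name = ...` / `<name>` define and reuse fragments.

```ini
[ChangeLightState]
light_name = (living room lamp | kitchen light){name}
state = (on | off){state}
turn <state> [the] <light_name>

[GetTemperature]
whats the temperature
how (warm | cold) is it [right now]
```
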
I’m amazed how well this tool works out of the box.
The best part: it not only does speech-to-text but also includes a tool for intent recognition that parses the intent out of your command. Because it is all command-line based, you can easily integrate it using the exec node, and as the name suggests it outputs all results as JSON, so it is very easy to work with in Node-RED.
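
As a rough sketch of that integration (the file name is made up and the JSON is trimmed to the interesting fields): an exec node can run a pipeline like the one below, a json node parses the stdout, and a switch node can then route on the intent name.

```bash
# Transcribe a recorded command and turn the transcription into an intent
voice2json transcribe-wav < turn-on-the-light.wav | voice2json recognize-intent

# stdout is a single line of JSON, roughly of this shape:
# {"text": "turn on the living room lamp",
#  "intent": {"name": "ChangeLightState", "confidence": 1.0},
#  "slots": {"state": "on", "name": "living room lamp"}}
```
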
The documentation is great.
So to sum it up: this is the easiest to install and use fully offline Linux speech-to-text/intent solution I have tried, and I recommend everybody go try it.

Stay healthy, Johannes
