[Update] node-red-contrib-voice2json (beta)

Continuing from [Announce] node-red-contrib-voice2json (beta)

Bump to 0.6.1

As I now have some more time on my hands again, the work continues :slight_smile:

Bump to 0.6.1 adds an important fix for a possible race condition on slower Pis like the 3 A+, where voice2json could prematurely close its stdin pipe before the node had finished writing to it, causing an uncaught exception that crashed Node-RED. This happened especially when voice2json only detected silence or hit a timeout.
So please do update from GitHub if you are using the transcribe-stream node.
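
For anyone curious, here is a minimal Node.js sketch of that failure mode and the kind of guard that prevents it. This is not the actual node code and the spawn arguments are only illustrative; it just shows how an early-closing stdin pipe turns into an uncaught EPIPE if nothing handles it:

```javascript
const { spawn } = require("child_process");

// Illustrative child process; the real node builds its own voice2json command line.
const child = spawn("voice2json", ["transcribe-stream"]);

// Without this handler an EPIPE emitted on stdin (child already gave up,
// e.g. after silence or a timeout) becomes an uncaught exception and takes
// down the whole Node-RED process.
child.stdin.on("error", (err) => {
    if (err.code !== "EPIPE") {
        throw err; // only swallow the expected "pipe already closed" case
    }
});

function writeAudioChunk(chunk) {
    // Also check the pipe is still open before each write to avoid racing
    // a child that is in the middle of shutting down.
    if (child.stdin.writable && !child.stdin.destroyed) {
        child.stdin.write(chunk);
    }
}
```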

Roadmap for the next few months:

  • add support for editing of profile.yml directly from the configuration screen
  • hopefully publish to npm at the end of the year

Hope everybody is well,
Johannes


Better sooner than later :raised_hands:

Update to 0.7.0

  • added a tab to the config node that lets you edit the profile.yml of a voice2json profile straight from Node-RED, for example to change your wake word
  • as profile.yml is YAML and Node-RED has Ace editor support, you even get syntax highlighting
  • some bug fixes and hardening of the file-saving process on deploy that should further prevent unwanted overwriting of files

I would be really happy if somebody could test this, as my capacity for testing it extensively myself is not huge right now :see_no_evil:

Johannes

Edit:

!!!important!!! An extra space somehow slipped into the config node's js file and effectively broke the node. It is fixed now, but if you pulled the update in the last few days, please update again!


Bump to 0.7.1

This update brings enhancements to the voice2json wait-wake wake-word node:

  • the node now supports a number of additional control messages (see the sketch after this list):
    • pause: pauses the wait-wake node, for example to implement a mute function without actually stopping the flow
    • forward & stop_forward: enable forwarding of audio, for example to the STT process, without a wake-word detection. A use case would be triggering speech recognition with a physical or UI button in addition to the wake word
  • the new control messages are fully compatible with the existing modes and can be combined
  • this enables a number of new use cases and makes the wait-wake node more flexible; for example, it simplifies building multi-step voice interactions after a single wake word
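
If you want to drive these from a dashboard or a GPIO button, a function node along these lines should do it. This is only a sketch: I am assuming the control keyword is sent as a plain string in msg.payload, and the topic names ("mute", "ptt_down", "ptt_up") are made up for the example, so please check the node's README for the exact property and values:

```javascript
// Map button events to wait-wake control messages.
const controls = {
    mute: "pause",            // mute wake-word detection without stopping the flow
    ptt_down: "forward",      // push-to-talk pressed: forward audio to the STT chain
    ptt_up: "stop_forward"    // push-to-talk released: stop forwarding again
};

const keyword = controls[msg.topic];
if (!keyword) {
    return null;              // ignore button events we don't know about
}

msg.payload = keyword;
return msg;                   // wire this output to the wait-wake node's input
```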

Johannes
