[Announce] node-red-contrib-voice2json (beta)

Hi,

Sounds great, especially the fact that it starts transcribing right away :sunglasses:
Are there transient (partial) results as well? ^^
I guess this does not require the WAV header, right? (I have both available, just in case.)

I think for SEPIA I should still access the voice2json endpoint directly though ... but maybe I could offer a SEPIA node that just streams the audio buffer from any SEPIA client, which could then be connected to the v2j node (for whatever ideas come up) :slight_smile:

When you talk about similar software I expect you mean something like Dragon NaturallySpeaking. That is not how voice2json works. Most modern speech recognition systems like Kaldi or DeepSpeech work by training both an acoustic model and a statistical language model on a large corpus of audio plus transcriptions of that audio, using corpora like the one from the Mozilla Common Voice project or similar.
When you train voice2json, only the statistical language model gets trained, on your sentences and slots. The acoustic model doesn’t get touched and shouldn’t need to be, so apart from the sentences you defined no further input is needed.
For more details, and also some of the limitations of the approach projects like voice2json take, please do read the section about how the transcription works in the documentation of the nodes, and for much more detail the linked white paper by Mike about the voice2json workflow.
So the training is in principle speaker independent, but your results may vary depending on the model used, as it always depends on how well all genders, ages and accents were represented in the corpora used to train the acoustic model.
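To make the "sentences and slots" part concrete: what you feed the training step is essentially an ini-style template file. A rough sketch (the intent name, slot values and tag names here are made up; see the voice2json documentation for the exact template syntax):

```ini
[ChangeLightState]
light_name = (living room lamp | kitchen light){name}
turn (on | off){state} [the] <light_name>
```

Only the grammar and language model built from these templates get retrained; the acoustic model stays as shipped.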

For USB microphones it really depends on your budget. You can achieve good results with some of the cheap USB conference microphones, but they often don’t have the best signal-to-noise ratio or far-field capabilities.
Then there is the ReSpeaker Mic Array v2 / USB mic array, which gives far superior results and has built-in LEDs that can be controlled with some simple Python scripts, but it is more expensive than a Pi 4 by itself.

Johannes


No, only once a finished command has been determined.

You would have to build your own endpoint service, as voice2json by itself really only offers the separate command-line tools to bootstrap an application and will always need the user to build the connecting infrastructure around them. Which is one of the reasons I started the work on the nodes: to make this easily doable with Node-RED.

Why don’t you offer a WebSocket or an MQTT topic to subscribe to to receive the audio, like Rhasspy or Snips used to do? Very easy to connect from something like Node-RED, and no extra node needed.
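Either transport comes down to the same payload handling: cut the raw PCM stream into fixed-size frames and publish one frame per WebSocket/MQTT message. A minimal stdlib-only sketch of the framing part (the 30 ms frame size and 16 kHz/16-bit format are assumptions; the actual transport is left out):

```python
def chunk_pcm(pcm: bytes, frame_ms: int = 30, rate: int = 16000, width: int = 2) -> list:
    """Split raw mono PCM into fixed-size frames, e.g. one per MQTT message."""
    frame_bytes = rate * width * frame_ms // 1000  # 960 bytes per 30 ms frame
    return [pcm[i:i + frame_bytes] for i in range(0, len(pcm), frame_bytes)]

# one second of silence standing in for microphone audio
frames = chunk_pcm(b"\x00\x00" * 16000)
print(len(frames))  # 33 full 960-byte frames plus a short tail
```

The receiving end just concatenates frames back together before handing them to the transcriber.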

Johannes

I've seen your node code and I think I could adapt the SEPIA STT server to use those terminal commands instead of the Kaldi ones. Let's see :slight_smile:

Actually that is what the SEPIA STT server is using. To be more precise, it sends the audio buffer via a socket and waits for results on the same channel.

Well, I'm not making much progress just trying to run voice2json with my USB mic. It calls arecord, but arecord fails with my mic (a Shure MV5) with the parameters it sends, in particular -c 1. If I remove that, arecord records fine. I've opened an issue on the voice2json GitHub page and I'll see what synesthesiam has to say.

Hmm, the 1-channel mono format is what speech transcription systems like Kaldi use, so Mike will not have much choice there. You can change the command it calls in the profile.yml, I think. If sox works with your mic you could use that instead to be called by voice2json, or use the stdin argument and pipe the audio in directly using Unix pipes.

PS:
Or use the voice2json nodes and the sox convert node to convert to the right format after recording, if your mic doesn’t accept a mono setting straight away while recording :wink:

OK, I bought the "ReSpeaker 4 Mic Array", but not the v2.0. This one: https://respeaker.io/4_mic_array/
Recording works, both with Audacity and with "arecord -Dac108 -f S32_LE -r 16000 -c 4 a.wav" / "aplay a.wav". The loudness could be better; is there a way to boost it?
Now I am struggling with the integration:
A custom wake word is what I want, so I ended up with an error during the "Source" installation here:
source
ERROR: Could not build wheels for scipy which use PEP 517 and cannot be installed directly
Log:

pi@raspi4B:~/mycroft-precise $ sudo ./setup.sh
Reading package lists... Done
Building dependency tree       
Reading state information... Done
curl is already the newest version (7.64.0-4+deb10u1).
cython is already the newest version (0.29.2-2).
libatlas-base-dev is already the newest version (3.10.3-8+rpi1).
libhdf5-dev is already the newest version (1.10.4+repack-10).
libopenblas-dev is already the newest version (0.3.5+ds-3+rpi1).
libpulse-dev is already the newest version (12.2-4+deb10u1).
portaudio19-dev is already the newest version (19.6.0-1).
python3-h5py is already the newest version (2.8.0-3).
python3-scipy is already the newest version (1.1.0-7).
swig is already the newest version (3.0.12-2).
python3-pip is already the newest version (18.1-5+rpt1).
0 upgraded, 0 newly installed, 0 to remove and 5 not upgraded.
Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
Obtaining file:///home/pi/mycroft-precise/runner
Requirement already satisfied: pyaudio in ./.venv/lib/python3.7/site-packages (from precise-runner==0.3.1) (0.2.11)
Installing collected packages: precise-runner
  Attempting uninstall: precise-runner
    Found existing installation: precise-runner 0.3.1
    Uninstalling precise-runner-0.3.1:
      Successfully uninstalled precise-runner-0.3.1
  Running setup.py develop for precise-runner
Successfully installed precise-runner
Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
Obtaining file:///home/pi/mycroft-precise
Collecting numpy==1.16
  Using cached https://www.piwheels.org/simple/numpy/numpy-1.16.0-cp37-cp37m-linux_armv7l.whl (7.4 MB)
Collecting tensorflow<1.14,>=1.13
  Using cached https://www.piwheels.org/simple/tensorflow/tensorflow-1.13.1-cp37-none-linux_armv7l.whl (93.2 MB)
Collecting sonopy
  Using cached https://www.piwheels.org/simple/sonopy/sonopy-0.1.2-py3-none-any.whl (2.9 kB)
Requirement already satisfied: pyaudio in ./.venv/lib/python3.7/site-packages (from mycroft-precise==0.3.0) (0.2.11)
Collecting keras<=2.1.5
  Using cached Keras-2.1.5-py2.py3-none-any.whl (334 kB)
Collecting h5py
  Using cached https://www.piwheels.org/simple/h5py/h5py-2.10.0-cp37-cp37m-linux_armv7l.whl (4.7 MB)
Collecting wavio
  Using cached wavio-0.0.4-py2.py3-none-any.whl (9.0 kB)
Collecting typing
  Using cached https://www.piwheels.org/simple/typing/typing-3.7.4.3-py3-none-any.whl (28 kB)
Collecting prettyparse>=1.1.0
  Using cached https://www.piwheels.org/simple/prettyparse/prettyparse-1.1.0-py3-none-any.whl (3.7 kB)
Requirement already satisfied: precise-runner in ./runner (from mycroft-precise==0.3.0) (0.3.1)
Collecting attrs
  Using cached attrs-19.3.0-py2.py3-none-any.whl (39 kB)
Collecting fitipy<1.0
  Using cached https://www.piwheels.org/simple/fitipy/fitipy-0.1.2-py3-none-any.whl (1.9 kB)
Collecting speechpy-fast
  Using cached https://www.piwheels.org/simple/speechpy-fast/speechpy_fast-2.4-py3-none-any.whl (8.8 kB)
Collecting pyache
  Using cached pyache-0.2.0-py3-none-any.whl (7.6 kB)
Collecting tensorflow-estimator<1.15.0rc0,>=1.14.0rc0
  Using cached tensorflow_estimator-1.14.0-py2.py3-none-any.whl (488 kB)
Collecting six>=1.10.0
  Using cached six-1.15.0-py2.py3-none-any.whl (10 kB)
Collecting tensorboard<1.14.0,>=1.13.0
  Using cached tensorboard-1.13.1-py3-none-any.whl (3.2 MB)
Collecting keras-applications>=1.0.8
  Using cached Keras_Applications-1.0.8-py3-none-any.whl (50 kB)
Collecting keras-preprocessing>=1.0.5
  Using cached Keras_Preprocessing-1.1.2-py2.py3-none-any.whl (42 kB)
Collecting termcolor>=1.1.0
  Using cached https://www.piwheels.org/simple/termcolor/termcolor-1.1.0-py3-none-any.whl (4.8 kB)
Collecting wrapt>=1.11.1
  Using cached https://www.piwheels.org/simple/wrapt/wrapt-1.12.1-cp37-cp37m-linux_armv7l.whl (68 kB)
Collecting gast>=0.2.0
  Using cached gast-0.4.0-py3-none-any.whl (9.8 kB)
Collecting google-pasta>=0.1.6
  Using cached google_pasta-0.2.0-py3-none-any.whl (57 kB)
Collecting astor>=0.6.0
  Using cached astor-0.8.1-py2.py3-none-any.whl (27 kB)
Collecting grpcio>=1.8.6
  Using cached https://www.piwheels.org/simple/grpcio/grpcio-1.31.0-cp37-cp37m-linux_armv7l.whl (29.8 MB)
Requirement already satisfied: wheel>=0.26 in ./.venv/lib/python3.7/site-packages (from tensorflow<1.14,>=1.13->mycroft-precise==0.3.0) (0.34.2)
Collecting absl-py>=0.7.0
  Using cached https://www.piwheels.org/simple/absl-py/absl_py-0.9.0-py3-none-any.whl (121 kB)
Collecting protobuf>=3.6.1
  Using cached protobuf-3.12.4-py2.py3-none-any.whl (443 kB)
Collecting scipy
  Using cached scipy-1.5.2.tar.gz (25.4 MB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
    Preparing wheel metadata ... done
Collecting pyyaml
  Using cached https://www.piwheels.org/simple/pyyaml/PyYAML-5.3.1-cp37-cp37m-linux_armv7l.whl (44 kB)
Collecting markdown>=2.6.8
  Using cached Markdown-3.2.2-py3-none-any.whl (88 kB)
Collecting werkzeug>=0.11.15
  Using cached Werkzeug-1.0.1-py2.py3-none-any.whl (298 kB)
Requirement already satisfied: setuptools in ./.venv/lib/python3.7/site-packages (from protobuf>=3.6.1->tensorflow<1.14,>=1.13->mycroft-precise==0.3.0) (49.3.1)
Collecting importlib-metadata; python_version < "3.8"
  Using cached importlib_metadata-1.7.0-py2.py3-none-any.whl (31 kB)
Collecting zipp>=0.5
  Using cached zipp-3.1.0-py3-none-any.whl (4.9 kB)
Building wheels for collected packages: scipy
  Building wheel for scipy (PEP 517) ... error
  ERROR: Command errored out with exit status 1:
   command: /home/pi/mycroft-precise/.venv/bin/python /home/pi/mycroft-precise/.venv/lib/python3.7/site-packages/pip/_vendor/pep517/_in_process.py build_wheel /tmp/tmpznsbqwrx
       cwd: /tmp/pip-install-t8aasp1f/scipy
  Complete output (696 lines):
  lapack_opt_info:
  lapack_mkl_info:
  customize UnixCCompiler
    libraries mkl_rt not found in ['/home/pi/mycroft-precise/.venv/lib', '/usr/local/lib', '/usr/lib', '/usr/lib/arm-linux-gnueabihf']
    NOT AVAILABLE
  
  openblas_lapack_info:
  customize UnixCCompiler
  customize UnixCCompiler
  customize UnixCCompiler
  C compiler: arm-linux-gnueabihf-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC
  
  creating /tmp/tmp16otweue/tmp
  creating /tmp/tmp16otweue/tmp/tmp16otweue
  compile options: '-c'
  arm-linux-gnueabihf-gcc: /tmp/tmp16otweue/source.c
  arm-linux-gnueabihf-gcc -pthread /tmp/tmp16otweue/tmp/tmp16otweue/source.o -lopenblas -o /tmp/tmp16otweue/a.out
  customize UnixCCompiler
    FOUND:
      libraries = ['openblas', 'openblas']
      library_dirs = ['/usr/lib/arm-linux-gnueabihf']
      language = c
      define_macros = [('HAVE_CBLAS', None)]
  
    FOUND:
      libraries = ['openblas', 'openblas']
      library_dirs = ['/usr/lib/arm-linux-gnueabihf']
      language = c
      define_macros = [('HAVE_CBLAS', None)]
  
  blas_opt_info:
  blas_mkl_info:
  customize UnixCCompiler
    libraries mkl_rt not found in ['/home/pi/mycroft-precise/.venv/lib', '/usr/local/lib', '/usr/lib', '/usr/lib/arm-linux-gnueabihf']
    NOT AVAILABLE
  
  blis_info:
  customize UnixCCompiler
    libraries blis not found in ['/home/pi/mycroft-precise/.venv/lib', '/usr/local/lib', '/usr/lib', '/usr/lib/arm-linux-gnueabihf']
    NOT AVAILABLE
  
  openblas_info:
  customize UnixCCompiler
  customize UnixCCompiler
  customize UnixCCompiler
    FOUND:
      libraries = ['openblas', 'openblas']
      library_dirs = ['/usr/lib/arm-linux-gnueabihf']
      language = c
      define_macros = [('HAVE_CBLAS', None)]
  
    FOUND:
      libraries = ['openblas', 'openblas']
      library_dirs = ['/usr/lib/arm-linux-gnueabihf']
      language = c
      define_macros = [('HAVE_CBLAS', None)]
  
  [makenpz] scipy/special/tests/data/boost.npz not rebuilt
  [makenpz] scipy/special/tests/data/gsl.npz not rebuilt
  [makenpz] scipy/special/tests/data/local.npz not rebuilt
  non-existing path in 'scipy/signal/windows': 'tests'
  running bdist_wheel
  running build
  running config_cc
  unifing config_cc, config, build_clib, build_ext, build commands --compiler options
  running config_fc
  unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
  running build_src
  build_src
  building py_modules sources
  building library "mach" sources
  building library "quadpack" sources
  building library "lsoda" sources
  building library "vode" sources
  building library "dop" sources
  building library "fitpack" sources
  building library "fwrappers" sources
  building library "odrpack" sources
  building library "minpack" sources
  building library "rectangular_lsap" sources
  building library "rootfind" sources
  building library "superlu_src" sources
  building library "arpack_scipy" sources
  building library "sc_cephes" sources
  building library "sc_mach" sources
  building library "sc_amos" sources
  building library "sc_cdf" sources
  building library "sc_specfun" sources
  building library "statlib" sources
  building extension "scipy.cluster._vq" sources
  building extension "scipy.cluster._hierarchy" sources
  building extension "scipy.cluster._optimal_leaf_ordering" sources
  building extension "scipy.fft._pocketfft.pypocketfft" sources
  building extension "scipy.fftpack.convolve" sources
  building extension "scipy.integrate._quadpack" sources
  building extension "scipy.integrate._odepack" sources
  building extension "scipy.integrate.vode" sources
  f2py options: []
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/integrate/fortranobject.c' to sources.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/integrate' to include_dirs.
    adding 'build/src.linux-armv7l-3.7/scipy/integrate/vode-f2pywrappers.f' to sources.
  building extension "scipy.integrate.lsoda" sources
  f2py options: []
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/integrate/fortranobject.c' to sources.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/integrate' to include_dirs.
    adding 'build/src.linux-armv7l-3.7/scipy/integrate/lsoda-f2pywrappers.f' to sources.
  building extension "scipy.integrate._dop" sources
  f2py options: []
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/integrate/fortranobject.c' to sources.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/integrate' to include_dirs.
    adding 'build/src.linux-armv7l-3.7/scipy/integrate/_dop-f2pywrappers.f' to sources.
  building extension "scipy.integrate._test_multivariate" sources
  building extension "scipy.integrate._test_odeint_banded" sources
  f2py options: []
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/integrate/fortranobject.c' to sources.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/integrate' to include_dirs.
    adding 'build/src.linux-armv7l-3.7/scipy/integrate/_test_odeint_banded-f2pywrappers.f' to sources.
  building extension "scipy.interpolate.interpnd" sources
  building extension "scipy.interpolate._ppoly" sources
  building extension "scipy.interpolate._bspl" sources
  building extension "scipy.interpolate._fitpack" sources
  building extension "scipy.interpolate.dfitpack" sources
  f2py options: []
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/interpolate/src/fortranobject.c' to sources.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/interpolate/src' to include_dirs.
    adding 'build/src.linux-armv7l-3.7/scipy/interpolate/src/dfitpack-f2pywrappers.f' to sources.
  building extension "scipy.io._test_fortran" sources
  f2py options: []
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/io/fortranobject.c' to sources.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/io' to include_dirs.
  building extension "scipy.io.matlab.streams" sources
  building extension "scipy.io.matlab.mio_utils" sources
  building extension "scipy.io.matlab.mio5_utils" sources
  building extension "scipy.linalg._fblas" sources
  f2py options: []
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/linalg/fortranobject.c' to sources.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/linalg' to include_dirs.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/linalg/_fblas-f2pywrappers.f' to sources.
  building extension "scipy.linalg._flapack" sources
  f2py options: []
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/linalg/fortranobject.c' to sources.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/linalg' to include_dirs.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/linalg/_flapack-f2pywrappers.f' to sources.
  building extension "scipy.linalg._flinalg" sources
  f2py options: []
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/linalg/fortranobject.c' to sources.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/linalg' to include_dirs.
  building extension "scipy.linalg._interpolative" sources
  f2py options: []
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/linalg/fortranobject.c' to sources.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/linalg' to include_dirs.
  building extension "scipy.linalg._solve_toeplitz" sources
  building extension "scipy.linalg.cython_blas" sources
  building extension "scipy.linalg.cython_lapack" sources
  building extension "scipy.linalg._decomp_update" sources
  building extension "scipy.odr.__odrpack" sources
  building extension "scipy.optimize._minpack" sources
  building extension "scipy.optimize._lsap_module" sources
  building extension "scipy.optimize._zeros" sources
  building extension "scipy.optimize._lbfgsb" sources
  f2py options: []
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/optimize/lbfgsb_src/fortranobject.c' to sources.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/optimize/lbfgsb_src' to include_dirs.
    adding 'build/src.linux-armv7l-3.7/scipy/optimize/lbfgsb_src/_lbfgsb-f2pywrappers.f' to sources.
  building extension "scipy.optimize.moduleTNC" sources
  building extension "scipy.optimize._cobyla" sources
  f2py options: []
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/optimize/cobyla/fortranobject.c' to sources.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/optimize/cobyla' to include_dirs.
  building extension "scipy.optimize.minpack2" sources
  f2py options: []
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/optimize/minpack2/fortranobject.c' to sources.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/optimize/minpack2' to include_dirs.
  building extension "scipy.optimize._slsqp" sources
  f2py options: []
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/optimize/slsqp/fortranobject.c' to sources.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/optimize/slsqp' to include_dirs.
  building extension "scipy.optimize.__nnls" sources
  f2py options: []
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/optimize/__nnls/fortranobject.c' to sources.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/optimize/__nnls' to include_dirs.
  building extension "scipy.optimize._group_columns" sources
  building extension "scipy.optimize._bglu_dense" sources
  building extension "scipy.optimize._lsq.givens_elimination" sources
  building extension "scipy.optimize._trlib._trlib" sources
  building extension "scipy.optimize.cython_optimize._zeros" sources
  building extension "scipy.signal.sigtools" sources
  building extension "scipy.signal._spectral" sources
  building extension "scipy.signal._max_len_seq_inner" sources
  building extension "scipy.signal._peak_finding_utils" sources
  building extension "scipy.signal._sosfilt" sources
  building extension "scipy.signal._upfirdn_apply" sources
  building extension "scipy.signal.spline" sources
  building extension "scipy.sparse.linalg.isolve._iterative" sources
  f2py options: []
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/sparse/linalg/isolve/iterative/fortranobject.c' to sources.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/sparse/linalg/isolve/iterative' to include_dirs.
  building extension "scipy.sparse.linalg.dsolve._superlu" sources
  building extension "scipy.sparse.linalg.eigen.arpack._arpack" sources
  f2py options: []
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/sparse/linalg/eigen/arpack/fortranobject.c' to sources.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/sparse/linalg/eigen/arpack' to include_dirs.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/sparse/linalg/eigen/arpack/_arpack-f2pywrappers.f' to sources.
  building extension "scipy.sparse.csgraph._shortest_path" sources
  building extension "scipy.sparse.csgraph._traversal" sources
  building extension "scipy.sparse.csgraph._min_spanning_tree" sources
  building extension "scipy.sparse.csgraph._matching" sources
  building extension "scipy.sparse.csgraph._flow" sources
  building extension "scipy.sparse.csgraph._reordering" sources
  building extension "scipy.sparse.csgraph._tools" sources
  building extension "scipy.sparse._csparsetools" sources
  building extension "scipy.sparse._sparsetools" sources
  [generate_sparsetools] 'scipy/sparse/sparsetools/bsr_impl.h' already up-to-date
  [generate_sparsetools] 'scipy/sparse/sparsetools/csr_impl.h' already up-to-date
  [generate_sparsetools] 'scipy/sparse/sparsetools/csc_impl.h' already up-to-date
  [generate_sparsetools] 'scipy/sparse/sparsetools/other_impl.h' already up-to-date
  [generate_sparsetools] 'scipy/sparse/sparsetools/sparsetools_impl.h' already up-to-date
  building extension "scipy.spatial.qhull" sources
  building extension "scipy.spatial.ckdtree" sources
  building extension "scipy.spatial._distance_wrap" sources
  building extension "scipy.spatial._voronoi" sources
  building extension "scipy.spatial._hausdorff" sources
  building extension "scipy.special.specfun" sources
  f2py options: ['--no-wrap-functions']
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/special/fortranobject.c' to sources.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/special' to include_dirs.
  building extension "scipy.special._ufuncs" sources
  building extension "scipy.special._ufuncs_cxx" sources
  building extension "scipy.special._ellip_harm_2" sources
  building extension "scipy.special.cython_special" sources
  building extension "scipy.special._comb" sources
  building extension "scipy.special._test_round" sources
  building extension "scipy.stats.statlib" sources
  f2py options: ['--no-wrap-functions']
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/stats/fortranobject.c' to sources.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/stats' to include_dirs.
  building extension "scipy.stats._stats" sources
  building extension "scipy.stats.mvn" sources
  f2py options: []
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/stats/fortranobject.c' to sources.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/stats' to include_dirs.
    adding 'build/src.linux-armv7l-3.7/scipy/stats/mvn-f2pywrappers.f' to sources.
  building extension "scipy.ndimage._nd_image" sources
  building extension "scipy.ndimage._ni_label" sources
  building extension "scipy.ndimage._ctest" sources
  building extension "scipy.ndimage._ctest_oldapi" sources
  building extension "scipy.ndimage._cytest" sources
  building extension "scipy._lib._ccallback_c" sources
  building extension "scipy._lib._test_ccallback" sources
  building extension "scipy._lib._fpumode" sources
  building extension "scipy._lib.messagestream" sources
  get_default_fcompiler: matching types: '['gnu95', 'intel', 'lahey', 'pg', 'absoft', 'nag', 'vast', 'compaq', 'intele', 'intelem', 'gnu', 'g95', 'pathf95', 'nagfor']'
  customize Gnu95FCompiler
  Could not locate executable gfortran
  Could not locate executable f95
  customize IntelFCompiler
  Could not locate executable ifort
  Could not locate executable ifc
  customize LaheyFCompiler
  Could not locate executable lf95
  customize PGroupFCompiler
  Could not locate executable pgfortran
  customize AbsoftFCompiler
  Could not locate executable f90
  Could not locate executable f77
  customize NAGFCompiler
  customize VastFCompiler
  customize CompaqFCompiler
  Could not locate executable fort
  customize IntelItaniumFCompiler
  Could not locate executable efort
  Could not locate executable efc
  customize IntelEM64TFCompiler
  customize GnuFCompiler
  Could not locate executable g77
  customize G95FCompiler
  Could not locate executable g95
  customize PathScaleFCompiler
  Could not locate executable pathf95
  customize NAGFORCompiler
  Could not locate executable nagfor
  don't know how to compile Fortran code on platform 'posix'
  C compiler: arm-linux-gnueabihf-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC
  
  compile options: '-I/home/pi/mycroft-precise/.venv/include -I/usr/include/python3.7m -c'
  arm-linux-gnueabihf-gcc: _configtest.c
  arm-linux-gnueabihf-gcc -pthread _configtest.o -o _configtest
  success!
  removing: _configtest.c _configtest.o _configtest
  building extension "scipy._lib._test_deprecation_call" sources
  building extension "scipy._lib._test_deprecation_def" sources
  building extension "scipy._lib._uarray._uarray" sources
  building data_files sources
  build_src: building npy-pkg config files
  running build_py
  creating build/lib.linux-armv7l-3.7
  creating build/lib.linux-armv7l-3.7/scipy
  copying scipy/__init__.py -> build/lib.linux-armv7l-3.7/scipy
  copying scipy/_distributor_init.py -> build/lib.linux-armv7l-3.7/scipy
  copying scipy/conftest.py -> build/lib.linux-armv7l-3.7/scipy
  copying scipy/version.py -> build/lib.linux-armv7l-3.7/scipy
  copying scipy/setup.py -> build/lib.linux-armv7l-3.7/scipy
  copying build/src.linux-armv7l-3.7/scipy/__config__.py -> build/lib.linux-armv7l-3.7/scipy
..................

..................
  customize UnixCCompiler
  customize UnixCCompiler using build_clib
  building 'mach' library
  Running from SciPy source directory.
  /tmp/pip-build-env-3zkfiuu3/overlay/lib/python3.7/site-packages/numpy/distutils/system_info.py:716: UserWarning: Specified path /tmp/pip-build-env-3zkfiuu3/overlay/include/python3.7m is invalid.
    return self.get_paths(self.section, key)
  /tmp/pip-build-env-3zkfiuu3/overlay/lib/python3.7/site-packages/numpy/distutils/system_info.py:716: UserWarning: Specified path /usr/local/include/python3.7m is invalid.
    return self.get_paths(self.section, key)
  /tmp/pip-build-env-3zkfiuu3/overlay/lib/python3.7/site-packages/numpy/distutils/system_info.py:716: UserWarning: Specified path /home/pi/mycroft-precise/.venv/include/python3.7m is invalid.
    return self.get_paths(self.section, key)
  error: library mach has Fortran sources but no Fortran compiler found
  ----------------------------------------
  ERROR: Failed building wheel for scipy
Failed to build scipy
ERROR: Could not build wheels for scipy which use PEP 517 and cannot be installed directly

I tried different things found via Google but none of them worked. @JGKK have you been able to set up/train a custom wake word (Raspberry Pi 4)?

EDIT: Solved by editing the install script "setup.sh" at the line:
"if [ ! -x "$VENV/bin/python" ]; then python3 -m venv "$VENV" --without-pip; fi"
to
"if [ ! -x "$VENV/bin/python" ]; then python -m venv "$VENV" --without-pip; fi"
(As a note to anyone who may find this via Google.)
What a mess they have made with this Python 2.x and 3.x situation...

They are two different products. The v2 is a USB mic and not a Pi HAT, but both are good :+1:

If you use something like sox, then there is a gain parameter you can set while recording, but in my tests with the 4-mic Pi HAT from ReSpeaker it worked fine without any extra gain in combination with things like voice2json.
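For reference, boosting afterwards is just multiplying the samples, which is roughly what sox's gain effect does. A stdlib sketch for 16-bit PCM (the dB value is illustrative):

```python
import struct

def apply_gain(pcm: bytes, gain_db: float) -> bytes:
    """Scale 16-bit little-endian PCM samples by gain_db, clipping to int16."""
    factor = 10 ** (gain_db / 20.0)
    n = len(pcm) // 2
    samples = struct.unpack("<%dh" % n, pcm)
    boosted = (max(-32768, min(32767, round(s * factor))) for s in samples)
    return struct.pack("<%dh" % n, *boosted)

# +6 dB roughly doubles the amplitude; loud samples clip at the int16 limit
louder = apply_gain(struct.pack("<3h", 1000, -1000, 30000), 6.0)
print(struct.unpack("<3h", louder))
```

Clipping is why recording with enough gain at the source is preferable to boosting a quiet signal afterwards.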

Yes, I have actually trained multiple wake words on a Raspberry Pi 4. I didn’t have any problems installing on Buster; I pretty much just followed the steps for installation from source in the Precise repository and that worked for me. But I agree, I’m not the biggest fan of the way they went with a Python venv and so on, though I think this is all due to it being based on TensorFlow.

Johannes


OK, I've no idea what I am doing wrong with the wake word training.
I did exactly what the example / wiki says:

https://github.com/MycroftAI/mycroft-precise/wiki/Training-your-own-wake-word

My wake word "scarlett" gets triggered even when my PC fan starts to spin or when I just say "hello".
I am testing like this (as in the wiki):
precise-listen scarlett.net
I used the following structure (equal to the wiki example):

mkdir ~/mycroft-precise/scarlett
#8 wake word samples here: 
mkdir ~/mycroft-precise/scarlett/wake-word
#630 non-wake-word samples, converted to wav, here:
mkdir ~/mycroft-precise/scarlett/not-wake-word
mkdir ~/mycroft-precise/scarlett/test
#4 wake word samples here: 
mkdir ~/mycroft-precise/scarlett/test/wake-word
mkdir ~/mycroft-precise/scarlett/test/not-wake-word

I am just wondering why the creation of the training model is finished after about 30 seconds.
(precise-train -e 60 scarlett.net scarlett/)
Shouldn't this take longer because of the 0.9 GB of non-wake-word samples I have in ~/mycroft-precise/scarlett/not-wake-word?

This is the log:

(.venv) pi@raspi4B:~/mycroft-precise $ precise-train -e 60 scarlett.net scarlett/
Using TensorFlow backend.
WARNING:tensorflow:From /home/pi/mycroft-precise/.venv/lib/python3.7/site-packages/tensorflow/__init__.py:98: The name tf.AUTO_REUSE is deprecated. Please use tf.compat.v1.AUTO_REUSE instead.

WARNING:tensorflow:From /home/pi/mycroft-precise/.venv/lib/python3.7/site-packages/tensorflow/__init__.py:98: The name tf.AttrValue is deprecated. Please use tf.compat.v1.AttrValue instead.

WARNING:tensorflow:From /home/pi/mycroft-precise/.venv/lib/python3.7/site-packages/tensorflow/__init__.py:98: The name tf.COMPILER_VERSION is deprecated. Please use tf.version.COMPILER_VERSION instead.

WARNING:tensorflow:From /home/pi/mycroft-precise/.venv/lib/python3.7/site-packages/tensorflow/__init__.py:98: The name tf.CXX11_ABI_FLAG is deprecated. Please use tf.sysconfig.CXX11_ABI_FLAG instead.

WARNING:tensorflow:From /home/pi/mycroft-precise/.venv/lib/python3.7/site-packages/tensorflow/__init__.py:98: The name tf.ConditionalAccumulator is deprecated. Please use tf.compat.v1.ConditionalAccumulator instead.

Loading from scarlett.net...
WARNING:tensorflow:From /home/pi/mycroft-precise/.venv/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:3138: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
WARNING:tensorflow:From /home/pi/mycroft-precise/.venv/lib/python3.7/site-packages/keras/optimizers.py:757: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.

WARNING:tensorflow:From /home/pi/mycroft-precise/.venv/lib/python3.7/site-packages/tensorflow_core/python/ops/math_grad.py:1251: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
Data: <TrainData wake_words=9 not_wake_words=634 test_wake_words=4 test_not_wake_words=0>
Loading wake-word...
Loading not-wake-word...
Loading wake-word...
Loading not-wake-word...
Inputs shape: (643, 29, 13)
Outputs shape: (643, 1)
Test inputs shape: (4, 29, 13)
Test outputs shape: (4, 1)
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
net (GRU)                    (None, 20)                2040      
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 21        
=================================================================
Total params: 2,061
Trainable params: 2,061
Non-trainable params: 0
_________________________________________________________________
Train on 643 samples, validate on 4 samples
WARNING:tensorflow:From /home/pi/mycroft-precise/.venv/lib/python3.7/site-packages/keras/callbacks.py:774: The name tf.summary.merge_all is deprecated. Please use tf.compat.v1.summary.merge_all instead.

WARNING:tensorflow:From /home/pi/mycroft-precise/.venv/lib/python3.7/site-packages/keras/callbacks.py:777: The name tf.summary.FileWriter is deprecated. Please use tf.compat.v1.summary.FileWriter instead.

Epoch 441/500
643/643 [==============================] - 2s 4ms/step - loss: 0.3145 - acc: 0.9502 - val_loss: 1.2711 - val_acc: 0.5000
Epoch 442/500
643/643 [==============================] - 0s 244us/step - loss: 0.3652 - acc: 0.9425 - val_loss: 1.2718 - val_acc: 0.5000
Epoch 443/500
643/643 [==============================] - 0s 274us/step - loss: 0.2975 - acc: 0.9549 - val_loss: 1.2732 - val_acc: 0.5000
Epoch 444/500
643/643 [==============================] - 0s 300us/step - loss: 0.2931 - acc: 0.9502 - val_loss: 1.2713 - val_acc: 0.5000
Epoch 445/500
643/643 [==============================] - 0s 335us/step - loss: 0.4477 - acc: 0.9378 - val_loss: 1.2712 - val_acc: 0.5000
Epoch 446/500
643/643 [==============================] - 0s 337us/step - loss: 0.2211 - acc: 0.9627 - val_loss: 1.2712 - val_acc: 0.5000
Epoch 447/500
643/643 [==============================] - 0s 328us/step - loss: 0.2643 - acc: 0.9565 - val_loss: 1.2710 - val_acc: 0.5000
Epoch 448/500
643/643 [==============================] - 0s 299us/step - loss: 0.3820 - acc: 0.9425 - val_loss: 1.2710 - val_acc: 0.5000
Epoch 449/500
643/643 [==============================] - 0s 313us/step - loss: 0.3567 - acc: 0.9440 - val_loss: 1.2709 - val_acc: 0.5000
Epoch 450/500
643/643 [==============================] - 0s 337us/step - loss: 0.5083 - acc: 0.9316 - val_loss: 1.2706 - val_acc: 0.5000
Epoch 451/500
643/643 [==============================] - 0s 286us/step - loss: 0.4216 - acc: 0.9393 - val_loss: 1.2706 - val_acc: 0.5000
Epoch 452/500
643/643 [==============================] - 0s 285us/step - loss: 0.3492 - acc: 0.9487 - val_loss: 1.2706 - val_acc: 0.5000
Epoch 453/500
643/643 [==============================] - 0s 276us/step - loss: 0.3350 - acc: 0.9471 - val_loss: 1.2706 - val_acc: 0.5000
Epoch 454/500
643/643 [==============================] - 0s 321us/step - loss: 0.3289 - acc: 0.9471 - val_loss: 1.2706 - val_acc: 0.5000
Epoch 455/500
643/643 [==============================] - 0s 284us/step - loss: 0.4285 - acc: 0.9393 - val_loss: 1.2706 - val_acc: 0.5000
Epoch 456/500
643/643 [==============================] - 0s 379us/step - loss: 0.3820 - acc: 0.9440 - val_loss: 1.2705 - val_acc: 0.5000
Epoch 457/500
643/643 [==============================] - 0s 363us/step - loss: 0.3798 - acc: 0.9471 - val_loss: 1.2705 - val_acc: 0.5000
Epoch 458/500
643/643 [==============================] - 0s 342us/step - loss: 0.3606 - acc: 0.9471 - val_loss: 1.2705 - val_acc: 0.5000
Epoch 459/500
643/643 [==============================] - 0s 312us/step - loss: 0.3282 - acc: 0.9456 - val_loss: 1.2705 - val_acc: 0.5000
Epoch 460/500
643/643 [==============================] - 0s 291us/step - loss: 0.3555 - acc: 0.9533 - val_loss: 1.2705 - val_acc: 0.5000
Epoch 461/500
643/643 [==============================] - 0s 266us/step - loss: 0.4492 - acc: 0.9409 - val_loss: 1.2705 - val_acc: 0.5000
Epoch 462/500
643/643 [==============================] - 0s 322us/step - loss: 0.3550 - acc: 0.9487 - val_loss: 1.2705 - val_acc: 0.5000
Epoch 463/500
643/643 [==============================] - 0s 285us/step - loss: 0.3510 - acc: 0.9487 - val_loss: 1.3512 - val_acc: 0.5000
Epoch 464/500
643/643 [==============================] - 0s 294us/step - loss: 0.3653 - acc: 0.9487 - val_loss: 1.2705 - val_acc: 0.5000
Epoch 465/500
643/643 [==============================] - 0s 254us/step - loss: 0.3112 - acc: 0.9518 - val_loss: 1.2705 - val_acc: 0.5000
Epoch 466/500
643/643 [==============================] - 0s 286us/step - loss: 0.4148 - acc: 0.9393 - val_loss: 1.2706 - val_acc: 0.5000
Epoch 467/500
643/643 [==============================] - 0s 287us/step - loss: 0.3999 - acc: 0.9502 - val_loss: 1.2707 - val_acc: 0.5000
Epoch 468/500
643/643 [==============================] - 0s 303us/step - loss: 0.4360 - acc: 0.9347 - val_loss: 1.2708 - val_acc: 0.5000
Epoch 469/500
643/643 [==============================] - 0s 273us/step - loss: 0.4376 - acc: 0.9409 - val_loss: 0.6367 - val_acc: 0.7500
Epoch 470/500
643/643 [==============================] - 0s 289us/step - loss: 0.4108 - acc: 0.9456 - val_loss: 0.6356 - val_acc: 0.7500
Epoch 471/500
643/643 [==============================] - 0s 267us/step - loss: 0.4126 - acc: 0.9393 - val_loss: 0.6357 - val_acc: 0.7500
Epoch 472/500
643/643 [==============================] - 0s 244us/step - loss: 0.4021 - acc: 0.9502 - val_loss: 0.6358 - val_acc: 0.7500
Epoch 473/500
643/643 [==============================] - 0s 261us/step - loss: 0.4873 - acc: 0.9331 - val_loss: 0.6356 - val_acc: 0.7500
Epoch 474/500
643/643 [==============================] - 0s 366us/step - loss: 0.5084 - acc: 0.9331 - val_loss: 0.6356 - val_acc: 0.7500
Epoch 475/500
643/643 [==============================] - 0s 362us/step - loss: 0.3505 - acc: 0.9518 - val_loss: 0.6356 - val_acc: 0.7500
Epoch 476/500
643/643 [==============================] - 0s 399us/step - loss: 0.4252 - acc: 0.9440 - val_loss: 0.6356 - val_acc: 0.7500
Epoch 477/500
643/643 [==============================] - 0s 380us/step - loss: 0.4774 - acc: 0.9362 - val_loss: 0.6353 - val_acc: 0.7500
Epoch 478/500
643/643 [==============================] - 0s 346us/step - loss: 0.4256 - acc: 0.9425 - val_loss: 0.6353 - val_acc: 0.7500
Epoch 479/500
643/643 [==============================] - 0s 492us/step - loss: 0.4718 - acc: 0.9409 - val_loss: 0.6353 - val_acc: 0.7500
Epoch 480/500
643/643 [==============================] - 0s 300us/step - loss: 0.4632 - acc: 0.9378 - val_loss: 0.6352 - val_acc: 0.7500
Epoch 481/500
643/643 [==============================] - 0s 287us/step - loss: 0.4454 - acc: 0.9347 - val_loss: 0.6352 - val_acc: 0.7500
Epoch 482/500
643/643 [==============================] - 0s 331us/step - loss: 0.4154 - acc: 0.9440 - val_loss: 0.6352 - val_acc: 0.7500
Epoch 483/500
643/643 [==============================] - 0s 287us/step - loss: 0.4231 - acc: 0.9425 - val_loss: 0.6352 - val_acc: 0.7500
Epoch 484/500
643/643 [==============================] - 0s 272us/step - loss: 0.3991 - acc: 0.9471 - val_loss: 0.6352 - val_acc: 0.7500
Epoch 485/500
643/643 [==============================] - 0s 271us/step - loss: 0.5297 - acc: 0.9331 - val_loss: 0.6352 - val_acc: 0.7500
Epoch 486/500
643/643 [==============================] - 0s 263us/step - loss: 0.4177 - acc: 0.9393 - val_loss: 0.6352 - val_acc: 0.7500
Epoch 487/500
643/643 [==============================] - 0s 313us/step - loss: 0.4813 - acc: 0.9347 - val_loss: 0.6352 - val_acc: 0.7500
Epoch 488/500
643/643 [==============================] - 0s 342us/step - loss: 0.6050 - acc: 0.9207 - val_loss: 0.6352 - val_acc: 0.7500
Epoch 489/500
643/643 [==============================] - 0s 295us/step - loss: 0.5257 - acc: 0.9347 - val_loss: 0.6352 - val_acc: 0.7500
Epoch 490/500
643/643 [==============================] - 0s 251us/step - loss: 0.4116 - acc: 0.9456 - val_loss: 0.6352 - val_acc: 0.7500
Epoch 491/500
643/643 [==============================] - 0s 262us/step - loss: 0.4434 - acc: 0.9378 - val_loss: 0.6352 - val_acc: 0.7500
Epoch 492/500
643/643 [==============================] - 0s 275us/step - loss: 0.4881 - acc: 0.9316 - val_loss: 0.6352 - val_acc: 0.7500
Epoch 493/500
643/643 [==============================] - 0s 331us/step - loss: 0.4286 - acc: 0.9409 - val_loss: 0.6352 - val_acc: 0.7500
Epoch 494/500
643/643 [==============================] - 0s 275us/step - loss: 0.5091 - acc: 0.9316 - val_loss: 0.6352 - val_acc: 0.7500
Epoch 495/500
643/643 [==============================] - 0s 348us/step - loss: 0.4435 - acc: 0.9378 - val_loss: 0.6351 - val_acc: 0.7500
Epoch 496/500
643/643 [==============================] - 0s 353us/step - loss: 0.4402 - acc: 0.9378 - val_loss: 0.6351 - val_acc: 0.7500
Epoch 497/500
643/643 [==============================] - 0s 269us/step - loss: 0.4331 - acc: 0.9409 - val_loss: 0.6351 - val_acc: 0.7500
Epoch 498/500
643/643 [==============================] - 0s 306us/step - loss: 0.5082 - acc: 0.9362 - val_loss: 0.6351 - val_acc: 0.7500
Epoch 499/500
643/643 [==============================] - 0s 270us/step - loss: 0.4496 - acc: 0.9362 - val_loss: 0.6351 - val_acc: 0.7500
Epoch 500/500
643/643 [==============================] - 0s 290us/step - loss: 0.4714 - acc: 0.9378 - val_loss: 0.6351 - val_acc: 0.7500
(.venv) pi@raspi4B:~/mycroft-precise $ 

This is a waveform example of what the wake word samples look like:
[waveform screenshot]
I am the dumb guy so pls help :wink:

Ok, did you follow this? This was my starting point:


I also wrote something about the process in the rhasspy forum a few weeks ago:

I found the biggest influence was having lots of different random noise to do incremental training against. It's also important to duplicate your wake word training data with added background noise to train a noise-resistant model.
Precise includes a tool for that.
I also played with adding white noise to some of the data and making duplicates with shifted pitch. While that does improve things, the duplicate set with added background noise is more important.
I found all the YouTube videos like "one hour of random household or bar noises" to be a great source of random audio to train against. Just download the audio with youtube-dl and convert it to the right format with sox.
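The white-noise experiment mentioned above can be sketched with the Python standard library alone (a toy sketch, not Precise's own tool; filenames and the 16-bit mono format are assumptions, and real background-noise overlays work better than pure white noise):

```python
import random
import struct
import wave

def add_white_noise(src_path, dst_path, noise_level=0.05):
    """Copy a 16-bit mono wav, mixing in uniform white noise at noise_level of full scale."""
    with wave.open(src_path, "rb") as src:
        params = src.getparams()
        frames = src.readframes(src.getnframes())
    # Unpack 16-bit little-endian samples, add bounded random noise, clamp to int16 range
    samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
    amp = int(32767 * noise_level)
    noisy = [max(-32768, min(32767, s + random.randint(-amp, amp))) for s in samples]
    with wave.open(dst_path, "wb") as dst:
        dst.setparams(params)
        dst.writeframes(struct.pack("<%dh" % len(noisy), *noisy))
```

Running this over a copy of the wake-word folder gives you the noisy duplicate set to train against alongside the clean originals.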

To sum my findings up my workflow right now is:

  • 50 plus wake word samples
    • include all genders / ages that will be using the wake word (more = better)
    • record with the microphone you will be using the model with, in a quiet environment
    • duplicate the training set and add background noise to the duplicates to train a noise-resistant model, or record more samples with background noises like washing machine / tv / kitchen noises
      • I found this to be very important for a robust model
  • at least 15+ hours of random audio (that goes in the random data folder) that does not include the wake word, split into 10-minute clips, as longer clips can lead to out-of-memory issues on a Pi; this random audio should also cover audio from your household, like the TV and so on
    • good sources are:
      • google voice command data set
      • recordings from the environment the wake word will be used in
      • the audio of youtube videos like one hour of household noises, or coffee shop / bar noises
  • about 10-20% additional wake word samples for validation
  • initial training 100 epochs
  • afterwards incremental training 50 epochs per 10 false positives and a sensitivity between 0.5 and 0.7
    • this can take like half a day on a pi 4
  • once finished convert to pb format
  • copy everything from the test folders (test-wake-word / test-not-wake-word-generated) over to the training folders (wake-word / not-wake-word-generated) and do another simple training with 100 epochs
  • test which model works better
  • if something goes wrong / you want to train on more data, then start again from the beginning, as this will give you much better results
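The 10-minute-clip step above can be sketched in stdlib Python (a sketch under the assumption of uncompressed wav input; paths and the `clip-%03d.wav` naming are made up for illustration):

```python
import os
import wave

def split_wav(src_path, out_dir, clip_seconds=600):
    """Split a long wav into clip_seconds chunks so Precise doesn't run out of memory."""
    os.makedirs(out_dir, exist_ok=True)
    with wave.open(src_path, "rb") as src:
        params = src.getparams()
        frames_per_clip = params.framerate * clip_seconds
        index = 0
        while True:
            frames = src.readframes(frames_per_clip)
            if not frames:
                break
            out = os.path.join(out_dir, "clip-%03d.wav" % index)
            with wave.open(out, "wb") as dst:
                # the wave module fixes up nframes in the header on close
                dst.setparams(params)
                dst.writeframes(frames)
            index += 1
    return index
```

Point `out_dir` at your random-audio folder and the clips are ready for incremental training; the last clip is simply whatever remains, which is fine for this purpose.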

I hope this gives you some pointers, Johannes


This topic was automatically closed after 60 days. New replies are no longer allowed.