[Announce] node-red-contrib-voice2json (beta)

Hello

Neither right now. If you want to stream live raw audio you would have to feed it to the voice2json record-command node, which uses WebRTC VAD to determine when the speaker has finished the command and then emits a single WAV buffer with the detected speech. This is what the stt node expects: a single buffer containing WAV audio, or a path to a WAV file.
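
For reference, this mirrors the plain voice2json CLI pipeline that the nodes wrap, roughly along these lines (the exact invocation may differ on your install, see the voice2json docs):

```
voice2json record-command | voice2json transcribe-wav
```

record-command listens until the VAD decides the speaker has stopped and writes one WAV to stdout, which transcribe-wav then turns into a transcription.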

The real-time factor I see on a Pi 4 with Kaldi is about 0.7 to 1, depending on the amount of background noise. So a bit over 2 seconds for 3 seconds of input audio (3 s × 0.7 ≈ 2.1 s).

I am working on live stream transcription with voice2json through Node-RED, but there are a couple of reasons why there is no node for that right now. There are a few bugs on the voice2json side, and especially with a voice2json Docker install there is a big speed caveat: starting the stream transcription and loading the libraries takes long enough to nearly negate any speed advantage over the current approach. But it will be implemented as a node in the future and is on my internal roadmap, as voice2json can in theory do it. It is just not quite ready yet.

If you want to integrate voice2json into SEPIA, the best way would probably be to write your own Python service (or any other language, really) that interacts with it, as this would save a lot of overhead.

Hope this helps, Johannes

Thanks for the info.

Sorry, but I didn't quite get the argument here. Are you saying the "wav buffer" is different from the raw audio stream (=another buffer)?

That's pretty good! :slight_smile:

Somehow I was expecting this conclusion ^^ ... it's just so much fun to simply connect nodes in Node-RED, especially since I've started to build several nodes for SEPIA as well :smiley:
The text-to-intent part is probably still something that could be integrated easily into SEPIA custom services via Node-RED, but that's a story for another time :wink:

I'll try to read a bit more about voice2json and its interfaces. It would be really great to enable users to share their custom voice models between the systems.

Yes, a raw stream from a microphone is just buffers of PCM audio with no information attached to them, like length or encoding, so any program you pass it to wouldn't know what to do with it. WAV data, on the other hand, has RIFF headers embedded in it which provide all that information. That's why, when you want to convert or play raw audio in any tool, you actually have to enter that information manually. The question is whether SEPIA is streaming the raw data from the microphone, or whether it is actually streaming WAV chunks that have headers.
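
To make the difference concrete, here is a minimal sketch in plain Python (names and defaults are just illustrative) of wrapping headerless PCM in a RIFF/WAV container so other tools know what they are dealing with:

```python
import wave

def pcm_to_wav(pcm_bytes, path, rate=16000, channels=1, sample_width=2):
    """Wrap raw 16-bit mono PCM samples in a RIFF/WAV container."""
    with wave.open(path, "wb") as wav:
        wav.setnchannels(channels)      # mono
        wav.setsampwidth(sample_width)  # 2 bytes per sample = 16 bit
        wav.setframerate(rate)          # 16 kHz sample rate
        wav.writeframes(pcm_bytes)      # RIFF header is written automatically
```

The sample rate, sample width and channel count are exactly the pieces of information the RIFF header carries; with raw PCM you have to supply them out of band.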

I really recommend you read the white paper about the voice2json pipeline by Mike, the developer:

1 Like

@JGKK, done in a hurry:
tasker tutorial
Use it the way you want: edit it, copy it... whatever, if you want to update your voice2json documentation. No credits needed.

1 Like

Hi all,

Looks like a very interesting add-on. I want to try it to send voice commands to my home-automation server. I want to run it on a separate RPi just for voice recognition + mic, installed in the best place from an audio perspective.
I have one question: for the ReSpeaker there is also a 6-mic array available - is it better than the 4-mic? Will it work with this project? I liked the idea of the flat cable going to the RPi - I need to have it a bit separated. BTW, will it work with an RPi 3 too?

Another question - are there any housings available for ReSpeaker?

Regards

I don’t have any experience with the 6-mic unfortunately. I have worked with the 4-mic Pi hat, the 2-mic Pi hat and the USB Mic Array v2.
The 2-mic is a good entry point for trying it out and for development. The 4-mic Pi hat actually performs quite well in terms of quality and distance and is very easy to work with.
In production I use the USB mic, as it has by far the best performance and far-field capabilities, but unfortunately also the highest price.
So overall I think the 4-mic Pi hat gives the best price-versus-performance balance, and if I were starting over I might base my whole setup on it. The only thing to keep in mind is that the 4-mic hat doesn’t have a speaker output like some of the other ReSpeaker mics.

There are several housing models that people have designed available on Thingiverse. I used some of those in the past and they were quite good. I got them 3D printed via one of the print-on-demand services like Treatstock. Just search for ReSpeaker on Thingiverse.
There is a really nice one for a Pi 3 + 4-mic Pi hat.

It will work with a Pi 3 just fine, but you will not get the sub-real-time performance you would see on a Pi 4. So for a 3-second speech command a Pi 3 will take about 3 seconds to process it. You could also split the process into parts that run on different machines, which is what I do: I run the wake word and audio capture for the command on some Raspberry Pi 3s and then send the recorded command over MQTT to a central overclocked Pi 4 for speech-to-text and intent processing. This way I only needed one Pi 4 and could use Pi 3s I still had lying around for the audio capture.
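
As a rough sketch of the satellite side (the hostname and topic below are just placeholders, and in Node-RED you would of course use the mqtt out node instead of a script), publishing the recorded WAV buffer could look like this:

```python
import paho.mqtt.client as mqtt  # assumes the paho-mqtt package is installed

BROKER = "pi4.local"                 # central Pi 4 (placeholder hostname)
TOPIC = "voice/livingroom/command"   # example topic, one per satellite

client = mqtt.Client()
client.connect(BROKER, 1883)
client.loop_start()

with open("command.wav", "rb") as f:        # WAV emitted by the record step
    info = client.publish(TOPIC, f.read(), qos=1)
    info.wait_for_publish()                 # block until the broker has it

client.loop_stop()
client.disconnect()
```

The central Pi 4 then just subscribes to voice/+/command and feeds each payload into speech-to-text and intent recognition.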

Hope this helped, Johannes

1 Like

Excited to use this node you guys created. I am having a problem getting it going, however. I followed the instructions and the training process produces the following error:

Command failed: voice2json --profile /home/pi/Public/en-us_kaldi-zamia-2.0 train-profile
ngramcount: /lib/arm-linux-gnueabihf/libm.so.6: version `GLIBC_2.27' not found (required by ngramcount)
ngramcount: /lib/arm-linux-gnueabihf/libm.so.6: version `GLIBC_2.27' not found (required by /usr/lib/voice2json/lib/libngram.so.134)
[13296] Failed to execute script __main__
Traceback (most recent call last):
  File "__main__.py", line 6, in <module>
  File "asyncio/runners.py", line 43, in run
  File "asyncio/base_events.py", line 587, in run_until_complete
  File "voice2json/__main__.py", line 73, in main
  File "voice2json/__main__.py", line 733, in train
  File "voice2json/core.py", line 65, in train_profile
  File "voice2json/train.py", line 299, in train_profile
  File "rhasspyasr_kaldi/train.py", line 92, in train
  File "rhasspynlu/arpa_lm.py", line 55, in graph_to_arpa
  File "rhasspynlu/arpa_lm.py", line 70, in fst_to_arpa
  File "rhasspynlu/arpa_lm.py", line 345, in run_task
  File "s...

$PATH includes home/pi/Public, which is where the en-us_kaldi-zamia-2.0 files are located. Any suggestions?

Hello,
That's a voice2json error outside our nodes.
Are you using the deb package or the Docker image? Did you see any errors while installing, if it was the deb package?
Which operating system are you on? When I googled a little about GLIBC 2.27, this seems to be a problem with some Debian-based distributions, like older Ubuntu versions, that ship an older GLIBC version.
So it's probably inherent to your operating system and not voice2json.
If you installed using the deb package you could try to uninstall it and use the Docker install instead, which should be agnostic to the lib versions of the host system.
If that doesn't work, please open an issue with Mike, the voice2json developer, directly on the voice2json GitHub:

Johannes

Thanks Johannes,
I am using the deb package and saw no errors when installing the pre-compiled packages. The OS is Raspbian GNU/Linux 9.11 (Stretch) and the CPU is armhf. I really prefer to avoid using Docker at this point (something else to learn). I will do an OS update/upgrade to ensure I have the latest.

EDIT: Aha, I see there is now a v10, "Buster". We'll try that.

1 Like

Yes, I think they moved to Buster last year. Please let me know, because then I should add that to the requirements :+1:t2:
Although the Docker install of voice2json is fortunately quite painless if you follow their instructions.

Version 0.6.0

  • a number of small fixes
  • added a transcribe-stream node (@sepia-assistant):
    • This is in principle a wrapper around the voice2json transcribe-stream functionality that was introduced in recent versions; it is a combination of record-command and stt
    • The transcription is started as soon as raw audio starts arriving. Because the transcription happens while the input is still running, this gives nearly instantaneous results even on hardware like a Raspberry Pi.
    • For this to work you need either the latest deb package or Docker container, as there were some bugs in previous versions of voice2json that will prevent this node from working.
    • The node will only work with raw audio from a microphone, same as the record-command node. (So the stt node is still the way to go if you receive your audio from any other source, like for example a phone.)
    • The usage is described in the info tab of the node but it is not in the readme in the repository yet.
  • The suite of nodes now has proper version numbers in the package.json, so I don’t know if an update straight from the repository will work, as the version is now effectively below the one you have installed if you installed previously. So if it doesn’t show 0.6.0 after an npm update, uninstall the node and reinstall it.

As always, I look forward to your feedback and I will try to give the readme some love next week, but unfortunately I have been a bit short on time the past few weeks.

Johannes

3 Likes

Thank you for the update. I still have no microphone array but some time to get back to voice2json :slight_smile:
The recognition of my voice is not so good. I still haven't trained it because I still have no microphone for my Raspi (Tasker only). Here come the questions: which microphone can you recommend? I think I need a USB microphone, since from what I googled the "ReSpeaker 2-mic Pi hat" (which you recommended before) is mounted/connected via the GPIO, which I think will be a problem with my cooling solution. The other question: what if my gf tries speech recognition when it is trained to my voice? Will this work, or will I get something thrown at my head after the recognition always fails?

EDIT: Another thing. I am too dumb to get how the training works. I start it with: msg.payload = "train"
well... and then? Normally, similar software that I know expects me to read given text and wants voice samples of that text - hmmmpf, but this one seems to work differently.

Hi,

sounds great, especially the fact that it starts transcribing right away :sunglasses:
Are there transient (partial) results as well? ^^
I guess this does not require the WAV header, right? (I have both available, just in case.)

I think for SEPIA I should still access the voice2json endpoint directly though ... but maybe I could offer a SEPIA node that just streams the audio buffer from any SEPIA client, and then that could be connected to the v2j node (for whatever ideas come up) :slight_smile:

When you talk about similar software I expect you mean something like Dragon NaturallySpeaking. This is not how voice2json works. Most modern speech recognition systems like Kaldi or DeepSpeech work by training both an acoustic model and a statistical language model on a large corpus of audio plus the transcriptions of that audio, using corpora like the one from the Mozilla Common Voice project or similar.
When you train voice2json, only the statistical language model gets trained, on your sentences and slots. The acoustic model doesn’t get touched and shouldn’t need to be, so apart from the sentences you defined no further input is needed.
For more details, and also some limitations of the approach of projects like voice2json, please do read the section about how the transcription works in the documentation of the nodes, and for much more detail the linked white paper by Mike about the voice2json workflow.
So the training is in principle speaker independent, but your results may vary depending on the model used, as it always depends on how well all genders, ages and accents were represented in the corpora used to train the acoustic model.
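
To give a feel for what "training" means here: the input is nothing more than template sentences in your profile's sentences.ini, roughly along these lines (an illustrative example, not taken from a real profile):

```ini
[ChangeLightState]
light_name = (living room lamp | kitchen light){name}
turn (on | off){state} [the] <light_name>
```

During training voice2json expands these templates into the statistical language model and intent graph; no audio recordings of your voice are involved.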

For USB microphones it really depends on your budget. You can achieve good results with some of the cheap USB conference microphones, but they often don't have the best signal-to-noise ratio and far-field capabilities.
Then there is the ReSpeaker Mic Array v2 / USB mic array, which gives far superior results and has built-in LEDs that can be controlled with some simple Python scripts, but it is more expensive than a Pi 4 by itself.

Johannes

1 Like

No, only once a finished command has been determined.

You would have to build your own endpoint service, as voice2json by itself really only offers the separate command-line tools to bootstrap an application and will always need the user to build the connecting infrastructure around them. Which is one of the reasons I started the work on the nodes: to make this easily doable with Node-RED.

Why don’t you offer a websocket or an MQTT topic to subscribe to in order to receive the audio, like Rhasspy or Snips used to do? Very easy to connect to from something like Node-RED, and no extra node needed.

Johannes

I've seen your node code and I think I could adapt the SEPIA STT server to use those terminal commands instead of the Kaldi ones. Let's see :slight_smile:

Actually, that is what the SEPIA STT server uses. To be more precise, it sends the audio buffer via a socket and waits for results on the same channel.

Well, I'm not making much progress just trying to run voice2json with my USB mic. It calls arecord, but arecord fails with my mic (a Shure MV5) with the parameters it sends - in particular -c 1. If I remove that, arecord records fine. I've opened an issue on the GitHub page for voice2json and I'll see what synesthesiam has to say.

Hmm, the 1-channel mono format is what speech transcription systems like Kaldi use, so Mike will not have much choice there. You can change the command it calls in the profile.yml, I think. If sox works with your mic you could have voice2json call that instead, or use the stdin argument and pipe the audio in directly using Unix pipes.

PS:
Or use the voice2json nodes and the sox convert node to convert to the right format after recording, if your mic doesn't accept a mono setting straight away while recording :wink:
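
On the command line that conversion step is essentially a one-liner with sox, roughly (adjust the input file name to whatever your recorder produces):

```
sox input.wav -r 16000 -c 1 -b 16 output.wav
```

which resamples to 16 kHz, downmixes to a single channel and forces 16-bit samples, the format the Kaldi profiles expect.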

OK, I bought the "ReSpeaker 4-Mic Array", but not the v2.0. This one: https://respeaker.io/4_mic_array/
Recording with Audacity works, as does "arecord -Dac108 -f S32_LE -r 16000 -c 4 a.wav" / "aplay a.wav". The loudness could be better - is there a way to boost it?
Now I am struggling with the integration:
A custom wake word is what I want, so I ended up with an error during the "Source" installation here:
source
ERROR: Could not build wheels for scipy which use PEP 517 and cannot be installed directly
Log:

pi@raspi4B:~/mycroft-precise $ sudo ./setup.sh
Reading package lists... Done
Building dependency tree       
Reading state information... Done
curl is already the newest version (7.64.0-4+deb10u1).
cython is already the newest version (0.29.2-2).
libatlas-base-dev is already the newest version (3.10.3-8+rpi1).
libhdf5-dev is already the newest version (1.10.4+repack-10).
libopenblas-dev is already the newest version (0.3.5+ds-3+rpi1).
libpulse-dev is already the newest version (12.2-4+deb10u1).
portaudio19-dev is already the newest version (19.6.0-1).
python3-h5py is already the newest version (2.8.0-3).
python3-scipy is already the newest version (1.1.0-7).
swig is already the newest version (3.0.12-2).
python3-pip is already the newest version (18.1-5+rpt1).
0 upgraded, 0 newly installed, 0 to remove and 5 not upgraded.
Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
Obtaining file:///home/pi/mycroft-precise/runner
Requirement already satisfied: pyaudio in ./.venv/lib/python3.7/site-packages (from precise-runner==0.3.1) (0.2.11)
Installing collected packages: precise-runner
  Attempting uninstall: precise-runner
    Found existing installation: precise-runner 0.3.1
    Uninstalling precise-runner-0.3.1:
      Successfully uninstalled precise-runner-0.3.1
  Running setup.py develop for precise-runner
Successfully installed precise-runner
Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
Obtaining file:///home/pi/mycroft-precise
Collecting numpy==1.16
  Using cached https://www.piwheels.org/simple/numpy/numpy-1.16.0-cp37-cp37m-linux_armv7l.whl (7.4 MB)
Collecting tensorflow<1.14,>=1.13
  Using cached https://www.piwheels.org/simple/tensorflow/tensorflow-1.13.1-cp37-none-linux_armv7l.whl (93.2 MB)
Collecting sonopy
  Using cached https://www.piwheels.org/simple/sonopy/sonopy-0.1.2-py3-none-any.whl (2.9 kB)
Requirement already satisfied: pyaudio in ./.venv/lib/python3.7/site-packages (from mycroft-precise==0.3.0) (0.2.11)
Collecting keras<=2.1.5
  Using cached Keras-2.1.5-py2.py3-none-any.whl (334 kB)
Collecting h5py
  Using cached https://www.piwheels.org/simple/h5py/h5py-2.10.0-cp37-cp37m-linux_armv7l.whl (4.7 MB)
Collecting wavio
  Using cached wavio-0.0.4-py2.py3-none-any.whl (9.0 kB)
Collecting typing
  Using cached https://www.piwheels.org/simple/typing/typing-3.7.4.3-py3-none-any.whl (28 kB)
Collecting prettyparse>=1.1.0
  Using cached https://www.piwheels.org/simple/prettyparse/prettyparse-1.1.0-py3-none-any.whl (3.7 kB)
Requirement already satisfied: precise-runner in ./runner (from mycroft-precise==0.3.0) (0.3.1)
Collecting attrs
  Using cached attrs-19.3.0-py2.py3-none-any.whl (39 kB)
Collecting fitipy<1.0
  Using cached https://www.piwheels.org/simple/fitipy/fitipy-0.1.2-py3-none-any.whl (1.9 kB)
Collecting speechpy-fast
  Using cached https://www.piwheels.org/simple/speechpy-fast/speechpy_fast-2.4-py3-none-any.whl (8.8 kB)
Collecting pyache
  Using cached pyache-0.2.0-py3-none-any.whl (7.6 kB)
Collecting tensorflow-estimator<1.15.0rc0,>=1.14.0rc0
  Using cached tensorflow_estimator-1.14.0-py2.py3-none-any.whl (488 kB)
Collecting six>=1.10.0
  Using cached six-1.15.0-py2.py3-none-any.whl (10 kB)
Collecting tensorboard<1.14.0,>=1.13.0
  Using cached tensorboard-1.13.1-py3-none-any.whl (3.2 MB)
Collecting keras-applications>=1.0.8
  Using cached Keras_Applications-1.0.8-py3-none-any.whl (50 kB)
Collecting keras-preprocessing>=1.0.5
  Using cached Keras_Preprocessing-1.1.2-py2.py3-none-any.whl (42 kB)
Collecting termcolor>=1.1.0
  Using cached https://www.piwheels.org/simple/termcolor/termcolor-1.1.0-py3-none-any.whl (4.8 kB)
Collecting wrapt>=1.11.1
  Using cached https://www.piwheels.org/simple/wrapt/wrapt-1.12.1-cp37-cp37m-linux_armv7l.whl (68 kB)
Collecting gast>=0.2.0
  Using cached gast-0.4.0-py3-none-any.whl (9.8 kB)
Collecting google-pasta>=0.1.6
  Using cached google_pasta-0.2.0-py3-none-any.whl (57 kB)
Collecting astor>=0.6.0
  Using cached astor-0.8.1-py2.py3-none-any.whl (27 kB)
Collecting grpcio>=1.8.6
  Using cached https://www.piwheels.org/simple/grpcio/grpcio-1.31.0-cp37-cp37m-linux_armv7l.whl (29.8 MB)
Requirement already satisfied: wheel>=0.26 in ./.venv/lib/python3.7/site-packages (from tensorflow<1.14,>=1.13->mycroft-precise==0.3.0) (0.34.2)
Collecting absl-py>=0.7.0
  Using cached https://www.piwheels.org/simple/absl-py/absl_py-0.9.0-py3-none-any.whl (121 kB)
Collecting protobuf>=3.6.1
  Using cached protobuf-3.12.4-py2.py3-none-any.whl (443 kB)
Collecting scipy
  Using cached scipy-1.5.2.tar.gz (25.4 MB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
    Preparing wheel metadata ... done
Collecting pyyaml
  Using cached https://www.piwheels.org/simple/pyyaml/PyYAML-5.3.1-cp37-cp37m-linux_armv7l.whl (44 kB)
Collecting markdown>=2.6.8
  Using cached Markdown-3.2.2-py3-none-any.whl (88 kB)
Collecting werkzeug>=0.11.15
  Using cached Werkzeug-1.0.1-py2.py3-none-any.whl (298 kB)
Requirement already satisfied: setuptools in ./.venv/lib/python3.7/site-packages (from protobuf>=3.6.1->tensorflow<1.14,>=1.13->mycroft-precise==0.3.0) (49.3.1)
Collecting importlib-metadata; python_version < "3.8"
  Using cached importlib_metadata-1.7.0-py2.py3-none-any.whl (31 kB)
Collecting zipp>=0.5
  Using cached zipp-3.1.0-py3-none-any.whl (4.9 kB)
Building wheels for collected packages: scipy
  Building wheel for scipy (PEP 517) ... error
  ERROR: Command errored out with exit status 1:
   command: /home/pi/mycroft-precise/.venv/bin/python /home/pi/mycroft-precise/.venv/lib/python3.7/site-packages/pip/_vendor/pep517/_in_process.py build_wheel /tmp/tmpznsbqwrx
       cwd: /tmp/pip-install-t8aasp1f/scipy
  Complete output (696 lines):
  lapack_opt_info:
  lapack_mkl_info:
  customize UnixCCompiler
    libraries mkl_rt not found in ['/home/pi/mycroft-precise/.venv/lib', '/usr/local/lib', '/usr/lib', '/usr/lib/arm-linux-gnueabihf']
    NOT AVAILABLE
  
  openblas_lapack_info:
  customize UnixCCompiler
  customize UnixCCompiler
  customize UnixCCompiler
  C compiler: arm-linux-gnueabihf-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC
  
  creating /tmp/tmp16otweue/tmp
  creating /tmp/tmp16otweue/tmp/tmp16otweue
  compile options: '-c'
  arm-linux-gnueabihf-gcc: /tmp/tmp16otweue/source.c
  arm-linux-gnueabihf-gcc -pthread /tmp/tmp16otweue/tmp/tmp16otweue/source.o -lopenblas -o /tmp/tmp16otweue/a.out
  customize UnixCCompiler
    FOUND:
      libraries = ['openblas', 'openblas']
      library_dirs = ['/usr/lib/arm-linux-gnueabihf']
      language = c
      define_macros = [('HAVE_CBLAS', None)]
  
    FOUND:
      libraries = ['openblas', 'openblas']
      library_dirs = ['/usr/lib/arm-linux-gnueabihf']
      language = c
      define_macros = [('HAVE_CBLAS', None)]
  
  blas_opt_info:
  blas_mkl_info:
  customize UnixCCompiler
    libraries mkl_rt not found in ['/home/pi/mycroft-precise/.venv/lib', '/usr/local/lib', '/usr/lib', '/usr/lib/arm-linux-gnueabihf']
    NOT AVAILABLE
  
  blis_info:
  customize UnixCCompiler
    libraries blis not found in ['/home/pi/mycroft-precise/.venv/lib', '/usr/local/lib', '/usr/lib', '/usr/lib/arm-linux-gnueabihf']
    NOT AVAILABLE
  
  openblas_info:
  customize UnixCCompiler
  customize UnixCCompiler
  customize UnixCCompiler
    FOUND:
      libraries = ['openblas', 'openblas']
      library_dirs = ['/usr/lib/arm-linux-gnueabihf']
      language = c
      define_macros = [('HAVE_CBLAS', None)]
  
    FOUND:
      libraries = ['openblas', 'openblas']
      library_dirs = ['/usr/lib/arm-linux-gnueabihf']
      language = c
      define_macros = [('HAVE_CBLAS', None)]
  
  [makenpz] scipy/special/tests/data/boost.npz not rebuilt
  [makenpz] scipy/special/tests/data/gsl.npz not rebuilt
  [makenpz] scipy/special/tests/data/local.npz not rebuilt
  non-existing path in 'scipy/signal/windows': 'tests'
  running bdist_wheel
  running build
  running config_cc
  unifing config_cc, config, build_clib, build_ext, build commands --compiler options
  running config_fc
  unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
  running build_src
  build_src
  building py_modules sources
  building library "mach" sources
  building library "quadpack" sources
  building library "lsoda" sources
  building library "vode" sources
  building library "dop" sources
  building library "fitpack" sources
  building library "fwrappers" sources
  building library "odrpack" sources
  building library "minpack" sources
  building library "rectangular_lsap" sources
  building library "rootfind" sources
  building library "superlu_src" sources
  building library "arpack_scipy" sources
  building library "sc_cephes" sources
  building library "sc_mach" sources
  building library "sc_amos" sources
  building library "sc_cdf" sources
  building library "sc_specfun" sources
  building library "statlib" sources
  building extension "scipy.cluster._vq" sources
  building extension "scipy.cluster._hierarchy" sources
  building extension "scipy.cluster._optimal_leaf_ordering" sources
  building extension "scipy.fft._pocketfft.pypocketfft" sources
  building extension "scipy.fftpack.convolve" sources
  building extension "scipy.integrate._quadpack" sources
  building extension "scipy.integrate._odepack" sources
  building extension "scipy.integrate.vode" sources
  f2py options: []
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/integrate/fortranobject.c' to sources.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/integrate' to include_dirs.
    adding 'build/src.linux-armv7l-3.7/scipy/integrate/vode-f2pywrappers.f' to sources.
  building extension "scipy.integrate.lsoda" sources
  f2py options: []
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/integrate/fortranobject.c' to sources.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/integrate' to include_dirs.
    adding 'build/src.linux-armv7l-3.7/scipy/integrate/lsoda-f2pywrappers.f' to sources.
  building extension "scipy.integrate._dop" sources
  f2py options: []
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/integrate/fortranobject.c' to sources.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/integrate' to include_dirs.
    adding 'build/src.linux-armv7l-3.7/scipy/integrate/_dop-f2pywrappers.f' to sources.
  building extension "scipy.integrate._test_multivariate" sources
  building extension "scipy.integrate._test_odeint_banded" sources
  f2py options: []
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/integrate/fortranobject.c' to sources.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/integrate' to include_dirs.
    adding 'build/src.linux-armv7l-3.7/scipy/integrate/_test_odeint_banded-f2pywrappers.f' to sources.
  building extension "scipy.interpolate.interpnd" sources
  building extension "scipy.interpolate._ppoly" sources
  building extension "scipy.interpolate._bspl" sources
  building extension "scipy.interpolate._fitpack" sources
  building extension "scipy.interpolate.dfitpack" sources
  f2py options: []
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/interpolate/src/fortranobject.c' to sources.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/interpolate/src' to include_dirs.
    adding 'build/src.linux-armv7l-3.7/scipy/interpolate/src/dfitpack-f2pywrappers.f' to sources.
  building extension "scipy.io._test_fortran" sources
  f2py options: []
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/io/fortranobject.c' to sources.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/io' to include_dirs.
  building extension "scipy.io.matlab.streams" sources
  building extension "scipy.io.matlab.mio_utils" sources
  building extension "scipy.io.matlab.mio5_utils" sources
  building extension "scipy.linalg._fblas" sources
  f2py options: []
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/linalg/fortranobject.c' to sources.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/linalg' to include_dirs.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/linalg/_fblas-f2pywrappers.f' to sources.
  building extension "scipy.linalg._flapack" sources
  f2py options: []
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/linalg/fortranobject.c' to sources.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/linalg' to include_dirs.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/linalg/_flapack-f2pywrappers.f' to sources.
  building extension "scipy.linalg._flinalg" sources
  f2py options: []
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/linalg/fortranobject.c' to sources.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/linalg' to include_dirs.
  building extension "scipy.linalg._interpolative" sources
  f2py options: []
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/linalg/fortranobject.c' to sources.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/linalg' to include_dirs.
  building extension "scipy.linalg._solve_toeplitz" sources
  building extension "scipy.linalg.cython_blas" sources
  building extension "scipy.linalg.cython_lapack" sources
  building extension "scipy.linalg._decomp_update" sources
  building extension "scipy.odr.__odrpack" sources
  building extension "scipy.optimize._minpack" sources
  building extension "scipy.optimize._lsap_module" sources
  building extension "scipy.optimize._zeros" sources
  building extension "scipy.optimize._lbfgsb" sources
  f2py options: []
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/optimize/lbfgsb_src/fortranobject.c' to sources.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/optimize/lbfgsb_src' to include_dirs.
    adding 'build/src.linux-armv7l-3.7/scipy/optimize/lbfgsb_src/_lbfgsb-f2pywrappers.f' to sources.
  building extension "scipy.optimize.moduleTNC" sources
  building extension "scipy.optimize._cobyla" sources
  f2py options: []
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/optimize/cobyla/fortranobject.c' to sources.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/optimize/cobyla' to include_dirs.
  building extension "scipy.optimize.minpack2" sources
  f2py options: []
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/optimize/minpack2/fortranobject.c' to sources.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/optimize/minpack2' to include_dirs.
  building extension "scipy.optimize._slsqp" sources
  f2py options: []
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/optimize/slsqp/fortranobject.c' to sources.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/optimize/slsqp' to include_dirs.
  building extension "scipy.optimize.__nnls" sources
  f2py options: []
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/optimize/__nnls/fortranobject.c' to sources.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/optimize/__nnls' to include_dirs.
  building extension "scipy.optimize._group_columns" sources
  building extension "scipy.optimize._bglu_dense" sources
  building extension "scipy.optimize._lsq.givens_elimination" sources
  building extension "scipy.optimize._trlib._trlib" sources
  building extension "scipy.optimize.cython_optimize._zeros" sources
  building extension "scipy.signal.sigtools" sources
  building extension "scipy.signal._spectral" sources
  building extension "scipy.signal._max_len_seq_inner" sources
  building extension "scipy.signal._peak_finding_utils" sources
  building extension "scipy.signal._sosfilt" sources
  building extension "scipy.signal._upfirdn_apply" sources
  building extension "scipy.signal.spline" sources
  building extension "scipy.sparse.linalg.isolve._iterative" sources
  f2py options: []
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/sparse/linalg/isolve/iterative/fortranobject.c' to sources.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/sparse/linalg/isolve/iterative' to include_dirs.
  building extension "scipy.sparse.linalg.dsolve._superlu" sources
  building extension "scipy.sparse.linalg.eigen.arpack._arpack" sources
  f2py options: []
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/sparse/linalg/eigen/arpack/fortranobject.c' to sources.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/sparse/linalg/eigen/arpack' to include_dirs.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/sparse/linalg/eigen/arpack/_arpack-f2pywrappers.f' to sources.
  building extension "scipy.sparse.csgraph._shortest_path" sources
  building extension "scipy.sparse.csgraph._traversal" sources
  building extension "scipy.sparse.csgraph._min_spanning_tree" sources
  building extension "scipy.sparse.csgraph._matching" sources
  building extension "scipy.sparse.csgraph._flow" sources
  building extension "scipy.sparse.csgraph._reordering" sources
  building extension "scipy.sparse.csgraph._tools" sources
  building extension "scipy.sparse._csparsetools" sources
  building extension "scipy.sparse._sparsetools" sources
  [generate_sparsetools] 'scipy/sparse/sparsetools/bsr_impl.h' already up-to-date
  [generate_sparsetools] 'scipy/sparse/sparsetools/csr_impl.h' already up-to-date
  [generate_sparsetools] 'scipy/sparse/sparsetools/csc_impl.h' already up-to-date
  [generate_sparsetools] 'scipy/sparse/sparsetools/other_impl.h' already up-to-date
  [generate_sparsetools] 'scipy/sparse/sparsetools/sparsetools_impl.h' already up-to-date
  building extension "scipy.spatial.qhull" sources
  building extension "scipy.spatial.ckdtree" sources
  building extension "scipy.spatial._distance_wrap" sources
  building extension "scipy.spatial._voronoi" sources
  building extension "scipy.spatial._hausdorff" sources
  building extension "scipy.special.specfun" sources
  f2py options: ['--no-wrap-functions']
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/special/fortranobject.c' to sources.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/special' to include_dirs.
  building extension "scipy.special._ufuncs" sources
  building extension "scipy.special._ufuncs_cxx" sources
  building extension "scipy.special._ellip_harm_2" sources
  building extension "scipy.special.cython_special" sources
  building extension "scipy.special._comb" sources
  building extension "scipy.special._test_round" sources
  building extension "scipy.stats.statlib" sources
  f2py options: ['--no-wrap-functions']
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/stats/fortranobject.c' to sources.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/stats' to include_dirs.
  building extension "scipy.stats._stats" sources
  building extension "scipy.stats.mvn" sources
  f2py options: []
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/stats/fortranobject.c' to sources.
    adding 'build/src.linux-armv7l-3.7/build/src.linux-armv7l-3.7/scipy/stats' to include_dirs.
    adding 'build/src.linux-armv7l-3.7/scipy/stats/mvn-f2pywrappers.f' to sources.
  building extension "scipy.ndimage._nd_image" sources
  building extension "scipy.ndimage._ni_label" sources
  building extension "scipy.ndimage._ctest" sources
  building extension "scipy.ndimage._ctest_oldapi" sources
  building extension "scipy.ndimage._cytest" sources
  building extension "scipy._lib._ccallback_c" sources
  building extension "scipy._lib._test_ccallback" sources
  building extension "scipy._lib._fpumode" sources
  building extension "scipy._lib.messagestream" sources
  get_default_fcompiler: matching types: '['gnu95', 'intel', 'lahey', 'pg', 'absoft', 'nag', 'vast', 'compaq', 'intele', 'intelem', 'gnu', 'g95', 'pathf95', 'nagfor']'
  customize Gnu95FCompiler
  Could not locate executable gfortran
  Could not locate executable f95
  customize IntelFCompiler
  Could not locate executable ifort
  Could not locate executable ifc
  customize LaheyFCompiler
  Could not locate executable lf95
  customize PGroupFCompiler
  Could not locate executable pgfortran
  customize AbsoftFCompiler
  Could not locate executable f90
  Could not locate executable f77
  customize NAGFCompiler
  customize VastFCompiler
  customize CompaqFCompiler
  Could not locate executable fort
  customize IntelItaniumFCompiler
  Could not locate executable efort
  Could not locate executable efc
  customize IntelEM64TFCompiler
  customize GnuFCompiler
  Could not locate executable g77
  customize G95FCompiler
  Could not locate executable g95
  customize PathScaleFCompiler
  Could not locate executable pathf95
  customize NAGFORCompiler
  Could not locate executable nagfor
  don't know how to compile Fortran code on platform 'posix'
  C compiler: arm-linux-gnueabihf-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC
  
  compile options: '-I/home/pi/mycroft-precise/.venv/include -I/usr/include/python3.7m -c'
  arm-linux-gnueabihf-gcc: _configtest.c
  arm-linux-gnueabihf-gcc -pthread _configtest.o -o _configtest
  success!
  removing: _configtest.c _configtest.o _configtest
  building extension "scipy._lib._test_deprecation_call" sources
  building extension "scipy._lib._test_deprecation_def" sources
  building extension "scipy._lib._uarray._uarray" sources
  building data_files sources
  build_src: building npy-pkg config files
  running build_py
  creating build/lib.linux-armv7l-3.7
  creating build/lib.linux-armv7l-3.7/scipy
  copying scipy/__init__.py -> build/lib.linux-armv7l-3.7/scipy
  copying scipy/_distributor_init.py -> build/lib.linux-armv7l-3.7/scipy
  copying scipy/conftest.py -> build/lib.linux-armv7l-3.7/scipy
  copying scipy/version.py -> build/lib.linux-armv7l-3.7/scipy
  copying scipy/setup.py -> build/lib.linux-armv7l-3.7/scipy
  copying build/src.linux-armv7l-3.7/scipy/__config__.py -> build/lib.linux-armv7l-3.7/scipy
..................

..................
  customize UnixCCompiler
  customize UnixCCompiler using build_clib
  building 'mach' library
  Running from SciPy source directory.
  /tmp/pip-build-env-3zkfiuu3/overlay/lib/python3.7/site-packages/numpy/distutils/system_info.py:716: UserWarning: Specified path /tmp/pip-build-env-3zkfiuu3/overlay/include/python3.7m is invalid.
    return self.get_paths(self.section, key)
  /tmp/pip-build-env-3zkfiuu3/overlay/lib/python3.7/site-packages/numpy/distutils/system_info.py:716: UserWarning: Specified path /usr/local/include/python3.7m is invalid.
    return self.get_paths(self.section, key)
  /tmp/pip-build-env-3zkfiuu3/overlay/lib/python3.7/site-packages/numpy/distutils/system_info.py:716: UserWarning: Specified path /home/pi/mycroft-precise/.venv/include/python3.7m is invalid.
    return self.get_paths(self.section, key)
  error: library mach has Fortran sources but no Fortran compiler found
  ----------------------------------------
  ERROR: Failed building wheel for scipy
Failed to build scipy
ERROR: Could not build wheels for scipy which use PEP 517 and cannot be installed directly

I tried different things found via Google but none of them worked. @JGKK, have you been able to set up/train a custom wake word (Raspi 4)?

EDIT: Solved by editing the install script "setup.sh" at the line:
"if [ ! -x "$VENV/bin/python" ]; then python3 -m venv "$VENV" --without-pip; fi"
to
"if [ ! -x "$VENV/bin/python" ]; then python -m venv "$VENV" --without-pip; fi"
(As a note to those who may find this via Google.)
What a mess they have made with this Python 2.x and 3.x stuff...

They are two different products. The v2 is a USB mic and not a Pi hat, but both are good :+1:

If you use something like sox, there is a gain parameter you can set while recording, but in my tests the 4-mic Pi hat from ReSpeaker worked fine without any extra gain in combination with things like voice2json.
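
For reference, a recording test with a gain boost could look roughly like this with sox's rec tool (the 8 dB value is just an arbitrary example):

```
rec -r 16000 -c 1 -b 16 test.wav gain 8
```

but as said, with the 4-mic Pi hat I never needed the extra gain.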

Yes, I have actually trained multiple wake words on a Raspberry Pi 4. I didn't have any problems installing on Buster; I pretty much just followed their installation steps for installing from source in the Precise repository and that worked for me. But I agree, I'm not the biggest fan of the way they went with a Python venv and so on, though I think this is all due to it being based on TensorFlow.

Johannes

2 Likes