A workable kludge for doing audio out while using Dashboard 2

Hello,

This is not a question so much as me explaining my workaround for anyone with this specific use case.

In Dashboard 1, I used the audio-ui thingy to give voice notifications in the lounge, and next to my bed, eg:

  • I'm shutting down pump #2 because it's sucking air
  • Irrigation reservoir is at XX capacity.
  • Battery bank at XX%

And so on. I will now also be using it for security prompts because thieving bastards etc etc.

Dashboard 2 doesn't have a dedicated audio-ui node yet, which I discovered after rebuilding my UI (absolutely needed to, don't regret it at all). I suspect that cleverer minds could find a way to do it with a template-ui, but I am not yet that ninja.

My kludge solution is to use the Dashboard 1 node, open in a second browser tab that's not visible. Lets me fullscreen dashboard 2, but still hear the audio from dashboard 1.

It works great, although it does require having both dashboards installed, which I'm sure will be no end of trouble at some point.

Hope someone else finds this helpful as a last resort.

Mods feel free to mark this solved, it's not an issue as such, I just couldn't come up with a better way to do it yet.

Thanks for the post - we do have Widget: Audio · Issue #52 · FlowFuse/node-red-dashboard · GitHub to track this, but it hasn't had much interest/demand, so it's fairly low on the priority list at the moment.

Hi @joepavitt,

I'm an avid user - very helpful node indeed.

My kludge still works, so I'd say put your efforts elsewhere for now. I'd help if I could, but my coding is rudimentary at best. I words quite good if you need documentation help, though.

Less of a kludge: I use the "Espeak" speech synthesizer, launched from an exec node. I wanted my audio alerts not to depend on a browser window being open.

I like your solution because I can use it on a little rpi next to my bed. Elegant.

Server lives in the pantry where no one minds it sounding like it's trying to take off.

I have been using the same kludge keeping both dashboards open for a while now.
I am interested in the other workaround as well.

This sounds interesting.

Could you elaborate on how to do this?

I.e. What is espeak?
Does it need a separate installation etc.?
Does the sound come out of the device where the dashboard is being displayed, or out of the host machine running the Node-RED server?

Espeak is a Linux command-line app that does text-to-speech, so it runs on the server side, not in the browser. It's usually installed using apt.

Yes, and as no_mpimpi has mentioned, you can run it on a dedicated Raspberry Pi located where you need to hear the messages, and trigger the alerts via MQTT messages from anywhere else on your network.
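For anyone wanting to fire those alerts from another machine, here's a minimal Python sketch of that MQTT trigger (an assumption on my part, not from the flows in this thread: it uses the paho-mqtt package and a hypothetical hostname; any MQTT client would do). The extra quoting matters because the exec node appends msg.payload straight onto the espeak command line, so multi-word messages need to arrive already wrapped in quotes.

```python
def make_alert(text, topic="Espeak"):
    # The exec node appends msg.payload to the espeak command line,
    # so a multi-word message must already be wrapped in quotes.
    # (Naively strips any existing single quotes - fine for short alerts.)
    return topic, "'" + text.replace("'", "") + "'"

def publish_alert(host, text, topic="Espeak"):
    # Requires paho-mqtt (pip install paho-mqtt); imported here so the
    # helper above still works without the package installed.
    import paho.mqtt.publish as publish
    t, payload = make_alert(text, topic)
    publish.single(t, payload=payload, hostname=host)

# Example (hypothetical hostname):
# publish_alert("pantry-pi.local", "Person detected on new camera")
```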

My wife complains about its sorta hybrid Chinese-Indian accent but it works very well. I've not played around much with its "voices", I'm using the default.

Actually what you want is espeak-ng (next generation) which some (most) distributions package up as simply espeak.

https://en.wikipedia.org/wiki/ESpeak

It runs on the machine running Node-RED via an exec node; here is a simple flow I run on a Raspberry Pi 2:

[
    {
        "id": "d804870f.124488",
        "type": "change",
        "z": "7fdf004e.28b08",
        "name": "unspecified",
        "rules": [
            {
                "t": "set",
                "p": "payload",
                "pt": "msg",
                "to": "\"'Person Detected on New Camera'\"",
                "tot": "str"
            },
            {
                "t": "set",
                "p": "topic",
                "pt": "msg",
                "to": "Espeak",
                "tot": "str"
            }
        ],
        "action": "",
        "property": "",
        "from": "",
        "to": "",
        "reg": false,
        "x": 1150,
        "y": 445,
        "wires": [
            [
                "1bda63c8.b77b4c"
            ]
        ]
    },
    {
        "id": "1bda63c8.b77b4c",
        "type": "delay",
        "z": "7fdf004e.28b08",
        "name": "",
        "pauseType": "rate",
        "timeout": "5",
        "timeoutUnits": "seconds",
        "rate": "1",
        "nbRateUnits": "4",
        "rateUnits": "second",
        "randomFirst": "1",
        "randomLast": "5",
        "randomUnits": "seconds",
        "drop": true,
        "outputs": 1,
        "x": 1340,
        "y": 285,
        "wires": [
            [
                "ea43c065.505fb"
            ]
        ]
    },
    {
        "id": "91ec5895.5a45d8",
        "type": "mqtt in",
        "z": "7fdf004e.28b08",
        "name": "espeak",
        "topic": "Espeak",
        "qos": "2",
        "broker": "313c03c1.be782c",
        "inputs": 0,
        "x": 1325,
        "y": 215,
        "wires": [
            [
                "79a966b3.dee288",
                "ea43c065.505fb"
            ]
        ]
    },
    {
        "id": "b1e96bf1.90dd28",
        "type": "inject",
        "z": "7fdf004e.28b08",
        "name": "speak test",
        "repeat": "",
        "crontab": "",
        "once": false,
        "onceDelay": 0.1,
        "topic": "Espeak",
        "payload": "'\"Multi word test\"'",
        "payloadType": "str",
        "x": 1345,
        "y": 165,
        "wires": [
            [
                "ea43c065.505fb",
                "79a966b3.dee288"
            ]
        ]
    },
    {
        "id": "79a966b3.dee288",
        "type": "exec",
        "z": "7fdf004e.28b08",
        "command": "/usr/bin/espeak",
        "addpay": true,
        "append": "",
        "useSpawn": "false",
        "timer": "",
        "oldrc": true,
        "name": "Espeak",
        "x": 1550,
        "y": 285,
        "wires": [
            [
                "7bd6ee8b.f3e1a"
            ],
            [
                "7bd6ee8b.f3e1a"
            ],
            [
                "7bd6ee8b.f3e1a"
            ]
        ]
    },
    {
        "id": "ea43c065.505fb",
        "type": "debug",
        "z": "7fdf004e.28b08",
        "name": "",
        "active": true,
        "tosidebar": true,
        "console": false,
        "tostatus": false,
        "complete": "true",
        "x": 1550,
        "y": 215,
        "wires": []
    },
    {
        "id": "7bd6ee8b.f3e1a",
        "type": "debug",
        "z": "7fdf004e.28b08",
        "name": "",
        "active": true,
        "tosidebar": true,
        "console": false,
        "tostatus": false,
        "complete": "true",
        "x": 1720,
        "y": 285,
        "wires": []
    },
    {
        "id": "313c03c1.be782c",
        "type": "mqtt-broker",
        "name": "localhost:1883",
        "broker": "localhost",
        "port": "1883",
        "clientid": "",
        "usetls": false,
        "compatmode": true,
        "keepalive": "60",
        "cleansession": true,
        "birthTopic": "",
        "birthQos": "0",
        "birthRetain": "false",
        "birthPayload": "",
        "closeTopic": "",
        "closePayload": "",
        "willTopic": "",
        "willQos": "0",
        "willRetain": "false",
        "willPayload": ""
    }
]

It can take messages from the local server or from other systems via MQTT; I've used it both ways. You install it via apt. Once you have your Pi set up to play audio, you can do: sudo apt install espeak and then test it in a terminal window with: espeak "Hello World"
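If you'd rather sidestep the exec node's string splicing altogether, the same call can be wrapped in a small Python helper (a sketch, not part of the flow above; it assumes espeak lives at /usr/bin/espeak and uses its standard -v voice and -s speed flags). Passing an argument list instead of a shell string avoids the multi-word quoting dance entirely:

```python
import subprocess

def espeak_cmd(text, voice="en", speed=160):
    # Build the argument list for espeak. Using a list (no shell)
    # means multi-word messages need no extra quoting.
    # -v selects the voice, -s sets speed in words per minute.
    return ["/usr/bin/espeak", "-v", voice, "-s", str(speed), text]

def say(text, **kw):
    # Fire-and-forget; raises CalledProcessError if espeak exits non-zero.
    subprocess.run(espeak_cmd(text, **kw), check=True)

# say("Irrigation reservoir is at 40 percent capacity")
```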

Thank you for sharing both the idea and the code.

Sorry this was to be a private message, for a different thread. Dumb me.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.