Choosing an architecture for Home Automation

Thought I'd start a conversation about choosing a good, sustainable architecture to run Node-RED alongside HomeAssistant. This is a popular combination specifically for Home Automation, and for good reason. There seem to be many different options and considerations, which I've started to list below, and I would appreciate your input.

My main focus is on being able to restore in the event of an outage. Specifically, I am always mindful that I'm running my home on a system that would fall flat on its face if I suffered injury or death - subjects that don't usually come up in forums like this. See the excellent blog / video on the topic by Jonathan Oxer (Superhouse.tv).

TL;DR - skip to the list at the end!

Why HomeAssistant? Right now I have pretty much everything working in Node-RED, from dimming multiple lights and scene control in each room to electric blankets, multi-room audio, heating, CCTV, and touchscreen control panels. Whilst I've found Node-RED fantastic as a standalone home automation system, it has some weaknesses on its own. First, it's not targeted specifically at home automation; Node-RED's flexibility is an asset, but HomeAssistant can do a lot of the "heavy lifting": storing state, grouping items, automatically discovering devices, and more. Secondly, I've found some of the nodes are not perfect. In most cases I have been extremely impressed by how quickly Node-RED has let me just play and get stuff working - but not in all cases!

Let me illustrate those 2 points. I use Squeezebox extensively for multi-room audio. The SB plugin for Node-RED has left me scratching my head for hours: at first it's simple to send control messages to players, but then it's very difficult to get the track name and artist back. Then I did a test install of HomeAssistant. Not only did it automatically find my Squeezebox server on the network, it immediately showed me what was playing in each room, complete with a nice graphic of the cover art and the ability to send text-to-speech to a given player built right into the UI. That would have taken 4-5 hours or more - and a lot of coding beyond what's already available - to get working in Node-RED.

So, onto my musings / questions about architecture.

  • Server architecture. I am personally not convinced by the Raspberry Pi. It's great for tinkering, but those of us who need something more sustainable and resilient need a server with RAID, dual power supplies, etc. - probably a virtual server that could be rebuilt in 10 or even 15 years, with backups and updates in the meantime. For me this means Hyper-V running Linux, Win Server, or both, with backups taken at the VM level to a local NAS. I'm a Win Server guy, I socialise with similar people, and I feel most comfortable in that world. Someone told me to give up Hyper-V and use Proxmox; I tested it but wasn't too impressed. I'd like to hear more thoughts - any good ideas or suggestions here?

  • How to install HomeAssistant and Node-RED together. This is where there are almost too many options:

    • Node-RED and HomeAssistant both in the same Linux VM?
    • Or in different Linux VMs?
    • Node-RED and HomeAssistant in Docker containers (on a VM)? Is this a way to create a templated installation? I don't really understand Docker yet.
    • Node-RED running as a "plugin" for HomeAssistant using "the NotoriousBDG addon". This is suggested by popular YouTubers like DrZzs in this vid. I haven't tested it, but I feel there may be some conflicts, and a third-party plugin creates an unnecessary dependency.
    • What about installing Node-RED as per the documentation (Alternative install methods) and HomeAssistant using its own documentation, then hooking them together with node-red-contrib-home-assistant? This seems like the most "native" way to do it. It obviously requires both to be set to run at startup.
    • Actually I opted for a slightly different install method (this YouTube vid) - nice clear instructions, including configuring Node-RED as a system service with systemctl (a minimal sketch of such a unit follows this list). However, I found that when I try to install node-red-contrib-home-assistant I get "Install failed, check the log file". When I tried to check the log file, I read that logs are only output to the console. There are alternative methods, but I couldn't really determine what they were from the docs.
    • Should HomeAssistant be run in a Python 3 virtual environment, and what are the benefits?
  • What about backing up? My default preference for backing anything and everything up is to take full VM backups of a machine from the Hyper-V server, using e.g. Windows Server Backup. I realise I'm probably not speaking the same language as many who read this! But it's incredibly stable and test restores work perfectly.

    • I read about Node-RED Projects, and I was specifically interested in the phrase on that page "create a redistributable Node-RED application". What exactly does this mean? It sounds very impressive to be able to essentially back your Node-RED up to GitHub - is that correct? If you could put yourself in the shoes of a user who may not be familiar with GitHub (I am familiar, but I'm thinking about compiling systems for others to maintain), is this a good sustainable plan? Is it the kind of thing that could be documented clearly for a non-developer to implement a restore?
    • What about @scargill's Node-RED script? Does this go some way to addressing some of the questions above?
    • As well as backing up the whole infrastructure (for server failure), what about really clean ways of backing up settings (for user error)? Previously I ran openHAB on Windows, with the entire openHAB directory stored in Dropbox and synced to the server, so any changes made to configuration files were continuously backed up to the cloud. This is my personal favourite way of working. Is there a way to implement this with some elegance or simplicity in Linux, with Node-RED and HomeAssistant?
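
On the systemctl point above, a minimal sketch of a systemd unit for starting Node-RED at boot (the user name, paths, and memory flag are illustrative assumptions, not from any particular guide):

    # Create the unit file (assumes a global Node-RED install and a "pi" user)
    sudo tee /etc/systemd/system/node-red.service > /dev/null <<'EOF'
    [Unit]
    Description=Node-RED
    After=network-online.target

    [Service]
    Type=simple
    User=pi
    WorkingDirectory=/home/pi
    ExecStart=/usr/bin/env node-red --max-old-space-size=256
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target
    EOF

    # Reload systemd, then start Node-RED now and on every boot
    sudo systemctl daemon-reload
    sudo systemctl enable --now node-red.service

HomeAssistant can be given an equivalent unit, which would cover the "both set to run at startup" requirement.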

Finally, I'm interested in all architecture decisions. For example, one suggestion might be "no, don't mess around with HomeAssistant - life will be simpler if you just use Node-RED on its own"!

Any other points I've missed on this topic?

I disagree with your premise that the Pi is not up to the task. I have several that have been running continuously for many, many months without intervention. I'm sure others here have had theirs running longer. If required you can also cluster them (for fail-over) if you see that as a necessity. Personally I don't.

Even if resiliency is seen as an issue with the Pi, its advantages far outweigh it IMO. The small footprint (physical, electrical, and cost) and ease of use make them such a "no-brainer" when it comes to server ops in an IoT environment.

I couldn't imagine running a server/PC continuously just for home automation when an SBC will do the job with ease.

Cheers


Good topic @hazimat - I've been going over the same thinking for the last 2 years. I don't have HomeAssistant, but then I don't have such a complex home as yours either.

I have been mulling over this same problem for 2 years, and as a result started using Scargill's script, then started editing it... heavily... and over time realised I needed a better approach to rebuilding the platforms. (I also host industrial apps using NR & MariaDB - all on Pis.)

After more research I dropped Scargill’s adapted script for Ansible, and love it. I also use Docker, and NR Projects.

Btw - my Pis are brilliant: 4 stacked, and my industrial apps have now been running for over 7 months without intervention - 7 sites, one MariaDB, all in separate Docker instances, talking to one another on a Docker network, protected by a reverse proxy that is also in Docker.

I document most of my work at the link below; look at the menu script that invokes the Ansible playbooks, the Docker one, and the others.

Btw, I am rebuilding my home automation, trying to use the data configs of the sensors to drive most of the full system config and make my setup more transportable - some family members have asked for a setup, so I want maintenance-free / maintenance-light systems for them. I will publish that once I am done. I had a look at how HomeKit and the AWS IoT platform work, with the Shadow service etc., and I'm bringing some of those concepts into my version of an NR configurable system, where most config starts with the config of a things registry.

Hope this helps.


Ohhh, regarding the Pis: I live in South Africa and we've had many power failures of late. I didn't have a UPS on them until a few days ago, yet they recovered beautifully every single time - even the MariaDB instance.

Furthermore, what's cool about Ansible configuring Docker via my Ansible menu system (to stick with the free version of Ansible) on the Pis, combined with NR Projects on Bitbucket, is that I've adapted the same Ansible/Docker config to install the same images and code on Docker running on my Mac - so it's easy to do maintenance / new development even when away from the office.

Hey @Bobo, I think clustering would solve my main issue with the Pi - that on its own it has no failover / redundancy. Perhaps I should go down this route; I have a few Pis, so it would be easy to test. I'm most interested in how @IoTPlay stacks the Pis - are they in a failover configuration? I'm also very interested in how @IoTPlay uses Ansible. Is it just for configuration management? I've felt that configuration management is the one piece of the puzzle currently missing for me, and I'd never heard of Ansible until today...

Yes, Ansible to config everything on the Pi. I start with a fresh Pi - only Raspbian Lite (headless) - and just press a menu option for the full config in the Ansible menu. Ansible also has this cool feature where it holds passwords and keys in separate vault YAML files that are encrypted; I was always worried about how to handle these...
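
For reference, that vault workflow is just a couple of commands (the file names here are illustrative):

    # Create an encrypted vault file for secrets - you'll be prompted
    # for a vault password, and the file is stored encrypted on disk
    ansible-vault create group_vars/all/vault.yml

    # Edit the encrypted file later
    ansible-vault edit group_vars/all/vault.yml

    # Run a playbook that references vaulted variables
    ansible-playbook site.yml --ask-vault-pass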

Just adding my $.02 here.

After initially going through a Vera-based system, I moved over to OpenHab and eventually discovered Node-RED. Once I had migrated everything but Z-Wave off of OpenHab, I finally settled on HomeSeer for my Z-Wave, since it had more robust Z-Wave support than OpenHab (and OpenZWave didn't work with all my devices), MQTT as my communication bus (some of my other interfaces run off MQTT vs. native to Node-RED), and Node-RED as my rules engine.

Like many here, my system is anchored on Raspberry Pis (one for HomeSeer and one for MQTT, Node-RED, and some other system interfaces to MQTT). I've found them to be extremely stable, though I do have concerns about the SD cards eventually wearing out, so I've been thinking about implementing network booting so I can use my RAID NAS as my core system storage.

Having a low WAF in my house, I do still run into some random issues (though I'm hoping trading out my Z-Wave stick last week will resolve some...). To your point: if I wasn't here to maintain it long term, I'm not sure how that would play out (though I've avoided any devices that won't work on a standalone basis if the controller went permanently offline). If I were building around that, I think I'd go commercial with the Wink 2 or Samsung given the easy interfaces, but I'm not willing to give up all my Node-RED integrations to go that route today.


Nice topic.

I currently have a mixture of HomeSeer, Home Assistant, and NR, all running on Pis.

I have also written an interface in NR to my Nuvo home speaker system that connects it to both a Sonos Connect and a Nuvo Player.

I have played with openHAB as well.

My current plan is to settle on a combination of Home Assistant and NR. HomeSeer works great, but the UI stinks, costs too much, and looks like the 1980s. Too bad...

I have been running all of these on Pis and I have never had a single reliability issue. I DO back the Pis up every night to a local system that is connected with OneDrive.

I was hoping to start a collaboration effort a while back to create the ultimate NR HA system. Set up a server and everything, but then I had a change in my work environment and could not continue it. If anyone wishes to start up a similar project, I would be more than happy to participate.

Just some sharings...

Joe

While the Pi is a great and very reliable platform, it isn't totally reliable without some help. We know the SD cards are prone to corruption, especially with power outages, so a good power supply with a UPS is absolutely necessary - and even then, my old Pi 2 live setup can't cope with the UPS switching to battery and back for some weird reason: the network doesn't stay connected. I now have a reboot flow built so that if the connection to the router is lost for 60s, the Pi is rebooted.
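
Outside of Node-RED, the same watchdog idea can be sketched as a shell script driven by cron (the router address and timings here are assumptions, not the poster's actual flow):

    #!/bin/bash
    # Reboot if the router has been unreachable for roughly 60 seconds.
    # Intended to be run from root's crontab every couple of minutes.
    ROUTER=192.168.1.1   # assumed router address - adjust to suit

    for attempt in 1 2 3 4 5 6; do
        if ping -c 1 -W 2 "$ROUTER" > /dev/null 2>&1; then
            exit 0       # network is fine, nothing to do
        fi
        sleep 10
    done

    logger "watchdog: router unreachable for 60s, rebooting"
    /sbin/reboot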

I've tried running various things on my NAS but found that the NAS really does need to be dedicated to what it is designed for - storage and backup. Unless you have a powerful NAS, a large backup takes up all the memory and processor. So I keep everything off my Synology 412+.

Of course, all HA requires power so you need to make sure that all of the communications elements keep going if power is interrupted. Again, the Pi is generally great here (though see my odd problem above) since it requires very little power, even a small PC UPS is more than enough to keep it running for ages. But if you are running WiFi-based and/or Zigbee based kit, don't forget to add the controllers to your UPS along with any switches, routers and gateways needed to keep everything going.

As for setup, I tried Ansible and found it just too complex to be worth setting up for a couple of Pis. My Pi configurations are pretty simple anyway, and all I need to do is use the NAS to back up both the Pi and Node-RED configs so they are easy to restore.

Same is true of using Node-RED projects. I find them too complex (and initially too unreliable - doubtless that's changed now) to be worth the effort for a live setup. I simply back up the appropriate files on a daily basis.
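
That daily file backup can be as small as an rsync job to the NAS (the mount point and paths are illustrative):

    #!/bin/bash
    # Nightly copy of the Node-RED user directory to an assumed NAS mount.
    # node_modules is excluded because it can be rebuilt with npm install.
    rsync -a --delete --exclude 'node_modules' \
        /home/pi/.node-red/ /mnt/nas/backup/node-red/

    # Example crontab entry to run it at 03:00 every night:
    # 0 3 * * * /home/pi/bin/backup-node-red.sh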

I also don't use the default Node-RED installation, preferring instead a local install, which lets me run multiple instances, even multiple versions, as needed. My configs will also run on any platform, so moving is easy (though serially connected devices can be tricky between platforms).

Two Pis make a great setup for me. They allow failures to be recovered quickly and updates to be tested, yet they don't break the bank.

I know that if I wasn't around, my family would quickly ditch the HA and go back to basics no matter how easy I tried to make things so I don't worry too much about this at the moment.

The biggest barrier to HA remains cost; the second biggest is ease of use. Cost IS finally starting to come down thanks to ESP-based devices, with even Zigbee at last falling in price in the UK.

I have yet to see a really family-friendly HA system other than a hand-crafted one - which Node-RED certainly facilitates - though mine certainly isn't. Now that I have a smart heating system, the pressure is on to create a simpler interface for it: the app-based interface for the Wiser is OK, but it's too complex for simple tasks and not comprehensive enough for complex ones.


Pis are great, but yeah, the SD card issue means you need plans for when those fail. Also, 1 GB of memory is quite low, so they're better suited to one-off appliance situations.

I upgraded to an Intel i5 NUC with 16 GB of RAM and an M.2 SSD. The form factor is only about twice as big as a Pi, but it's got the horsepower of a real machine and reliable components.

Honestly, I would consider a full VM server overkill. Docker and a good backup system make more sense to me.

My NUC just runs a barebones Debian installation with basically just SSH, cron, and Docker installed. It's an easy base system to set back up if things go sideways, Docker makes re-setup super easy, and I just have to make sure I back up the actual Docker volumes on some schedule. Most of my system is fairly stateless in terms of run-time DATA, so that's mostly just config files... though I also have Plex running in a Docker container here that I could back up, but don't.

Then I do the following:

  1. Run an nginx container for proxying everything on the NUC. It gives me control, auth possibilities things might not have natively, and lets me add SSL wrapping to anything, with centralized management of certs.
  2. Run letsencrypt via a Certbot Docker container that handles cert renewal automatically. A cron script on the main system runs daily: it stops the nginx container, starts the letsencrypt container to renew the certs, then restarts the nginx container when it's done (a sketch of such a script follows this list). Certs are all stored in /etc/letsencrypt on the host system, so they can be mounted into and shared with other containers if necessary... but normally only the nginx container sees them, since that wraps all my SSL stuff.
  3. A series of simple scripts, built from a personal template, that check whether an update exists for a specific container and, if so, pull it down and update the container. I run these manually from time to time.
  4. Each container gets a directory under /srv/ for its persistent volume(s): logs, config files, persistence, or whatever it needs. All I ever have to do is back up this directory, and I can restore the entire system with a little work.
  5. Backup of /srv/ and /etc/letsencrypt on some schedule. All of my scripts are under that directory, and cron scripts are symlinked there as well, so this backs up my whole system.
  6. As an extra layer, I use Node-RED Projects attached to a git repo on my NAS. It's more for the version control than as a backup mechanism, though.
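
The renewal script from point 2 might look roughly like this (the container names and the standalone authenticator are assumptions about this particular setup):

    #!/bin/bash
    # Daily cron job: stop the proxy, renew certs, start the proxy again.
    docker stop nginx

    # The official certbot/certbot image; ports 80/443 are free while
    # nginx is stopped, so the standalone authenticator can bind to them.
    docker run --rm \
        -p 80:80 -p 443:443 \
        -v /etc/letsencrypt:/etc/letsencrypt \
        certbot/certbot renew --standalone

    docker start nginx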

If my system dies at some point, I just need another basic server install, restore /srv/ and /etc/letsencrypt, run each of the Docker install/update scripts per container, and I'm up and running. There's a little fiddling on the host system in between to add Docker users / set up permissions, set IPs, etc., but other than that, I'm restored.

I do have a separate Pi instance of Node-RED, though, which runs my alarm system. There I again use the Node-RED Projects feature to sync to a git repo on my NAS for backup, but that system is purely stateless, so a restore is just: get the system set up, install Node-RED, import the project, and I'm back. I also keep an SSD image of this one around just in case, but I'd probably take a full crash there as an opportunity to properly upgrade the OS and everything on it, since it doesn't get much attention :slight_smile:


OFF-TOPIC reply to some above posts:

Hey @rgerrans. I have just purchased HomeSeer (HS3 Pro bundle - expensive, but I took advantage of the Black Friday deal) as I'm starting to test Z-Wave as an alternative to wiring everything up with cable in the home we are building. I'll still be wiring lots of other devices, but I'm now considering Z-Wave for lighting circuits. Up until now I've been using LightwaveRF in our current home (i.e. for testing). My original plan was to go fully wired with DMX dimming, all lighting wiring running back to a central location. On your point about MQTT - I too use it as a communication bus, between software / devices but also between Node-RED nodes (although not strictly necessary, I like to connect things together in NR using MQTT). Which Z-Wave stick are you getting? I picked up the Aeotec Z-Stick Gen5. I haven't given it a thorough testing, although it worked first time with the first Z-Wave dimmer I bought. How about you? I have it hooked up in HomeSeer, communicating over MQTT. I've found it to be reliable, although I did notice some lag from time to time, and sometimes the HomeSeer interface just doesn't seem to update. Oh well.

@jmorris644 Agreed, the HomeSeer interface sucks. Re the cost - I paid the money and didn't blink, but then I've become so obsessed with home automation (and, relative to the cost of building a home) it seemed okay to spend that kind of dosh to get - potentially - the right HA solution, or at least a tool in the toolkit. I'm most interested in some kind of collaboration as well. I started to design a multisensor before HomeSeer brought out theirs: a recessed, ceiling-mounted PIR with a lux sensor and light-colour sensor, plus a tri-colour LED to indicate statuses, plus some additional sensors just for fun (barometric pressure, "clap" sensors, and temp / humidity sensors).


My initial thought was to use Docker to get Node-RED up and running as it was before (i.e. with all the library add-ons), then restore from a file backup using something like Dropbox. I was even thinking of using Node-RED itself to run backups of its own files. This is a really newbie question, but am I right in thinking the only files I actually need to back up are:

/home/me/.node-red/flows_servername.json

and

/home/me/.node-red/flows_servername_cred.json

? (Maybe I should RTFM about backups...!)

So I'm guessing your home is in a "revertible" state then. Right now we live in a top-floor flat with pretty standard wiring (ceiling rose, light switches, etc.), where I've replaced the switches with LightwaveRF / Z-Wave dimmers - so everything would work without an automation hub - plus a Nest thermostat, which would continue to work. For the new home we're building, however, I had vastly different plans. For a year, my plan was to wire all lighting back to a central location; I even bought 24 channels of DMX dimming units (made for stage lighting)... but the comment you made above is something I've been thinking about more and more over the last 6 months. I could build a DMX-based lighting system with hardware controllers that would still work without e.g. Node-RED, but even that would be a nightmare if it went wrong and I was no longer around (network switch goes down, someone calls in a sparky, he has no idea, etc.). So I'm coming around to thinking that I might wire up our new home - specifically the lighting - according to standard UK conventions. Any thoughts on the above?

I've even gone to real lengths to analyse the different classes of device, their "mission critical" priority.... this screenshot of my notes should give you an idea:

Can you give any examples? I'm interested in this!

I'm very new to Docker; I've read a lot about its concepts and had a very quick play with Docker for Windows. I need to play with Docker volumes more to understand how they work. Is a volume basically a mount point in your Linux OS? i.e. if I store just the .node-red directory on one, presumably the size is low (megabytes) and the backup can be done by the "host"?

Sorry to not get this, but can you explain further? Are you proxying the Node-RED dashboard? How exactly?

Re your numbered list - your system seems pretty damned advanced / flexible. Presumably setting up a dedicated directory under /srv/ for each container also requires that you point all the logs / persistence for the various applications (nginx, Node-RED, etc.) to that location? The use of Projects for version control... it's nice. I think I need to delve a little deeper into Docker before I fully understand how it works. I keep trying to liken it to Hyper-V but get confused that there doesn't seem to be a virtual disk, and how on earth, when I pull an image, does it only take like 2 minutes to download? ... fun times ahead :thinking:

Is a volume basically a mount point in your linux OS?

That is how I do it, yes: you are basically mounting a specific folder on the host INTO the container at some mount point. It's a little more complicated than that, but that's the basics.
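
As a concrete sketch: the official nodered/node-red image keeps its user directory at /data, so mounting a host folder there (the host path below is an assumption) means all state lives on the host where it can be backed up:

    # Everything worth backing up ends up in /srv/node-red on the host
    docker run -d --name node-red \
        --restart unless-stopped \
        -p 1880:1880 \
        -v /srv/node-red:/data \
        nodered/node-red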

Sorry to not get this, but can you explain further? Are you proxying the Node-RED dashboard? How exactly?

I run an nginx (web server) Docker container with ports 80 and 443 of the HOST forwarded to ports 80 / 443 of the CONTAINER. I.e., it answers everything on those ports for the server, and I get a CONTAINERIZED nginx instance, just to stick with the theme of keeping everything nicely bundled up. That nginx container runs one default site which proxies HTTP access to the other containers (like Node-RED, a static files directory, some others), all under a single site basically. It lets me do things like the following (a minimal config sketch comes after the list):

  1. Nothing (like Node-RED) runs its own SSL configuration - it's all HTTP-only at the Docker container. The nginx web server provides the SSL wrapper and proxy_passes everything through to the relevant container. It lets everything else be "dumb", and I handle the pain in the ass that is letsencrypt in one central location.
  2. I set a global deny rule for any connection from outside my network, i.e. NOTHING is exposed publicly by default; then I selectively expose things as needed. For instance, Node-RED has no way of doing this itself: you can expose the entire admin interface, or the entire UI / nodes interfaces, but you can't pick and choose which HTTP nodes to expose. With nginx proxying, I can expose one endpoint and not everything - like when I need one special place for IFTTT to call me, but don't want the security risk of exposing the rest.
  3. I can add anything that nginx supports to the fold. For instance, for that IFTTT endpoint I wanted authentication, but I don't want to add auth to the node-red-dashboard UIs. So I just wrap basic auth around that one exposed endpoint in the nginx proxy (exposed over SSL), and I'm at least significantly more locked down there.
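
A minimal sketch of that nginx site, written as a provisioning snippet (the domain, network range, container name, and IFTTT path are all illustrative assumptions):

    # Write the proxy config into the folder mounted into the nginx container
    cat > /srv/nginx/conf.d/default.conf <<'EOF'
    server {
        listen 443 ssl;
        server_name example.duckdns.org;

        ssl_certificate     /etc/letsencrypt/live/example.duckdns.org/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.duckdns.org/privkey.pem;

        # Global deny: nothing outside the LAN gets in by default
        allow 192.168.0.0/16;
        deny  all;

        location / {
            proxy_pass http://node-red:1880/;
        }

        # Selectively exposed endpoint (e.g. for IFTTT), with basic auth
        location /endpoint/ifttt {
            allow all;
            auth_basic           "restricted";
            auth_basic_user_file /etc/nginx/.htpasswd;
            proxy_pass http://node-red:1880/endpoint/ifttt;
        }
    }
    EOF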

Presumably setting up dedicated directory under /srv/ for each container also requires that you point all the logs / persistence for various applications (nginx, Node-RED etc.) to that location?

Exactly. So I have /srv/nginx/, which is the directory I use for any folders I'm going to mount into my nginx container. That means I can edit config files there and just restart the container to reread them. This is NORMAL for Docker containers: things built in Docker will tell you which directories you need to mount into them, because otherwise, every time you rebuild the container, you lose all your data / logs. So there's ALWAYS thought in Docker containers toward SEPARATING the executing code from the DATA. This has benefits, like making it very easy and straightforward to know WHAT you need to back up, because it's ALWAYS that segregated data part.

Docker is awesome and I don't want to derail here... but a container is sort of a virtual file system - a little closer to a chroot jail than a VM, I think. It eliminates the "this app depends on all these 500 things" problem, because all 500 things are bundled with the app itself in one container. That means containers don't share common dependencies; each has its own copy and is happy. You throw hard disk and memory at the problem, and it's way easier to manage. Downloads are quick because images are built as a chain (this image is based on this image, is based on this image, etc.) and there are usually some shared core pieces (like the main distro level); you download a base image once, and everything based on it only has to download the rest, which is usually much smaller. Also, these images are much smaller than a REAL full distro, because they are the BARE MINIMUM for a Linux system and NOT a full install: an app doesn't need the full OS, only the bare minimum + its own dependencies + its own code. Much, MUCH smaller than a full OS VM.
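
You can see the layer chain for yourself with a couple of commands:

    # Pull an image and inspect its layer history: shared base layers are
    # only downloaded once, which is why pulls are often surprisingly fast
    docker pull nodered/node-red
    docker history nodered/node-red

    # Compare image sizes - containers ship a bare-minimum userland,
    # not a full OS install
    docker images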

Hey Michael (@i8beef), nice post! A very basic question: I don't know the price of your NUC, but I assume you could buy a whole series of Raspberries for that money? Then you would also have a lot of cores... I suppose you have thought about that too (distributed Node-RED flows, or a swarm of Docker containers spread across those Raspberries...). Why did you choose a single (more expensive) device instead? Only to avoid unreliable SD cards, or other reasons? Thanks, Bart

I have an HP MicroServer running the free ESXi (still on 5.5). Within that I run DietPi VMs, one for each main system component: NR & MQTT, Home Assistant (Hass.io Docker), and Pi-hole (plus an EmonCMS VM). I used to run it all on a Pi, but updating one part often broke other things, so I decided to keep them separate. DietPi is great - lightweight, and you can script the creation of a new instance. DietPi also has a backup mechanism that seems to work well. A VM allows taking snapshots before major fiddling, so you always have a back-out option. It is not much hungrier for energy than a stack of Pis, but more reliable. I also use a Pi for my Zigbee control and Bluetooth presence detection.

I wouldn't. Put them on separate VMs: HA inside Docker, then use the Hass.io backup. Put your Mosquitto broker on the NR VM as well. NR and Mosquitto install really easily via the DietPi install tool.

I did try it all rolled together, but then had a major blow-out of HA which required the whole thing to be rebuilt. Never again.

No, there are some others too. Another question in the last couple of days referenced this; if you look at the code under the "zip" section on the following page, you should get a better idea of the files.

  • flows_*.json
  • .config.json
  • .sessions.json
  • settings.js
  • package.json

I think are the critical ones.

I think you should also back up package-lock.json, which records exactly which versions of nodes you are actually using - that's what you need if you want to rebuild the app identically to the current running one. I don't think you need .config.json; I believe it is context data for the editor, which shouldn't need to be backed up. I don't know about .sessions.json.

One of the problems I have with the current Projects feature is that it doesn't include the package information or settings.js, so I don't use Projects; instead I use git manually to manage the complete .node-red folder, ignoring node_modules and so on.
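
A minimal sketch of that manual-git approach (paths and the cron schedule are illustrative; note that flows_*_cred.json will end up in the repo, so keep it private):

    # One-time setup: put the whole .node-red folder under git,
    # ignoring the bits npm can reinstall
    cd ~/.node-red
    git init
    printf 'node_modules/\n*.log\n' > .gitignore
    git add .
    git commit -m "initial Node-RED config"

    # Nightly crontab entry to snapshot any changes:
    # 0 2 * * * cd /home/pi/.node-red && git add -A && git commit -qm "nightly snapshot" || true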


Yes, absolutely. Even the heating is only a controller attached via a standard mount, plus radiator TRVs that can be unscrewed and replaced with the old manual ones. Lights are all on plugs - UK wall/ceiling lights have odd wiring that is unsuited to many HA systems and, in any case, I find them far too expensive to ever be worth the effort.

I'd certainly think about keeping the wiring as simple as possible. Double the number of plugs you think you need, and then double it again. Get plugs with USB outlets too. I doubt I would flood-wire a house now, except maybe for a few outlying areas if you have a large house with spots that are difficult to reach with WiFi directly.

The exception might be the wall/ceiling lights, where I'd think about how they could be controlled better using soft rather than hard switches. I might talk to a sparky about fitting 3-core wire instead of 2-core so that you can wire up lower-cost soft switches if you need to.

I understand your analysis. It would be nice if you published it - I think it would help others make decisions about HA as well.

To my mind, there aren't really any off-the-shelf HA systems that are both family-friendly and reasonably priced - those are my two criteria. There are, of course, super-expensive systems, the kind that get fitted into footballers' houses. Never going in mine! Then you have the likes of the Philips Hue; a work colleague recently fitted a load of these, but again, they come out really expensive. Then we have Ikea - they appear to be trying to reduce costs, but I'm not sure about reliability as yet.

With any of the mid- to lower-cost options, the controls are generally not especially family-friendly in my view. Even with Alexa / Google Home integration, there still seems to be quite a lot of setup required before you have something that your non-techie wife or children would be comfortable using. That being the case, you're back to the same problem - any changes need you!

So then we come to the lowest-cost options. LightwaveRF/HomeEasy - lowish cost if you can get things in sales, or compatible kit from Ikea - but no feedback, which limits their automation options. This was the low-cost option until the advent of the ESP8266. Now we are getting cheap WiFi-based systems, and these make networking easier since you only need good WiFi coverage, which you can get with a number of extender or mesh options - though you might still struggle if you are surrounded by 2.4GHz WiFi networks, as it can be hard to find a free enough channel.

In none of these cases can I think of a way to create a control solution that would be easy - or even possible - for any family member to amend without significant investment in time up front. However, with some care, you can reduce the addition of new devices to configuration rather than coding. For example, my lighting controls use a standard MQTT topic, COMMAND/SWITCHnn, so I can have 100 switches (see the sketch below). I then have a JSON variable that maps switches to real locations. I use LightwaveRF remote controls, which have 4 on/off controls, a group control, and a 4-way switch; that gives me 16 switches per control - already slightly too complex for family members to remember, so I put the most important 4 first. I can also create an auto-extending web interface for switches - this picks up the real locations and so is easy for anyone to use. The next step is to make it offline/mobile friendly so it can be used as an app on any phone or tablet. I also need to create a "scenes" configuration to enable multiple lights to be switched together. Of course, all of the lights are also on schedules - another area that still needs serious work in Node-RED, as there is no easy way to expose schedules to the family for adjustment.
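
To make the topic scheme concrete, here is what driving it from the command line might look like with the mosquitto clients (the broker address, switch number, and "ON" payload are illustrative assumptions):

    # Turn "switch 03" on via the standard topic scheme described above
    mosquitto_pub -h localhost -t "COMMAND/SWITCH03" -m "ON"

    # Watch every switch command flowing through the broker
    mosquitto_sub -h localhost -t "COMMAND/+" -v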

Well, you have the Ikea stuff of course, but I'm also now using a Drayton Wiser smart heating controller that uses Zigbee to talk to the TRVs. This is the lowest-cost smart system I could find. You don't actually talk to the TRVs directly - only the Wiser controller does that; you talk to the controller over WiFi. Still, it's an example of Zigbee slowly falling in price.