Choosing an architecture for Home Automation

This became soooo much easier with the latest versions of Let's Encrypt. Now that they support wildcard certs and DNS validation, you no longer need to expose anything local to the Internet and you can have a single cert for multiple (sub)domains.
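To illustrate the DNS challenge (a sketch using certbot's manual mode and a placeholder domain - clients like acme.sh can create the TXT record for you automatically):

```bash
# Request a wildcard cert via the DNS-01 challenge; certbot prompts you to
# create a TXT record at _acme-challenge.example.com, so nothing on your LAN
# is ever exposed to the Internet.
certbot certonly --manual --preferred-challenges dns \
  -d 'example.com' -d '*.example.com'

# Check the record is visible before letting certbot continue:
dig +short TXT _acme-challenge.example.com
```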

You do need to set up your local network to use DNS names instead of IP addresses though. That's fairly easy on most modern routers now.

One thing to watch out for when working the way you suggest - it is great when everything is on a single machine. Not so good if you later decide to split things up because you are still using HTTP over your network. Not a massive issue for many people but still a consideration.

The other thing is that, while Docker is lightweight compared to a full VM solution, that doesn't make it "light". It is complex and can be hard to maintain, especially for novices and where containers need regular updates/upgrades. I find it real overkill for an SBC-based (e.g. Pi) solution and not worth the effort - but that is just me.


A quick search turned up an expired eBay (US) listing for $350. UK prices seem to start around £300 for an Intel unit or £200 for a 3rd-party one.

So, if you were generous and allowed £50 per Pi (Pi3B+, case, SD card, USB power), you would need to be thinking about 4-6 Pis before it would be worth it on price alone.

Yes Julian, that is my question: are 6 Raspberry Pis not better (CPU-wise) than a single NUC? Perhaps not...

Personally, I would happily handle around 4 Pis and most likely find that a lot easier to manage than faffing with Docker.

Also, the TDP on the i5 NUC is around 28W, which is a lot higher than the 18W that 4 Pi 3s would likely top out at.

However, you can put 32GB of RAM in the NUC, and it does have 2 M.2 SATA ports and Thunderbolt.

So, all in all, you at least have some choices. I am tempted to get a NUC or equivalent now that I've seen how far they've come along. More to offload some processing from my aging NAS though.

But there are certainly advantages in keeping things separate, and most of what I do with HA doesn't need anywhere near the power, memory or disk performance of the NUC - I've happily run my HA system off a single Pi2 with a 32GB SD card for some years now. Moving to the Pi 3 that someone gave me will give me some additional headroom and let me do some additional stuff; the Pi2 was pretty well fully loaded, but then it was running NR, InfluxDB, Grafana, Telegraf and Mosquitto. It is actually several years' worth of InfluxDB data that is the biggest load.

On the Pi3, I've enough spare resources to be able to run Webmin, which lets me auto-update Raspbian. One less thing to worry about.


I think you'll be happy with Z-Wave. I find it to be pretty robust, with plenty of device options.

I do a bit of a hybrid. I use link nodes when the interconnects are 100% internal to NR, and MQTT when they'll also be accessed by other systems. Just trying to minimize overhead on the system.

I had an Aeotec from my time using OpenHAB, but I've had a couple of occasions now where it lost all communication with my door locks and I couldn't remove some dead nodes from it. Plus I would occasionally have to reboot my HomeSeer due to the Aeotec refusing to initialize after locking up while I was working on it. So I took advantage of the sales as well and bought a HomeSeer SmartStick+. It seems a bit more robust given the tighter hardware/software integration. That said, last week I was having random Z-Wave commands going to my system (everything turned on at 2%, my bar fridge outlet getting turned off at random times). I did isolate it to the HomeSeer side but never figured out what was causing it, and I haven't seen it repeat recently.

Which MQTT plugin are you using? I use the older "MQTT" one, with virtual devices subscribed to my MQTT set topics and linked back to my Z-Wave devices. I've played with the newer "mcsMQTT" but haven't been successful in getting it to run reliably.

Let me know if there is any other help I can provide.

A Pi and a NUC are apples and oranges. Pis are great for one-off appliance things. My home automation stuff just outgrew its memory footprint, and the SD card thing is enough of a concern for a core part of my system that it was time to side-step it, either with an external drive on the Pi or by upgrading. I chose the latter.

Price wasn't really a concern. I think the NUC setup cost around $500 all said and done, after memory and disk. My choice here was more about needing more power and reliability while still wanting a small-footprint appliance feel. The NUC fits the bill.

This became soooo much easier with the latest versions of Let's Encrypt

Not using their wildcard certs yet, though I'm kind of a perfect candidate for them. The initial cert grab isn't bad, but it's the short RENEWAL window that's a pain. If there's a better way than certbot to do that, I'm all ears, because it's a right pain in the ass for someone who lives and breathes web development... gotta imagine it's borderline for non-tech people beyond "run this series of commands and pray".

One thing to watch out for when working the way you suggest - it is great when everything is on a single machine. Not so good if you later decide to split things up because you are still using HTTP over your network. Not a massive issue for many people but still a consideration.

I treat my local network as trusted here. Don't have much to protect anyway, and if someone got in far enough to MitM traffic, I'd be completely owned already anyway, with much larger problems than my home automation system. The added complexity of other security measures I could add around it wouldn't really give me much here.

Docker is daunting if you've never used it before, but it has a lot of advantages. It has paid for itself in spades for me personally on updates/restores/system stability and cleanliness. Admittedly, though, it scratches an itch for me in that I like to keep my systems as clean as possible, and nothing keeps them as clean as Docker (because when I uninstall or upgrade a program, all its dependencies go with it, so there's no worry about leftover crud on the main system, which stays pristine). I also use it a lot as a developer when running things like Redis, RabbitMQ and various other pieces, so I'm fairly familiar with it.

The best piece of advice I can give on Docker is this: script all your docker commands in bash scripts. I've got a template that wraps up installing/updating containers, and I just write my 'docker run' command right into it (one day I should really explore docker-compose); it makes my updates literally a one-liner. If you are trying to rebuild a docker command from scratch every time you update, it's a pain. Better to save that command so you can just rerun it.
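Something along these lines (a generic sketch rather than my actual template - the container name, port and volume path are placeholders):

```bash
#!/usr/bin/env bash
# update-container.sh - pull the latest image and recreate the container.
set -e

NAME=nodered                 # placeholder container name
IMAGE=nodered/node-red       # placeholder image

docker pull "$IMAGE"         # fetch the newest image
docker stop "$NAME" || true  # stop/remove the old container if it exists
docker rm "$NAME" || true

# The saved 'docker run' command - edit the options to suit your container:
docker run -d --name "$NAME" \
  --restart unless-stopped \
  -p 1880:1880 \
  -v /srv/nodered:/data \
  "$IMAGE"
```

Then an update really is just `./update-container.sh`.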

Yes Julian, that is my question: are 6 Raspberry Pis not better (CPU-wise) than a single NUC? Perhaps not...

Apples and oranges here. If I was gonna run that many Pis for a single system like this, I'd be talking about a small Kubernetes cluster running all my Docker containers, etc. A fun project I might do one day just to play with it, but it was just much simpler to upgrade. Also, that wouldn't do much for distributing load that legitimately needs more than one Pi can provide (e.g., my Plex Docker container doing transcoding will sink a Pi).

And I don't want to run 4 separate systems if I can help it. I have more of an appetite for upkeep than most, but the extra $200 is worth it in saved sysadmin time alone over time.

As for actual capability, the CPU isn't really a problem (aside from that Plex transcoding container). The 1GB memory limit IS, however, and once you hit it you're screwed - you've outgrown the device. And SD cards are just poor choices for OPERATIONAL disk storage, so I was looking at a separate external SSD anyway, just to make sure my system didn't one day crash due to SD card failure (granted, with a Kubernetes cluster you'd be running an external disk as a matter of course anyway). And once you factor in that amount of power plus a hard drive, etc., you're close enough in power usage, have a lot more complexity to manage even going separate systems instead of a cluster, and you're still way below the capabilities of a single NUC. And you're still stuck on ARM instead of x86, which carries advantages of its own. Overall, the NUC was just a much better choice for me.

So I don't know, maybe it's worth it for some workloads... but not mine. One system nicely partitioned with Docker is far simpler to manage, and I want to spend as little time administering it as possible :slight_smile:


It is now trivial. You just use the acme.sh script and set it to auto-renew! I've gone through my first renewal cycle with no hitch whatsoever. You do need control of the DNS for the domain you are using, of course, especially if you don't want to expose your LAN to the Internet (which was the main stumbling block for me previously). You need a DNS provider with an API that acme.sh understands. I am rapidly swapping all my DNS over to Cloudflare, and that certainly supports it. Then acme.sh does everything for you. Just leave it set up on your Pi and that's it.
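With Cloudflare it boils down to something like this (a sketch - the credentials and domain are placeholders; check the acme.sh dnsapi docs for your provider's variable names):

```bash
# Credentials for acme.sh's Cloudflare (dns_cf) hook:
export CF_Key="your-cloudflare-global-api-key"   # placeholder
export CF_Email="you@example.com"                # placeholder

# Issue one cert covering the bare domain and all subdomains via DNS validation:
acme.sh --issue --dns dns_cf -d example.com -d '*.example.com'
```

acme.sh adds a cron entry when you install it, so renewals happen automatically from then on.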

You may wish to think carefully about this as your systems mature. It is a short step from an isolated network to a potentially exposed one. Also a short step from "Don't have much to protect" to realising that your boiler, greenhouse, garden sprinklers (water meter use?) are suddenly exposed along with a whole bunch of private data. I'm not saying to go mad, just to keep thinking about it. As is often the case, some thoughtful small steps can add up.

I get the use-case, just haven't found it worth it for myself. My Pis are VERY stable - with the minor recent hiccup when we were away (isn't that always the case!), which wasn't really the Pis' fault but the UPS failing.

I like to keep a spare Pi to experiment with and to keep the live one clean. I also don't do system restores - ever. That's because, over the many years I've been messing with, and professionally working on, various computer platforms, I've generally found that anything that has been around a while will always benefit from a clean install - this is true even of Docker, which will accumulate crud during updates. So I focus on making sure I can quickly rebuild the basics and only restore specific configuration settings. For me, Ansible is more interesting than Docker in general.

There are some exceptions where I would certainly use Docker - if needing to set up a mail system on a shared computer, for example. The isolation that Docker provides would be invaluable there. The 5 or 6 things I need to install on a Pi for HA use simply don't warrant it for me.

This is really true for ALL platforms and system configurations. The more you can script, the easier things become.

Interesting - because, for me, the admin overheads of Docker far outweigh the absolutely minimal time I spend feeding and watering my Pis. Especially with my Pi3, which has Webmin installed, as this does all of the apt upgrades for me. So the only thing I ever need to do is occasionally go in and update the nodes, and I could easily automate that through Node-RED if I could be bothered.

Whilst I agree in principle, in practice it is very rarely an issue. True, my Pi2 is struggling a little, but that is more due to InfluxDB handling multiple years of data. If I had more memory, I'd probably keep more data - but I really don't need to and I never use it! So I'll more likely just set it to truncate the data automatically; InfluxDB is good at that.
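A retention policy does it in one command (a sketch for InfluxDB 1.x - "homeauto" is a placeholder database name, "autogen" is the default policy; pick whatever duration suits you):

```bash
# Keep only the last 52 weeks of data; InfluxDB drops older shards automatically.
influx -execute 'ALTER RETENTION POLICY "autogen" ON "homeauto" DURATION 52w REPLICATION 1 DEFAULT'
```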

As I move to 2 Pis, the problem will go away anyway.

I'll just leave this here - https://www.raspberrypi.org/magpi/bitscope-3000-core-raspberry-pi-cluster-computer/

this is true even of Docker, which will accumulate crud during updates

What exactly are you seeing accumulate with Docker? As you upgrade and delete containers, everything gets deleted at once. Are you talking about the image caches? Or if you just mean Docker ITSELF and the OS upgrades, yeah, that's fair. The time between needing to rebuild those kinds of things almost always justifies a full reinstall, true... but I'd wager I could be up and running with my Docker setup on a fresh install before you'd be done doing actual installs of everything on a fresh machine, with a significant portion of time to spare. It just depends on how much thought you've given to automating your restore, I guess, because with enough work you could do the same with a pile of scripts the other way too. I just think Docker still saves me a lot of time and effort over that.
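If it is the image caches, those clear down with a single command:

```bash
# Remove dangling image layers left behind by repeated pulls/updates:
docker image prune

# Go further: this also removes stopped containers, unused networks and,
# with -a, any image not used by an existing container:
docker system prune -a
```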

Interesting - because, for me, the admin overheads of Docker far outweigh the absolutely minimal time I spend feeding and watering my Pis.

I'm not sure what admin overhead you are envisioning here either. Your experience doesn't match up with my own. I've got some one-off Pis I use for specific purposes (e.g., my alarm panel one needs to be physically next to the alarm panel for GPIO usage, my Z-Way one works better as a Pi appliance since it's a single distributed SD image, etc.). And my Docker containers COULD be split between different servers just as easily if I needed to. In fact, more easily, because I don't have to deal with installation and dependency headaches; I just move the files and run the docker command on the new host and I'm done. That all seems way easier to me, and has been in practice, so I'm curious what you've seen that gives you the impression that it's MORE involved instead of less. I mean, it IS another layer you have to understand to a point, but it's a tool that SAVES a lot of time and effort for me, not overhead.

Has anyone ever tried a Raspberry Pi with a read-only filesystem? Would this not eliminate most of the stability issues without having to get a UPS?
If you need to save something, then use an additional USB stick for just that.

This feature is called overlayroot and is available in Armbian (https://docs.armbian.com/User-Guide_Advanced-Features/).
I use it on all my SD-card devices (Raspberry Pi, Orange Pi).

Just don't forget to turn off automatic updates...

I am using a more down-to-earth method, with tmpfs and the ro mount option, for that purpose. I just set this up a couple of weeks ago on one of my Pis, so no long-term experience yet.
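For anyone wanting to try the same, the idea looks roughly like this (a sketch - the partition name varies by board and the fstab entries are illustrative):

```bash
# Illustrative /etc/fstab entries: root mounted read-only, volatile paths on tmpfs.
#   /dev/mmcblk0p2  /         ext4   defaults,ro   0 1
#   tmpfs           /tmp      tmpfs  nosuid,nodev  0 0
#   tmpfs           /var/log  tmpfs  nosuid,nodev  0 0

# Flip to read-write when you need to change something, then back again:
sudo mount -o remount,rw /
# ... make your changes ...
sudo mount -o remount,ro /
```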

How long have you been using that?

overlayroot also uses a tmpfs as the writable layer to keep the system working...
It's easy to handle and to enable/disable.

My oldest "machine" (an OPi Zero) has been running for more than 2 years.

Hi,
Another approach for an NR server is to use Android with Termux and a keep-alive app. Advantages: economical, reliable, compact footprint, with the added bonus of a built-in UPS (the phone's battery).
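The Termux side is only a few commands (a sketch - assumes current Termux package names; the keep-alive app is a separate install):

```bash
# Inside Termux on the Android device:
pkg install nodejs        # Node.js from the Termux repos
npm install -g node-red   # install Node-RED globally
node-red                  # then browse to http://<phone-ip>:1880
```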
Cheers,
Ken


Seems a bit extreme to be honest.

I have no stability issues on either of my Pis. You still want a UPS if you want your system to keep running, and having a large, decent SD card means that the system is really stable. My live Pi has been running stably for some years now.