I have been working with Node-RED for a week or so and I like it a lot, but I have never been able to view the console log output as I don't know where to find it, so I have given up on that. With this in mind, I wonder what the benefit is of running Node-RED in Docker vs running it on the Pi itself. I am starting to think that I should just install Node-RED directly on the Pi, as I will have easier access to the logs and won't have to worry about mounting volumes etc. Also, I found that there is a HomeKit contrib module for Node-RED, but there appear to be issues installing it in Docker.
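(For context, the console output apparently just ends up in the container's log, so something along these lines should show it; "mynodered" here is only a placeholder for whatever the container is actually named.)

    docker ps                 # list running containers to find the name/ID
    docker logs -f mynodered  # follow the Node-RED console output live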
Operating-system-level virtualization like Docker or jails is more for situations where you plan to run multiple microservices on the same host OS instance. I would personally recommend avoiding Docker on resource-light machines like the Pi variants unless you have a specific need (container isolation or similar, perhaps).
Ah, so if I didn't want multiple separate workloads deployed, I would benefit from Docker by having one workload (a VM) and deploying Mosquitto, Node-RED, and Home Assistant inside it? In that hypothetical situation, would it be beneficial to use Docker for all three?
Currently I have this setup, but on a Pi. I have been nervous about it dying, so I am backing up to Google Drive using rclone, and I do like that everything is tidy in one place. However, I am considering whether I should split everything across different Pis until I build up the capital for a proper server, or just buy a cheap mini PC, run Linux on it, and install Docker there.
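The backup itself is nothing fancy, roughly a couple of rclone syncs on a cron schedule along these lines, where "gdrive" is just whatever the remote was called in rclone config and the paths are only examples:

    # push the Node-RED user directory and HA config up to Google Drive nightly
    rclone sync /home/pi/.node-red gdrive:pi-backup/node-red
    rclone sync /home/pi/homeassistant gdrive:pi-backup/homeassistant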
How heavy is Node-RED compared to Home Assistant or a program like that? I feel that Home Assistant (I know this is not the right thread for it) is something that needs to run on a VM since it does a lot of reads/writes to the SD card; Mosquitto doesn't seem to have that reputation (I could be wrong), which leaves me with Node-RED. How heavy is it?
Also, in a future setup, should I have all three services in Docker containers on one VM, or have them all be individual workloads, or a mix?
With Mosquitto, Node-RED, and Home Assistant/Hass.io I'd recommend just running all three together on the same bare metal if you're running on a Pi, especially if you'll always be using them together and are the sole maintainer/customer. If you're worried about writes, a USB drive or NAS may be a good priority for your next expansion.
Mosquitto does not do a lot of writes to the SD card; even retained topics are not written to disk every time they change.
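By default it only flushes its in-memory store to disk periodically; the relevant mosquitto.conf settings are roughly these (the 1800-second autosave is the stock default):

    persistence true
    persistence_location /var/lib/mosquitto/
    autosave_interval 1800   # write the persistence file at most every 30 minutes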
I suspected as much. Do you know how much impact Node-RED has?
Good suggestion. I'm considering migrating to Hass.io mainly because it's all integrated already and makes it easy to install Mosquitto and Node-RED with one click, versus what I had to go through installing them with Docker. It wasn't super hard, but I definitely ran into more issues. It helped me learn, which is great and I'm all for it the first time around, but doing it again I'd take advantage of Hass.io.
I think in my next build I may try to install Hass.io on a VM.
It obviously depends how much processing it is doing, but I run Node-RED, Mosquitto, InfluxDB, and Grafana on a Pi 3 controlling my home automation and my weather station. In addition, it is my network file server (with a USB disc), VPN server, and runs the Pi-hole ad blocker. It does not seem to be stressed in the slightest and typically runs below 15% CPU utilisation.
I run all my services in Docker on a Pi, and it handles the workloads beautifully. In production for my house, I run three instances of Node-RED, one container for Mosquitto, and one for the alarm.
Benefits? I love it; it makes fault finding so much easier. I deploy the Docker images to the Pi with Ansible, and can deploy the same to Docker on my Mac. I'm constantly busy with the next version on my Mac, then I use the Projects feature in NR to push from the Mac to Bitbucket Git, and from there to production (a rough sketch of the play is below).
I love the setup!
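The Ansible side is nothing exotic; the names, paths and ports below are just illustrative, but the play is roughly this shape:

    # deploy the Node-RED container to the Pi (sketch)
    - hosts: pi
      become: true
      tasks:
        - name: Run Node-RED container
          community.docker.docker_container:
            name: nodered
            image: nodered/node-red:latest
            ports:
              - "1880:1880"
            volumes:
              - /home/pi/node-red-data:/data   # keep flows outside the container
            restart_policy: unless-stopped
            state: started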
You need to be careful with Hass.io on a VM, as you will end up wanting access to hardware at some point. If you have not chosen the right virtualisation platform and/or the right host to run it on, this will end up being a problem.
I would definitely NOT recommend Docker for a first-time user. However, if you were to go down that path, use some of the prebuilt images that are available for each of the components instead of building everything yourself.
I have about 20 Docker containers running on a low-powered Intel box that is also the home NAS. Docker is fantastic for quickly spinning up and testing new pieces of software or new versions without having to worry about polluting the host, but you have to understand networking, mounting volumes for persistence, inter-container communication and so on.
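A compose file keeps that side manageable; a minimal sketch covering two of the services might look like this (image tags, ports and volume names are just examples). Compose puts both services on a shared network, so Node-RED can reach the broker simply as mqtt://mosquitto:1883.

    version: "3"
    services:
      mosquitto:
        image: eclipse-mosquitto
        ports:
          - "1883:1883"
        volumes:
          - mosquitto_data:/mosquitto/data   # retained messages survive restarts
      nodered:
        image: nodered/node-red
        ports:
          - "1880:1880"
        volumes:
          - nodered_data:/data               # flows and installed nodes persist
    volumes:
      mosquitto_data:
      nodered_data: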
Craig
I agree. My learning curve for Docker, Ansible, and NR Projects was steep, with lots of frustrating nights, but now it is a breeze. I've been working this way for the last two years.
Yeah, it was my first time using Docker, but I got it working. I need to use my EE degree for something, so I wasn't intimidated, but I did have trouble at first and it took time to tweak the docker run settings. However, I keep wondering: if my Raspberry Pi broke, how would I get back up and running quickly?
Eventually I do want to move to a mini PC with Linux so the HA Docker container runs faster. When I have to restart it, it takes a minute or so to come back up on the Pi.
On my Pis I get about one write per second, and that is virtually all log writes. They could be eliminated, but honestly, if you are using decent SD cards with plenty of space, you are very unlikely to ever hit the write limit these days.
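If you did want to eliminate them, the usual trick is to mount /var/log on tmpfs via /etc/fstab, at the cost of losing the logs on every reboot; something like:

    # /etc/fstab - keep logs in RAM rather than on the SD card
    tmpfs  /var/log  tmpfs  defaults,noatime,size=32m  0  0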
Cheap cards and small cards just aren't worth it. The cheap ones die depressingly often, and small cards don't give the system enough headroom to spread the writes over the whole card. A 32GB Samsung Evo or Evo Pro is still cheap and plenty good enough.
You should also take care with power, though. Make sure the Pi has a sufficient power supply and try not to cut power to it without shutting down first. I have mine, along with my other networking gear, on a PC UPS.
I really think that you are swapping one set of issues for another. Why burden your Pi with Hass.io if you don't really need it?
On bare Raspbian, all you need to do is run Dave's install script and then install Mosquitto the standard way. You don't need to mess with Docker, which is complete overkill here. The standard install takes a few minutes, most of which is spent waiting for the Pi to do the compiles/installs. There is almost no user input needed.
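For reference, the whole thing boils down to roughly this (the script URL is the one linked from the Node-RED Raspberry Pi docs at the moment):

    # Dave's script: installs/updates Node.js and Node-RED and sets up a systemd service
    bash <(curl -sL https://raw.githubusercontent.com/node-red/linux-installers/master/deb/update-nodejs-and-nodered)
    sudo systemctl enable --now nodered.service

    # Mosquitto the standard way
    sudo apt install mosquitto mosquitto-clients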
My Pis aren't typically CPU-bound, but they do get memory-bound quite easily. InfluxDB in particular can hog memory if your databases and write rates creep up. Once you start hitting swap, you can get some serious spikes in usage thanks to the slow write speed of the SD card interface.
I completely agree.
Also, it is trivial to run multiple instances of Node-RED if you install it locally rather than globally.
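Something like this gives you a second, completely independent instance on its own port with its own userDir:

    mkdir ~/nr2 && cd ~/nr2
    npm install node-red                                # local install, no -g
    ./node_modules/.bin/node-red -p 1881 -u ~/nr2/data  # separate port and userDir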
I don't deny that Docker has its uses but it has its own learning curve and its own issues that will trip up people who aren't used to it.
I've also used Ansible on my Pi's but found that it, too, really wasn't worth the effort.
Rebuilding my Pi is easy. I make sure the system files I've changed are all backed up, so that part is easy too. Reinstalling the whole system only takes a few minutes of my actual time, though perhaps an hour of Pi time.
If you want to go with Docker, it might make sense to use a Raspberry Pi cluster and run the containers on the Pis. I have done all three, and Docker on a server is fine, but you should only run production environments on it, not test beds.
Using individual Pis is OK, but a cluster provides the additional benefit of being able to run multiple instances for what would otherwise be large deployments. Another cluster favourite for me is Homebridge, where you may need multiple instances and where I have found there can be interactions that take a single instance down, whilst smaller deployments are easier to debug.