That's the question.
No it's not.
I just experienced a bad example of holding back partial updates on a Node.js based home automation system, waiting first for one thing and then for another. It is a bad idea. Update everything, always, as soon as possible. That one little thing you didn't update bites hard, and you can't even work out where the issue is coming from.
How true. While corporate IT has to be cautious about regular updates for various reasons, this kind of development and runtime environment really does need regular feeding, watering and toenail clipping.
My home systems get updated ASAP at the OS and application layers. Actually my work systems do as well, since I'm in the privileged position of setting strategy rather than having to abide by it! (I don't generally use one of our corporate laptops and I have separate policies for access to key systems - nice and easy for me, though it does mean that I have to be extra careful not to screw up!).
Anyway, the main exception(ish) that I apply to the home systems is where there is a major change to Node.js or a major change to Node-RED. In those cases, I will always take a full backup of my Node-RED systems before updating. Been bitten too many times with that.
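For anyone wanting to do the same, here is a minimal sketch of that pre-upgrade backup. It assumes the default ~/.node-red user directory and a global npm install of Node-RED; adjust the paths if yours differ.

```bash
# Record the current versions so there is a known rollback target
node -v > ~/node-red-backup-versions.txt
npm ls -g node-red --depth=0 >> ~/node-red-backup-versions.txt

# Archive flows, credentials, settings.js and package.json from the user directory.
# node_modules is excluded because it can be reinstalled from package.json.
tar --exclude='node_modules' -czf ~/node-red-backup-$(date +%F).tar.gz -C ~ .node-red
```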
I have eight systems running Node-RED, mostly Pis. Six of those are in 'production' situations, doing a job which is not changing. If I updated them all every time a new version of any one of the nodes was released, I would forever be updating and testing. For systems that are working without issue, I only update them occasionally, just to stop them getting too far behind.
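If you work that way, it helps to be able to see how far behind a 'production' box has drifted without actually changing anything. A rough sketch, assuming Raspberry Pi OS/Debian and the default ~/.node-red directory:

```bash
# OS packages with pending updates (nothing is installed by these commands)
sudo apt update
apt list --upgradable

# Extra nodes in the Node-RED user directory that have newer releases
cd ~/.node-red && npm outdated

# Installed vs latest published Node-RED (for a global npm install)
npm ls -g node-red --depth=0
npm view node-red version
```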
As long as you have a test system that you do keep updating, this probably doesn't matter.
I think that what our Estonian friend is saying is that there are risks involved in not updating for a while in the very fast moving world of Node.js. You may end up with an unexpected breaking change, and if you end up updating lots of things together, you may not be able to identify what broke your system. That can be very time consuming to analyse.
I certainly agree with that.
Is there a question of unattended upgrades lurking in the background here? I'm old enough to belong to the "if it ain't broke, don't fix it" school. My home automation runs completely off the net and tends to fall several generations behind the latest releases. (One system is at NR version 0.19.4.) I don't think there is any way those installations can break unless I cause it. Development and testing is on a fully updated system, and every year or two I just take down a production machine and stand up a completely new one, usually both hardware and software.
Do you mean that the system might be being updated without your knowledge when you don't want it to, or are you asking about how to make it automatically update?
Well, that is the issue really. When you do need to update, you will be updating so much that any new errors will be impossible to track down. Was it a breaking change in NR, in some deep dependency, in Node.js or some OS library change?
I've never totally subscribed to that in over 40 years. In a production, enterprise environment, you still need to keep updating, just not as fast. You can't afford to let things get too far out of date, because you end up in an impossible situation where the cost of a series of upgrades across a large user population is more than you can get budget for. This is why you will find organisations still stuck on Office 2010, Windows XP, etc. Not because they don't want to upgrade, but because the cost and disruption of doing it is just too expensive.
With the way that most modern systems get updated, it is very often safer, cheaper and less disruptive to accept "evergreen" rolling upgrades with less testing than you are used to, rather than trying to do periodic big upgrades.
In a home environment, where the impact to the "organisation" is probably quite small, automatic upgrades are easier and "cheaper" to deal with. Just make sure that big version upgrades have decent backups so that you can roll back if needed. Indeed, most auto-update services are unlikely to apply breaking changes (OS major versions, Node.js major versions, etc.) unless you intervene.
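On Raspberry Pi OS / Debian, for example, the stock unattended-upgrades package behaves exactly like that: by default it only pulls packages from the configured security/updates origins, so OS release jumps or a new Node.js major line still need a deliberate manual step. A minimal sketch to switch it on, assuming the standard Debian package and paths:

```bash
sudo apt install unattended-upgrades
# Enables the daily apt timer; writes /etc/apt/apt.conf.d/20auto-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades
# What it is allowed to upgrade automatically is controlled by the origins
# listed in /etc/apt/apt.conf.d/50unattended-upgrades
```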
Yes and no. As mentioned in an earlier discussion, there seems to be growing interest here in automating minor OS and software upgrades. The user asks for them and knows when they happen. On the other hand, even if an upgrade is not technically "breaking" (major), it could create incompatibilities or require re-compilation of some nodes or libraries. Package managers are smart but not infallible.
While I've had a few issues with serialport- and sqlite-related recompilations, this is very much the exception for minor updates. It only really becomes an issue when taking on a new major version of Node.js. Hence my previous comments.
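When that does happen (typically after a Node.js major upgrade changes the ABI), the usual fix is just to rebuild the native add-ons in the Node-RED user directory. A sketch, assuming the default ~/.node-red location:

```bash
cd ~/.node-red
# Recompile native add-ons (serialport, sqlite, etc.) against the new Node.js
npm rebuild
# If that still fails, a clean reinstall from package.json usually sorts it out:
# rm -rf node_modules && npm install
```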
True, but not really relevant to my way of working. I always have a fully updated and functional system that is deployable but not deployed. Changes to it are incremental, and I test it by running in parallel with the production system and comparing outputs or swapping temporarily if one-of-a-kind hardware is needed. I replace the production system only when the features, performance, or stability of the upgrade becomes really compelling or if there is a hardware failure in the production system. I suppose if I were connected to the internet, I would add a reported vulnerability to the list. I guess this is unorthodox, but it works for me.
It is easy to keep your own system in the condition you like. But then you have friends who take up a little coding as a tiny hobby and build things to control simple stuff. You can't blame them if every step doesn't follow best practice. But when something then fails hard, there aren't many options: throw everything out of the window, or put some bottles of good beer in the fridge and call the guy you know can handle such situations. That was my story. And my advice then was to update. It still is.