If you're relying on luck, it's going to bite you badly one day.
I'd agree that good commenting and good documentation are important, but my experience is that writing good code is 80% design and 20% implementation. Work out exactly what you want to accomplish, find the best algorithms / patterns / structures and then start putting it together.
I'd also say that testing and debugging are skills as important as coding, and will help in finding and eliminating as many bugs as possible (that way you don't need luck).
And don't be afraid to throw stuff away and rewrite - the second version is always better than the first...
That's not an unusual approach when you're learning a new language or when you're creating prototypes and experimenting, as you need to try out various ideas to see what works and what doesn't. I'm doing the same while learning how to work with Node-RED, and my flows are a bit of a jumble at the moment.
Once you decide what you actually want to do, i.e. what project you want to build or which problem you want to solve, then you can start with a clean page and work out a design. There's nothing wrong with prototyping, just don't be tempted to keep big chunks of it in your real ("deliverable") system.
I suppose that since I've been coding for >40 years, I've learned, sometimes the hard way, that up-front design is worth the effort.
You can google this topic; it depends on the person, the company you work for, the type of code, and the language it is written in.
You will find: "Good code is self-documenting." "Even if you think you write really obvious code, try reading your code months or years later"
In the context of Node-RED, comments should not really be needed (perhaps a Comment node for the flow itself).
Ideally, no programming should be necessary (that is the intent). Start off with all the nodes you need before jumping to Function nodes that complicate the logic.
This is the nub of the issue. That is why excellent code is not even started until after someone has captured and documented all of the requirements in detail and then written a detailed design brief.
In my view, any IT project (whether code, hardware, information management, etc.) that doesn't start by documenting very clearly the scope and the requirements is doomed at least to make many mistakes, take several times too long and have a large price-tag attached.
The element of "hope" that you mention should of course be replaced by testing.
The percentages are arguable but something like 25% requirements/design, 25% coding & documenting, 40% testing and 10% implementation wouldn't be unusual.
However, when it comes to personal projects, these numbers tend to go out of the window even though they shouldn't really.
Most people's approach to home automation and similar projects is 90% experimentation and 10% swearing.
In my own code, you will see anything up to around 50% comments, occasionally even more (typically with utility libraries for example).
Partly, that is because I am not a professional programmer these days and so I very often have to look things up. When I do, I try to make sure I leave myself a comment so I know why I did something in a particular way.
Well if you include in that proactive checks on function/API arguments, I may well agree.
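For instance, a sketch of what those proactive checks might look like at the top of a Function node. The payload shape (an object carrying a numeric temperature) is purely an invented example:

```javascript
// Hypothetical defensive checks at the top of a Function node.
// The expected payload shape (object with a numeric `temperature`) is invented.
function validateReading(msg) {
    if (typeof msg.payload !== "object" || msg.payload === null) {
        throw new TypeError("payload must be an object");
    }
    const t = msg.payload.temperature;
    if (typeof t !== "number" || Number.isNaN(t)) {
        throw new TypeError("payload.temperature must be a number");
    }
    return msg; // only well-formed messages get past this point
}

try {
    validateReading({ payload: { temperature: "21" } }); // string, not number
} catch (e) {
    console.log(e.message); // "payload.temperature must be a number"
}
```

Failing loudly at the boundary like this tends to need fewer comments later, because the assumptions are written down as checks.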
It is a truism of programming that code will be read at least 10x as often as it is written. That's likely to be true even if you are the sole programmer, and the figure is likely to be much higher in an enterprise setting.
I'm going to respectfully disagree at least a little here. It is just as easy to create flows that require reminders as it is code.
A note of caution here. Function nodes can GREATLY simplify logic. I've seen (and written) some very complex flows that are expressed very simply in code.
I would say that it is more important that you understand the user's level of coding knowledge and experience.
Once you find yourself doing, for example, a chain of change and switch nodes to achieve something, you might want to ask yourself whether it might be clearer, simpler and possibly more performant to do that in a function.
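As a hedged illustration (the scaling, rounding, topic and range values below are all invented), here is the kind of chain that might otherwise be three Change nodes and a Switch node, collapsed into one Function node:

```javascript
// A hypothetical example: three Change nodes plus a Switch node,
// collapsed into one Function node.
// Assumed input: msg.payload is a raw sensor value in tenths of a degree.
function convert(msg) {
    const celsius = msg.payload / 10;            // Change node 1: scale
    msg.payload = Math.round(celsius * 10) / 10; // Change node 2: round to 1 dp
    msg.topic = "temperature";                   // Change node 3: set topic
    // Switch node: only pass plausible readings
    if (msg.payload < -40 || msg.payload > 60) return null;
    return msg;
}

console.log(convert({ payload: 215 })); // { payload: 21.5, topic: 'temperature' }
console.log(convert({ payload: 999 })); // null (out of range, message dropped)
```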
But of course, there are no hard and fast rules here. That's what makes it interesting and a creative challenge.
I think that most of the time the code for the actual (business) logic is the easiest part; it is the exception handling that takes most of the time: the part that guarantees that all your system parts, even in a distributed setup, are able to recover, restart, reconnect, get back into sync, etc.
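To illustrate the point, a generic sketch of that wrapper code: retry with exponential backoff around some transient operation. The function names and parameters are invented, not from any particular library:

```javascript
// A generic retry-with-backoff sketch for the "recover/reconnect" plumbing
// described above. `connect` is a placeholder for any operation that may
// fail transiently (a socket, an HTTP call, a DB handle).
async function withRetry(connect, { attempts = 5, baseMs = 100 } = {}) {
    let lastErr;
    for (let i = 0; i < attempts; i++) {
        try {
            return await connect();
        } catch (err) {
            lastErr = err;
            // Exponential backoff: 100ms, 200ms, 400ms, ...
            await new Promise(r => setTimeout(r, baseMs * 2 ** i));
        }
    }
    throw lastErr; // give up after the last attempt
}
```

A production version would add jitter, logging and perhaps a circuit breaker; the point is simply how quickly this plumbing outgrows the business logic it wraps.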
Indeed. Now it is called "Requirements Capture" done by Business Analysts and Systems Design (or Architecture) often done by Solutions Architects.
I learned structured systems analysis when I was starting out - COBOL on mainframes in fact - and it was a great discipline. I am not convinced that newer approaches work very well at least in that they seem to need more people to achieve the same thing.
Well, there's no such thing as a "standard program", I guess, so sometimes the logic code is easy but the wrappers are hard, as you say. But sometimes the logic is nightmarish too.
The approach these days seems to be that getting something out the door is the first priority, and fixing the potential mess later gets second priority. I have seen this even in such esoteric areas as financial systems.
And yes, I was recently amazed at the number of programmers and support staff employed by my local Fujitsu for a project which, in the end, was massively scaled back.
Now that this is way off topic, I'll await a big stick ...
I used to work in an agile-based testing environment, black-box testing at that. With our clients releasing a new version every few weeks, it was easy to see which client had a proper design planned out before starting, and which client was just trying to push in as much functionality as possible while skipping something as basic as unit testing. Then there was the one client that built alarm systems and still used the waterfall approach: releasing once every 18 or so months, but a lot more sophisticated.
We did integration and acceptance testing for all those clients, but for several we started with regression tests to figure out if we had to send it back to the devs immediately or two days in...
As for comments, yes for comments in general, but clear and concise naming of variables/nodes in itself helps a lot too. I put comment nodes in my flows to explain what the basic part that follows does, similar to Julian’s screenshot. On top of that I give switch nodes a name that’s asking the question of what it does. When I use inject nodes to insert a basic value every X seconds/minutes, the name of that inject will be “Every X seconds...”.
For example to update a dashboard graph of the solar panels output I have to poll the data logger and read out an XML file. I get an event through a contrib node when the datalogger goes offline/comes back online, which I store in a flow variable. The rest of the flow that does the polling looks like this in text, with the name of the node displayed.
Every 10 seconds... (Inject node)
When the solar panels are online (switch node)
Connect to the logger (http request node)
Read out the stats (XML node)
Separate the values (switch node)
Display the temperature (dashboard text node)
Display the current (dashboard text node)
Display the voltage (dashboard text node)
Display the power over time (dashboard chart node)
That was the first flow I created in Node-RED. It is very descriptive, but it also takes up a lot of space. Compare it to writing code and adding a comment behind every line to explain what it does: often that's overkill, sometimes it's useful. I still give inject nodes a similar name if the interval is important. It allows you to show your flow to others and have them quickly understand what is happening, and they don't need programming experience for it.
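For comparison, here is a rough sketch of part of that flow as plain code, with the node names turned into comments. Every helper below is an invented stand-in so the sketch runs on its own, not real code from the flow:

```javascript
// The node names from the flow above become comments. All helpers here
// are invented stand-ins so the sketch is self-contained.
const displayed = {};
const panelsOnline = () => true;                                  // flow variable from the contrib node
const fetchLogger = async () => "<stats><temp>21</temp></stats>"; // http request node
const parseStats = xml =>                                         // XML node
    ({ temperature: Number(xml.match(/<temp>(\d+)/)[1]) });
const show = (name, value) => { displayed[name] = value; };       // dashboard text node

async function pollSolarStats() {           // Every 10 seconds... (inject node)
    if (!panelsOnline()) return;            // When the solar panels are online
    const xml = await fetchLogger();        // Connect to the logger
    const stats = parseStats(xml);          // Read out the stats
    show("temperature", stats.temperature); // Display the temperature
}
```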
At the job I mentioned I would also write automated tests. I would abstract the webpages to enums, then write around those enums to specify what was going on. The actual tests then used those enums to show the parameters it was tested on. I’ll add a screenshot when I’m near my computer again. The test cases itself looked very verbose as a result; it was a one liner that got executed elsewhere, but it allowed other test engineers without the programming experience to read over that file and check if the test cases were correct.
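Something along these lines, though every name below is invented rather than taken from the real test code:

```javascript
// A sketch of the enum-abstraction idea described above (all names invented).
// Pages are abstracted to constants, and each test case is a readable
// one-liner over those constants.
const Page = Object.freeze({
    HOME: "/",
    CONTACT: "/contact",
});

// The "executed elsewhere" part: a generic check driven by the enum.
function testCase(page, check) {
    return { page, run: state => check(state[page]) };
}

// The verbose-but-readable test case file:
const cases = [
    testCase(Page.HOME, s => s.title === "Home"),
    testCase(Page.CONTACT, s => s.hasForm === true),
];

// Simulated site state instead of a real browser:
const site = {
    "/": { title: "Home" },
    "/contact": { hasForm: true },
};
console.log(cases.every(c => c.run(site))); // true
```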
And because I had to deal with automated testing, some old bugs could potentially be fixed whenever a new release happened (lmao no, they never were), so the automation code had comments with the issue number for the relevant parts. If it were to crash because of a fix on their side, either the error message would show the relevant issue details, or the comment in the code near that line would.
If I make edits to single files, I often add the date and my initials in a comment with a short description, so I’ll remember later on what I changed and why. I do the same with configuration files too.
So yes, comments are powerful, in Node-RED too, but learning when to use them or another solution instead is always going to be the question to answer.
Haha, indeed. Though I was more referring to validating user input, error checks, etc. Trying to do that in Node-RED often leads to a bit of a rat's nest. I was thinking more about custom nodes when I wrote that.
Yes, totally true, and I am much better these days about adding names to my nodes.
Additionally, naming the inputs and outputs can be really helpful too.
Yes, I do that too, it was especially useful as uibuilder v1 developed. Though I cleared a lot of them out for v2 as a lot of that history was no longer relevant since so much of it got refactored and (hopefully) simplified.
Sadly, test automation is something I still don't have my head around. I've tried a few times but it kept slowing down the creative process and so each time, I've given up.
I'm with you here. I barely have enough time as it is so this part of development has always been the thing not done. I know and see the benefit but as I mostly develop alone, I know what I've done and pretty much debug each part as it's developed making writing of tests seem somewhat more effort than benefit.
My job was test engineering: sitting at a PC 7 hours a day, staring at a screen. They say much of testing is in reviewing documentation and designs, but in my experience it was mostly manually executing test scripts. One test dealt with consistency of fonts, colours and contrast on each page of a website, plus layout. It took about 5-9 hours depending on how unlucky you were with the device or template to test it on. The one referred to as "hw027-test" was particularly infamous for having a contrast between background and text of less than 1:1 on most of the site, whereas 1:2.5 should be a minimum for accessibility. The form that took the most time to test had a dark grey background on the page with light grey text, and inside the form white placeholder texts/labels on a soft-white background.
Having to run the test on a Nokia with Windows phone was another reason why it took so long: the client requested testing on physical devices including WinPhone. The sites were very much not compatible with IE, let alone IE mobile, but they were a Microsoft partner so thought they needed testing for it.
With new releases every 2 weeks (they would roll out to both beta and production at the same time, until one particular new bug rendered all their clients' production systems unusable and a customer requested a rollback), each release looked like 2 major bug fixes, 1 minor bug fix if related, and 10+ new sets of features we had to test (causing 40-60 new bug reports per release on average). Step one on those days was figuring out which part of your manual test script had to be changed to accommodate those new features. The basic script came with about 250-300 test cases to be done manually. They were all worded like "check if the colours are consistent over the homepage and regular other pages", "check if the layout is consistent everywhere", "check if the fonts are consistent", and so on. The first 50 or so tests would take you about 2-3 hours depending on your luck, aka which template and device you got.
To make it better, every time you would have something to note about the check, you’d have to check our local bug tracker to see if it existed already and if not create one with careful reproduction steps and evidence in (marked up) screenshots and videos. Those attachments then had to be stored on version control (I was the only one in the company who used commit messages, every single office document went onto version control). If the issue was known you had to add the version info, template and device to the issue with the date to confirm it was still present. Some of those reports had over 15 lines of confirmations from different releases that they were still present. That included typos in static text (that could not be edited in the CMS), and contrast on the infamous HW027.
After the first set of basic style tests you would have to go into the CMS, make a change there, save it, then reload the page you were testing, do the next (set of) tests, roll back and start over with the next part. The CMS was severely lacking in usability for high-intensity jobs like that.
So in short, getting to automate tests rather than executing them manually was perfect. Absolutely great, even. And just for the record: though I left the company 2.5 years ago, I checked the HW027 beta environment a few months ago and the contrast is still not fixed. It had to be checked and confirmed every 2 weeks... When I joined the company this client had about 1600 open bugs on our local bug tracker. When I left, less than a year later, it was 3000+.
I can sympathise. I worked as a game developer for many years, and the testers had a pretty tough job. It's not, as many people think, playing games all day.