Web technologist in Auckland, New Zealand.
I enjoy being a Dad, snowboarding and building amazing user experiences on the internet.
I was recently discussing the problems of software security and code quality. These articles inspired the conversation;
There are lessons for us software engineers/teams in these articles, especially in “They Write the Right Stuff”. But all of these quality control mechanisms come at a cost.
But first, let’s put the article in perspective;
"If the software isn't perfect, some of the people we go to meetings with might die."
Now; What are the costs?
Most notably, the Shuttle Group followed a strictly waterfall process;
“...about one-third of the process of writing software happens before anyone writes a line of code.”
“The specs for [a change that involves 6,366 lines of code] run 2,500 pages.”
Second, the Shuttle Group was expensive to run;
“... the group’s $35 million per year budget is a trivial slice of the NASA pie, but on a dollars-per-line basis, it makes the group among the nation's most expensive software organizations.”
Extrapolating some other numbers; The program had about ~424,400 lines of code in 1996, which had been worked on for 21 years at a total cost of ~$700M. That's ~$1,650 per line of code. (Not that lines of code is a meaningful measure of code complexity.) I've not accounted for inflation.
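That per-line figure is simple division; a quick sanity check in any POSIX shell (figures as quoted above, no inflation adjustment):

```shell
# Back-of-envelope: total programme cost divided by lines of code.
# Figures from the article: ~$700M over 21 years, ~424,400 lines (1996).
total_cost=700000000
lines=424400
echo "$((total_cost / lines)) dollars per line"   # → 1649 dollars per line
```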
Those economic costs cover just the software work itself; i.e. travel and office expenses, but predominantly, people's time (salaries).
I first read that article about 9 years ago. After rereading it last weekend, I realised it had a profound impact on how highly I value software quality; I have always strived for the most elegant, simple and readable code.
More recently I've learned that it's almost never worth the effort to get software to its highest possible quality. The problem with making software right in the first round of development is that you probably haven't found the best solution yet. You probably missed a more elegant solution. And you probably have not fully understood the entirety of the problem and all its edge cases.
Further, the more time you spend refining a work, the more emotional value you give it. This makes it harder to recognize or admit that it might be the wrong solution, and much harder to delete.
(Good code deletion skills are highly valuable, by the way!)
In other words, we have to find the balance between "It works" and "It is right" (correct, easy to read, simple, elegant, deduplicated, etcetera), considering the risk of not making it right (yet) along with the time, people and money available.
Usually it is better to get something out that mostly works but is not totally “right”, and only make it “right” once we understand the problem and possible solutions better. That might be hours or months later, depending on the scope and scale of the problem and solutions, how fast the feedback loop is, and what other priorities come up in the meantime. Or it might be never.
Often "making it right" happens when that code needs more work for a new feature. That is why we groan at the old smelly corners of our code that we have to work with, the ones that were "so awfully written". And why we like to blame the since-departed colleague. We fail to recognize that those smelly corners were probably perfect for their time (given the other constraints).
This illustrates the importance of getting a fast feedback loop. Automated testing, good logging & analysis tools, frequently publishing code (even if it's not quite right), a small core of highly engaged customers (and/or awesome testers) all help tighten that feedback loop.
Automated testing is especially important when refactoring old smelly corners. Every edge case and subtlety must be captured by tests before you can safely start refactoring or modifying it. But once you have complete test coverage, you can unleash the hounds on the refactoring without concern for breaking it.
Obehave offers automated website tests for everyone. Anyone can write automated tests with Obehave, not just programmers. Tests are written in Gherkin, a plain-English syntax pioneered for Behaviour-Driven Development (BDD). Obehave tests can run on a schedule; every hour, day or week. And they can integrate with your continuous integration environment.
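To give a flavour of Gherkin, here is a hypothetical scenario; the page, fields and step wording are made up for illustration, not Obehave's actual step library:

```gherkin
Feature: Contact form
  Scenario: Visitor sends a message
    Given I am on the "Contact" page
    When I fill in "Message" with "Hello!"
    And I press "Send"
    Then I should see "Thanks for your message"
```

Each step reads as plain English, which is what lets non-programmers write and review the tests.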
I’ve recently come to appreciate
Make it work, make it right, make it fast.
— Kent Beck
Applied to an individual development task, it means distinctly separating each of these phases of development, and only transitioning to the next phase when the previous one is complete;
Here are my recently learnt tips for advanced usage of the wonderful Webpack
--hot does not do what you would expect.
webpack-dev-server does what you'd expect --hot to do, even without that option.
--lazy stops WDS from doing what you'd expect.
--hot allows modules (files) to be updated in place, without reloading the webpage.
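As a sketch, assuming the webpack 1.x-era CLI these tips were written against (exact flag behaviour may differ in later versions):

```shell
# Live-reloading dev server; refreshes the browser even without --hot:
webpack-dev-server --inline

# Hot module replacement: swap modules in place, no full page reload:
webpack-dev-server --inline --hot

# --lazy compiles only on request, losing the watch/hot behaviour above:
webpack-dev-server --lazy
```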
The Drupal security team published a PSA to warn about upcoming security advisories. I shared my advice and predicted attacks within an hour of the security advisories being published. The security advisories are now published. Here is my followup.
I applaud the Drupal Security Team for warning about the highly critical updates. However the public service announcement (PSA) left the impression that this event was going to be much more serious than it was. Such a PSA would have been perfectly appropriate for SA-CORE-2014-005 "Drupalgeddon". But the only PSA for that one came in hindsight.
I guess it is reasonable for the Drupal Security Team to be over-cautious, especially given the lessons learned from the Drupalgeddon fallout. And of course, such decisions and criticisms are much easier with hindsight.
But now I am concerned about how the Drupal Security Team can realistically raise the alarm further when there is another vulnerability as serious as Drupalgeddon. Even if they raise the alert level using stronger language in the PSA, will people still believe them? It reminds me of the boy who cried wolf.
Of course serious vulnerabilities like these are rare events in Drupal, so there is not yet a standard to compare alert levels to.
Just arrived here? Read my followup first.
The Drupal security team announced multiple highly critical updates to Drupal contrib modules in PSA-2016-001. Expect attacks within an hour of the announcement, which lands 18 hours after this article is published. This is probably going to be Drupalgeddon all over again.
If you are prepared, you will save yourself a lot of time. If you are late or too slow, you will probably find yourself with a lot more work, e.g. the rescue workflow for Drupalgeddon 1.
Don't skimp on the first two. And do at least one of "3. Update a contrib module" or "4. Learn how to apply patches". Which one you choose depends on your skills and on how out of date the contrib modules on your Drupal websites are. Ideally, do both steps 3 & 4; you might find one of them significantly challenging.
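Steps 3 and 4 boil down to commands like these; the module name and patch URL are placeholders for illustration, not the modules covered by the advisories:

```shell
# 3. Update a contrib module with drush (read the release notes first):
drush pm-update some_module

# 4. Or apply a patch by hand, from the module's root directory:
cd sites/all/modules/some_module
wget https://example.com/fix.patch
patch -p1 < fix.patch
```

Practising both on a non-critical site beforehand is what makes the real event fast.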