
Quality: Beginning vs. Afterward

Yes, PragProWriMo 2009 ended yesterday. However, I can’t introduce the topic of quality and then just stop there, can I? After this it’s time to start working on my book proposal.

There are two approaches to building a quality product: build it in from the beginning, or beat it in afterward. The former approach requires a lot of discipline and effort from the engineering team. The latter requires a lot of testing and, again, effort from the engineering team.

Afterward

Let’s talk about beat-it-in-afterward quality first, since that’s how it’s usually done. It’s implicit in the waterfall development method that dominates the industry: design, build, test. Test comes last. Testing isn’t the only way to get to quality, but it gets the lion’s share of attention.

Your company’s test department looks at the product specification (if there is one), writes a plan for how they’ll test the product, and then spends a tremendous amount of time executing the plan. The product usually blows up quickly. It goes back to engineering, they fix some bugs, give another version to the test department, that blows up in some other way, and so it goes back and forth for many months (even years).

This cycle is known as test-fix. It’s extremely common and never completely avoidable. Test-fix is detrimental, however, in a number of ways:

  • It’s impossible to know when the product will be good enough to ship. One bug may be hiding five more that you don’t know about. And half the time, fixing one bug creates another. It’s common for a product to enter test and for the bug count to go up over time rather than down.

  • The fixes applied during test-fix usually don’t take the long view—they’re just enough to get the bug closed. Therefore the product design devolves into a mess of fixes on top of fixes.

  • When it comes time for the next release, your fixes-on-top-of-fixes code base becomes increasingly hard to work with, making development harder and more costly, and the resulting product usually has more bugs than the first one had.

There’s one additional step you can add to test-fix that improves the situation: refactoring. “Refactoring” is a fancy word for improving the structure of code without changing its outward-facing functionality. This can range from changes as simple as renaming functions to make more sense, to more complicated ones like reworking an object hierarchy to better match its problem domain.
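To make that concrete, here’s a tiny sketch in Python. (The function and its names are invented for illustration, not from any particular codebase.) The outward behavior is identical before and after; only the structure improves:

    # Before: a terse function whose name and shape hide its intent.
    def calc(items):
        t = 0
        for p, q in items:
            t += p * q
        if t > 100:
            t *= 0.9
        return t

    # After: identical results for every input, but the names now say
    # what the code means and the discount rule stands on its own.
    BULK_THRESHOLD = 100
    BULK_DISCOUNT = 0.9

    def subtotal(items):
        return sum(price * quantity for price, quantity in items)

    def order_total(items):
        total = subtotal(items)
        if total > BULK_THRESHOLD:
            total *= BULK_DISCOUNT
        return total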

Refactoring is not common, however, and is actively discouraged by some engineering managers. The perception is, “if the code works, don’t mess with it.” The flaw in this logic is that there’s a difference between code being just good enough to ship and actually being quality code.

However, there is a very real danger to refactoring: can you be confident that you won’t break something? If half of your bug fixes create a new bug, you can expect half of your refactoring changes to create one, too. Without some kind of safety net, refactoring is asking for trouble.

From the Beginning

What if, instead of testing the product when it’s done, you test the product before you create it? This may seem like a question equivalent to “what is the sound of one hand clapping?” but stick with me for a moment. When you set out to write some code, you have an idea of what it should do. So, starting with a small chunk, write test code that verifies your expectations of what the application code should do. Then write the application code and make sure it passes the test.
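In miniature, the loop looks something like this. (I’m using Python’s unittest here; slugify and its rules are a made-up example, not anything prescribed.)

    import unittest

    # Step 1: write the test first. It encodes your expectations of
    # what the application code should do.
    class TestSlugify(unittest.TestCase):
        def test_lowercases_and_joins_with_hyphens(self):
            self.assertEqual(slugify("Hello World"), "hello-world")

        def test_strips_surrounding_whitespace(self):
            self.assertEqual(slugify("  Trimmed  "), "trimmed")

    # Step 2: write just enough application code to make the tests pass.
    def slugify(title):
        return "-".join(title.strip().lower().split())

    if __name__ == "__main__":
        unittest.main()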

This is called Test-Driven Development, popularized by Kent Beck in his book of the same name. The big-picture idea, however, goes back much further. For decades the industry has tried to create software requirements that, when expressed in sufficient detail, would allow the program to be created automatically. Experts in artificial intelligence sought to achieve this goal and, lately, outsourced projects have tried the same thing. Whether using AI logic or cheap labor, the implementation should just take care of itself, right?

Turns out that plan doesn’t work out so well. As mentioned earlier, at the start of a project, you simply don’t know what you don’t know. What’s different about test-driven development is that you don’t try to specify every requirement for the program at the outset. You start with a couple of things you know. You learn something from that. Then you take what you’ve learned and build on that. The tests grow organically, right along with the application.

Testing, Trust, and Toyota

When you build a program with automated tests, you can finally trust that the program works. You may still have some bugs—writing good tests is a learned skill and takes practice—but you can expect a tenth (or less) of the bugs of untested code. The tests serve you when first writing the code, and also give you the safety net to refactor with confidence.

Furthermore, automated tests change the design of the program as you write it. The tests force you to think about the abstractions in your program and the interactions between components. Your program becomes far more modular because such testing requires modularity.
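As a hypothetical sketch of that pressure toward modularity: code that reaches directly for a real mailer or the system clock can’t be tested in isolation, so the tests push you to pass those collaborators in. (The names below are illustrative, not from any real system.)

    # Because the mailer and current time are passed in rather than
    # reached for globally, this function can be tested in isolation.
    def send_reminder(user, mailer, now):
        """Send a reminder if the user's trial has expired."""
        if user["trial_ends"] < now:
            mailer.send(user["email"], "Your trial has ended")
            return True
        return False

    # A test substitutes a tiny fake that just records calls:
    class FakeMailer:
        def __init__(self):
            self.sent = []

        def send(self, to, subject):
            self.sent.append((to, subject))

    mailer = FakeMailer()
    user = {"email": "pat@example.com", "trial_ends": 5}
    assert send_reminder(user, mailer, now=10)
    assert mailer.sent == [("pat@example.com", "Your trial has ended")]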

Then comes the real magic: when you have extremely modular code that you trust, you can start reusing code. Code reuse has been an elusive goal of the programming illuminati, both in industry and academia, for years. Most assumed it could never be done, because most code is too entangled with other code to allow for proper reuse.

Let’s revisit the auto industry for an example. I drive a Toyota FJ Cruiser, a weird-looking retro-modern sport utility vehicle. Toyota created this as a concept car in 2003 with no intent of actually putting it into production. Reaction to the concept was so strong that Toyota decided to ship it, and in 2006 the production FJ was on the streets.

Unlike most concept cars turned into production vehicles, the final FJ Cruiser looks almost exactly like the concept. Toyota tweaked a few things, ran it through all the certification testing, and shipped it with confidence. How could they do that? Easy for Toyota: the only “new” parts of the FJ were its skins.

Toyota builds modular components and tests the hell out of them. They have teams that build engines, frames, suspensions, and so forth. When they do a new car, they pull the pieces they need, almost Lego-style, from a bin of proven components. The FJ, for example, uses the same frame as the Land Cruiser Prado, a mid-size variant of the Land Cruiser that’s not sold in the USA. They already know that frame works, even in harsh off-road conditions, so they have confidence that the FJ will work well off-road. And indeed it does.

Going back to our own industry, Amazon does much the same thing with their service-oriented architecture (SOA). They build modular web services—for example, search, ratings, or suggestions—and any page that needs one of those functions just opens an HTTP connection to that service. As a result, Amazon’s web site has tremendous stability (thanks to testing) and agility (thanks to modularity).
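A page-side call to one of those services might look like the sketch below. (The host, path, and response shape are all made up for illustration; a real SOA adds authentication, retries, and failure handling.)

    import json
    from urllib.request import urlopen

    def fetch_suggestions(product_id):
        # Hypothetical internal service endpoint.
        url = f"https://suggestions.internal.example/v1/products/{product_id}"
        with urlopen(url, timeout=2) as response:
            return json.load(response)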

Exploration

The one part I’ve left out is what happens when you can’t write a test because you don’t know what the result should be. This happens when you’re blazing a new trail—sometimes you just need to explore through the woods to figure out where the trail should go. Go explore with wild abandon and don’t worry about the tests.

Then throw that code away. Exploration is one thing, production-quality code is another. The purpose of exploration is to discover the lay of the land and where your trail should go. Use that education to create the tests that drive the production code.
