Methods of Software Development

As a product gets larger and more complex, the development team needs to answer certain questions:

  • “When will it be done?” The rest of the company is waiting for the product to ship. They’ve got a rollout plan, a marketing blitz, perhaps manufacturing… all kinds of stuff that hinges on when the product will be done. So are we there yet?
  • “Does it work?” Complex products are hard to test. How do you mitigate risk and assure quality, while still shipping enough cool features?

  • “Speaking of features, can you also make it do [x]?” You’ll always get people asking for one more change. I guarantee it. When and how do you accommodate these requests?

To deal with these questions, the industry has come up with several software development methods, each promising, “we’ve got a process for that.”

Chaos (a/k/a Start-Up Mode)

One perfectly valid answer to all the above questions is, “screw you, we’re working on it.” This works in start-ups, where everyone is usually cranking away eighty hours a week to get the first product out the door. It’ll be done when it’s done. It might work. The features are what we say they are.

Obviously this isn’t the most co-operative of methods. The founder of the company is probably leading the product development, and he gets to call the shots, so co-operation isn’t so important. As the company gets larger, however, chaos doesn’t cut it. MarCom needs to create the product’s web site. PR needs to book press briefings. Product managers have feedback from beta customers. At some point, everyone’s got to play nice together or the company will implode.


Waterfall

Someone long ago decided to bring order to the chaos, and created this logical process:

  • Write a specification
  • Write the code
  • Test the code against the specification
  • Ship it
  • Profit!

This method is called “waterfall” based on the Gantt charts used to illustrate it:

Looks good on paper, right? It’s what every project manager learns and, by golly, it’s so logical it must be good. Only problem is, it doesn’t work worth a damn in the technology world.

Problems start from the very beginning with specification: what exactly are you trying to build? Some markets have a long history of products. If you’re making sewing machines, there have been 10,000 sewing machines designed before, so specifying the design of number 10,001 isn’t exactly rocket science. But what about the market for something like Twitter? There was no market for Twitter because there weren’t any Twitters before. Twitter made its own.

Writing a program without a stable specification is problematic; how do you know you got it right? So the programmers just charge off and build something that might be close. Usually feature requests—i.e. changes to the specification—come in on a daily basis, too.

Then, when the program is all done, comes testing. The program doesn’t work. It never does. So everyone goes into bug-fix mode. Quality takes one step back for every two steps forward due to bugs accidentally introduced while trying to fix other bugs. Meanwhile, any notion of an elegant program design has long since been thrown out the window.

Then, hopefully, there’s ship. The company throws a big party. The exhausted programmers go home. The next day, support calls start coming in from customers. Need I continue?

I wish I could say “…but the industry realized its insanity and moved on.” Sorry. Waterfall is still the most common modus operandi of product development.


Spiral

There’s another philosophy called “spiral” that seeks to address waterfall’s shortcomings by assuming multiple releases of the product:

It’s really just waterfall over and over again. Same rules apply. I think most project managers, even those trained in waterfall, have realized that spiral doesn’t bring anything new to the table.


Agile

The key problems in waterfall are: uncertainty of specification, change requests during development, and trying to bolt quality on at the end. Some folks decided to turn things around and assume:

  • You can’t specify anything more than about a month’s worth of work, because you simply don’t have enough information to do it accurately.

  • Since we’re re-specifying the product every month (or whatever), all change requests can wait until next month; nothing is that important.

  • Test from the beginning.

That’s agile in a nutshell. At the end of every month, you’ve got a potentially shippable product. It’s great for the engineering department, because you learn as you go what the product should do. For something that’s never been done before, you have to build something first, see whether the product is making sense, and let that information drive your next month.

Agile is problematic, however, for the rest of the business. Remember, our key questions were: when will it be done?, does it work?, and can you also make it do [x]? Agile answers the last two questions great. The first question, when will it be done?, is answered with a counter-question: what is your definition of it? It’s hard to say when it will be done when it keeps changing.

There’s another common problem with agile: many companies try to reap its benefits but skip the “test from the beginning” part. You simply cannot hope to ship a quality product every month without comprehensive, automated testing. Yet programmers suck at writing tests. Managers suck at enforcing them. Honestly, it’s hard work, but you cannot do agile successfully without tests that constantly assure the product’s quality.
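“Test from the beginning” just means the test ships alongside (or before) the code it verifies, so every monthly build is checked automatically. A minimal sketch of the idea, using a hypothetical slugify function invented purely for illustration:

```python
# "Test from the beginning": the test lives next to the feature it
# verifies and runs on every build. slugify() and its spec are
# hypothetical, invented for this sketch.
import re
import unittest

def slugify(title: str) -> str:
    """Turn an article title into a URL-friendly slug."""
    slug = title.strip().lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # collapse non-alphanumerics
    return slug.strip("-")

class TestSlugify(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Methods of Software Development"),
                         "methods-of-software-development")

    def test_punctuation(self):
        self.assertEqual(slugify("Does it work?!"), "does-it-work")

if __name__ == "__main__":
    unittest.main()
```

The point isn’t the function; it’s that the suite runs unattended on every check-in, so next month’s changes can’t silently break last month’s shippable product.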

