Friday, January 02, 2015

What Mythbusters Can Teach Software Testers

I hope you have all had a great Christmas and New Year. Normally at this time of year I write a post about goal setting, reflecting on the past year for lessons learned, or something similar. Those are still important things to do.

I thought for this year, let's go with "And now for something completely different."

Between all the bowl games and Christmas movies on TV, the Science Channel has been showing a Mythbusters marathon for the last couple of weeks. That, my friends, is a lot of episodes - more than I could ever take in. But I have enjoyed watching a few of them.

I often refer to the Mythbusters show when explaining session-based testing in some of my software testing classes and workshops. Mythbusters is a perfect example of how to set up a test, perform it, and then evaluate the results.
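To make that idea concrete, here is a minimal sketch of what a session record might look like. The class, its fields, and the example charter are all invented for illustration; this is one possible shape, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class TestSession:
    """One time-boxed test session: set up, perform, evaluate."""
    charter: str                  # the one question this session answers
    time_box_minutes: int = 60    # fixed duration, like a Mythbusters build day
    notes: list[str] = field(default_factory=list)
    verdict: str = "in progress"  # e.g. "confirmed", "busted", "inconclusive"

    def log(self, observation: str) -> None:
        self.notes.append(observation)

# Hypothetical example: a session framed the way Mythbusters frame a myth
session = TestSession(charter="Can checkout survive a double-click on 'Pay'?")
session.log("Set-up: seeded cart with one item, stubbed the payment gateway.")
session.log("Performance: double-clicked 'Pay' 20 times; saw 2 duplicate charges.")
session.verdict = "busted"  # the claim 'it handles double-clicks' did not hold
print(session.verdict, *session.notes, sep="\n")
```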

However, in watching several episodes back-to-back, some clear trends emerged that I had not previously related to what we do as software testers.

Planning

You don't see the written plans, but believe me, they are there. Of course, you have the episode script, but there is much more going on behind the scenes. The crew has to get all the resources in place for the show, and some of those resources get expensive. In one episode I counted eight cars destroyed. Then you have explosives and the permits to use them, dangerous chemicals and the permission to handle them, special cameras, and other devices. The list could go on, but the point is that those items, just like the items you need for testing, do not just magically appear. It takes planning.

Teamwork

It is impressive how the Mythbusters collaborate and get along. They may disagree on some things, but each one is committed to letting the test tell the story. They ask for input and feedback, and it's refreshing not to see the drama that plays out in some of the "reality" shows.

Expert Consultation

When something is beyond their own expertise, the Mythbusters bring in a subject matter expert to evaluate the test results.

Measurement

The Mythbusters are fanatics when it comes to measuring what they do. Not only do they measure things, but they are careful to take good baseline and control measurements BEFORE they perform the test. If the Mythbusters can measure how much pressure it takes to get candy away from a baby, we can measure what we do in testing.

Estimation

The Mythbusters allocate specific amounts of time to each of their test tasks, which allows them to estimate the time needed for the entire test. The time for each task varies with the complexity of the test. Setting the scope of the test is very important: the test is focused on ONE thing, not several. That allows them to stay right on point and on time.

It appeared to me that the total test time broke down into:
  • Planning - They discuss the urban legend to be busted or confirmed and how they plan to conduct the tests.
  • Set-up - This often takes up the largest percentage of time, depending on the specific test. In some cases, they go to very elaborate lengths to recreate, as closely as possible, the conditions of the original (supposed) event. In one show, the crew built a jail cell out of cinder blocks, poured concrete into the inner space, and reinforced it with rebar. That alone would have taken multiple days.
  • Performance - The tests themselves can go quickly, but the trick is that many of the tests are repeated, either to confirm the findings or to get a large enough sample size. In fact, one impressive observation is that they will try a test several different ways. They might start with a model, then move to the real thing, such as a car or motorcycle.
  • Evaluation - This often goes quickly because the test has been so clearly defined, measured and documented.
  • Tear-down - This is the part we don't see, but in the case of the mock jail cell, I could see it taking a while.
I think a starting point for testers is to measure what percentage of effort each of these tasks requires, based on varying levels of test complexity.
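As a small sketch of that kind of measurement, here is one way to turn an effort log into percentages. The task names mirror the breakdown above; the hours are invented numbers you would replace with your own records:

```python
# Hypothetical effort log (hours) for one test effort, by task.
# All numbers are made up for illustration.
effort_hours = {
    "Planning":    3.0,
    "Set-up":     10.0,
    "Performance": 4.0,
    "Evaluation":  2.0,
    "Tear-down":   1.0,
}

total = sum(effort_hours.values())
for task, hours in effort_hours.items():
    # Print each task's share of the total effort as a percentage
    print(f"{task:<12} {hours:>5.1f} h  {hours / total:6.1%}")
```

Tracked over several projects, those percentages become a rough estimation model: given a complexity level and one known task, you can project the rest.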

Expected Results

The Mythbusters define what they expect to see either confirmed or busted. In fact, they know in advance what constitutes "confirmed" or "busted" and what the measurements must show to support the findings. It's at this point the viewer gets the clear impression that these guys practice science. Personally, I also practice testing as a science, not a guessing game or a diversion. However, much of the show's popularity comes from the fact that they have made science (and testing urban legends) fun and entertaining. I wonder if we could do that in testing?
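Borrowing their habit of deciding the verdict criteria up front, here is a tiny sketch. The function name, the 200 ms expectation, and the tolerance are all hypothetical examples, not a standard:

```python
def verdict(measured: float, expected: float, tolerance: float) -> str:
    """Classify a result the Mythbusters way: decide up front what
    'confirmed' means, then let the measurement tell the story."""
    if abs(measured - expected) <= tolerance:
        return "confirmed"
    return "busted"

# Hypothetical expectation: search responds in 200 ms, give or take 50 ms.
print(verdict(measured=310.0, expected=200.0, tolerance=50.0))  # -> busted
```

The point is not the code; it is that "confirmed" and "busted" were defined before the test ran, so the result is a finding rather than an opinion.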

No Boredom

This is the challenge for testers. We get to blow stuff up too, just not in a physical sense. Let's face it, a system crash is not as exciting as a car crash (unless the system you are testing operates a car, but we won't go there...). So, how do we make software testing exciting and perhaps even entertaining?

By the way, if you haven't seen my video of testing a mobile phone radar app during a tornado, that was exciting! But, as the Mythbusters say, "Don't try this at home."

The best answer I can give is to start thinking about testing software the way the Mythbusters think about their tests. Create a challenge and build some excitement about confirming or busting the ways the software should work. You may not get to see a literal explosion, but you may make some people's heads almost explode in frustration when they see that really bad defect (I mean a showstopper, dead-in-the-water kind of defect) the day before the release is supposed to go out.

Some companies hold "bug bashes," but in my experience, most of the problems reported are either superficial or not actually problems at all. The issues are often misunderstandings of how the application should work. In one case, we held a bug bash at a company with customer service reps performing the tests. Over 200 issues were logged over a four-hour period, and only 3 of them were confirmed as actual defects!

Contests are fun, but be careful. Don't make the contest about who finds the most bugs. You will get a lot of bug reports, and some of the better testing may come from tests that find few, but important, defects.
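One way to keep such a contest honest is to score by severity rather than raw count. This is just a sketch of the idea; the severity labels and point values below are made up and would need tuning for your team:

```python
# Hypothetical scoring rule: weight bugs by severity, not raw count,
# so one showstopper outscores a pile of cosmetic nits.
SEVERITY_POINTS = {"showstopper": 50, "major": 10, "minor": 2, "cosmetic": 1}

def score(severities_found: list[str]) -> int:
    return sum(SEVERITY_POINTS.get(s, 0) for s in severities_found)

print(score(["cosmetic"] * 20))          # 20 points for twenty nits
print(score(["showstopper", "major"]))   # 60 points for two real finds
```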

Perhaps a contest for the "Bug of the Week" or "Most Humorous Bug" (Hint: error messages are great places to look for humor) might be interesting. I would love to hear your ideas on this. I would also love to hear about anything you or your company does to make testing exciting.

How's that for your New Year's challenge?

Now, go bust some of your own software myths!