Friday, December 27, 2013

Google's Hummingbird, Target misses the Target, and the Healthcare Site that Still Doesn't Work

As a software tester, webmaster and business owner, I learned early on that 1) I would have to get good at being online, 2) then get good at being found online, and 3) be very good at serving customers well online. From the time I first learned about Search Engine Optimization back in the late 90's until (almost) the present day, everything has been about keywords.

In case you haven't heard yet or noticed (which is understandable because Google didn't make a big deal of it), Google just installed a new search engine called Hummingbird, not just a tweak like the recent Panda and Penguin. This is a new V8 engine dropped in the car.

I first noticed something was up when my site dropped from the #3 position for "software testing training" to #19, then #23. The sites now at the top (with the exception of SQE, who you would expect to be there) all rank low in the individual things that USED to count, like keywords, backlinks, etc. (I think I'm back up to Page 1 again.)

I started reading the SEO blogs and learned about the significance of Hummingbird, which Google says is minimal with over 90% of sites unaffected. I find that contradictory (why make such a big change if only 10% of sites are affected?), but that's another article for another time.

The point is that Google says it has noticed (perhaps with help of an unnamed government entity) that people are forming their search queries in different ways these days. They cite mobile users as an example. Let's say you are walking down the street and want to get a pizza, so you ask Siri "where can I find pizza in downtown Chicago?" (OK, you probably don't need to ask Siri for that information, but hang with me here.)

So I tried a test with the query, "Tire Chains". I got results for where to buy tire chains, how to apply tire chains, which types of tire chains are best - all on page one. So, the challenge is clear. Millions of sites are available, so how do you cut through the clutter? It's the "long tail effect", essentially.

One of the other things that appears to be a criterion for doing well with Hummingbird is a good social site presence. My friend Mickey O'Neill has a great blog post about why a business needs to be on Facebook, even though you may not think you need to be on Facebook. A lot has to do with how the search engines use social sites to give authority to web sites. (That's another thing Hummingbird likes - sites that are authoritative.)

You may not have a web site and you may not care about how Google ranks sites, but it affects you anyway. That's because if you use Google like most people do, you will have to change how you phrase your searches. You will probably start phrasing them in the form of questions. (If you want a laugh, perform a Google search for "What is Hummingbird?") If you test web sites, I would suggest making SEO tests a part of your test suite, even though you may have people already doing that.
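If you do decide to fold SEO checks into a test suite, here is a minimal sketch of what one might look like, using Python's standard-library HTML parser. The specific elements checked (title, meta description, a single h1) and the sample page are my own illustrative assumptions, not a definitive SEO checklist:

```python
from html.parser import HTMLParser

class SEOCheck(HTMLParser):
    """Collects a few on-page elements search engines are known to weigh."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.h1_count = 0
        self.meta_description = ""
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "h1":
            self.h1_count += 1
        elif tag == "meta" and attrs.get("name") == "description":
            self.meta_description = attrs.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

def seo_issues(html):
    """Return a list of basic on-page SEO problems found in the HTML."""
    checker = SEOCheck()
    checker.feed(html)
    issues = []
    if not checker.title.strip():
        issues.append("missing <title>")
    if not checker.meta_description:
        issues.append("missing meta description")
    if checker.h1_count != 1:
        issues.append(f"expected one <h1>, found {checker.h1_count}")
    return issues

# A sample page with a title and h1, but no meta description.
page = ("<html><head><title>Software Testing Training</title></head>"
        "<body><h1>Welcome</h1></body></html>")
print(seo_issues(page))
```

A real suite would fetch live pages and cover much more (structured data, canonical tags, page speed), but even a smoke test like this catches regressions when a template change silently drops an element.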

The most sobering thing about all this is that basically, everything written about SEO before December 1, 2013 is now obsolete due to Hummingbird. I'm not saying SEO is dead, but with keywords playing such a minor role, now the focus is on quality content and links. One more thing to think about is that Google knows where you have been on the web (along with the unnamed government entities), so they will use that also to select the results they think you want to see.

My prediction is that there will be much more "tweaking" to come on Hummingbird.

OK, now on to some other big topics, briefly....

Boy, oh boy. The data breach with Target stores will have some big ripples. Now it is known that PIN numbers (encrypted) were also snatched. Let's hope the encryption is strong. When I heard that the cause had been found and fixed, I kind of chuckled to myself, "Breaking News. Horses Stolen, Gate is Now Locked."

"The attack began Nov. 27, the day before the Thanksgiving holiday, and continued until Dec. 15, making it the second-largest data breach in U.S. retail history. The largest breach against a U.S. retailer, uncovered in 2007 at TJX Cos. Inc., led to the theft of data from more than 90 million credit cards over about 18 months."

For those of you who may be keeping score, the cost of the TJX data loss was over $250 million.

To me, the lesson is that data theft occurs at many levels and now it is up to the individual to monitor accounts closely. The bad news is that law enforcement is not equipped to chase down the crooks, so the stores and banks have to absorb the losses, which eventually get passed on to you and me.

I'll have more on this later when some of the facts emerge.

Speaking of facts emerging...

I've been holding back on the whole HealthCare.gov debacle until I could find a point of entry to even make comments. The story still evolves daily. Clearly, this is going to be the classic software project failure story for years to come. I don't think lack of testing was the problem. I think this was classic government procurement meets clueless project management and sponsor oversight, all combined with a fixed date deadline.

The sad part of this story is that people's healthcare is at stake. The performance issues are just the tip of the iceberg. Data security is high-risk, as is data interoperability with insurance companies. I'm sure there will be other shoes to drop in this story.

Oh, and this just in...

"The news traveled like wildfire across Facebook and Twitter — a computer glitch had triggered unbelievable Delta ticket prices on the airline’s website and other travel sites. Among the bargains: roundtrip tickets to Hawaii for $6.99, a first-class flight for $12.83 from Oklahoma City to St. Louis and a $132 fare from Houston to San Francisco, again first-class. Cory Watkins, a travel agent in Oklahoma, told CNN that he’d paid $1,387.38 for 12 flights for himself and clients for first-class trips all over the U.S., saving thousands of dollars.

Perhaps the most unbelievable part of all is that Delta is going to honor the fares. Airline spokesman Trebor Banstetter couldn’t say how many tickets were sold during the fare glitch — which occurred during part of Thursday morning — but said Delta would allow the flight tickets to go ahead."

That, my friends, is the high cost of software defects. The trend does not seem to be getting better. If anything, it seems like each new day brings new stories of software failures. I think the future is bright for software testers. Now, I need to go finish my macro to continually search for "cheap air fares to Hawaii." It will send me a text message each time it finds a fare of $20 or less. :-)
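For the curious, the core of a fare-watching macro like the one I joked about could be sketched in a few lines. The fare data here is made up, and the alert function is a stand-in for a real SMS gateway, but the filtering logic is the whole trick:

```python
def bargain_fares(fares, max_price=20.00):
    """Return the fares at or below the alert threshold
    (glitch fares included, presumably)."""
    return [f for f in fares if f["price"] <= max_price]

def alert(fare):
    # Stand-in for a real SMS gateway call.
    print(f"ALERT: {fare['route']} for ${fare['price']:.2f}")

# Hypothetical fares, echoing the Delta glitch prices above.
fares = [
    {"route": "OKC-HNL", "price": 6.99},
    {"route": "OKC-STL", "price": 12.83},
    {"route": "HOU-SFO", "price": 132.00},
]

for fare in bargain_fares(fares):
    alert(fare)
```

In practice the hard part isn't the filter; it's getting a reliable fare feed, which is exactly the data quality problem the airlines themselves keep stumbling over.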

Saturday, November 09, 2013

Software Testing Training

Since I do a lot of training in the field of software testing and QA, I reflect often about what makes training "stick". I also think about how many of the issues we face on projects are knowledge-based and experience-based.

A few years back, I commuted weekly from Oklahoma City to San Francisco to consult on a long-term testing center of excellence project. My hotel was near the Apple Store and I had "spare time" in the evenings, so I decided to avail myself of the free training offered on how to use common Mac applications. I was amazed at some of the functionality that was not obvious in the applications. Attending these classes reinforced to me that we all need training.

We can flail around, experiment and generally waste time and money, trying to figure things out on our own...

Or...we can take some time and learn first about what we are doing.

We in IT have a disease. I call it the "dive right in" disorder. We tackle problems even before we know what they are. Therefore, we can fail to apply the most effective solution.

Software testing has so many facets that training is required on a continual basis. Two days a year doesn't cut it.

If your company doesn't pay for it, you owe it to yourself to find a way to self-study or invest in your own training. There are more options now than ever. Even buying a $5 used book on testing can yield big results.

Of course, I can help as well. We have all kinds of free tutorials on my YouTube channel, as well as software testing e-learning and other courses.

If you are a test manager, check out some of my in-house software testing courses for your team. If I can help or give any advice, just contact me from my website at

Tuesday, October 22, 2013

Principles Before Practice - Utah QA Group

Thanks to everyone that came out to hear my presentation tonight at the Utah QA group!

Here are the slides in PDF format.  I'll be posting the video soon on YouTube.



Tuesday, October 01, 2013

Obamacare, Where Art Thou?

There are so many things that come to my mind about the Obamacare launch today.

1) That so many state exchanges experienced problems makes me wonder: have none of these IT shops heard about performance testing? Failover servers? Load balancing? In a way, this was almost engineered (my apologies to all engineers) to fail because it's the "everybody show up at the same time" scenario.

2) There were reportedly functional defects in some states that prevented people from even setting up a user account.

3) Once again, the idea prevailed that just because someone in government declared "Let there be a system for...", people assumed the resulting  system would be on schedule, adequate quality, etc. There are no magic IT wands. But, on the other hand, how hard is it to build a web site that is just a directory to other sites? Of course, I'm just the consultant looking in from the outside. I've seen simple problems grow into complex monsters once vendors and government meet.

4) Then, of course, there are the flaws in the requirements concerning rate calculations.

I'm glad my state of Oklahoma opted out of building its own exchange.

I will be surprised if the problems are resolved quickly. I've seen these situations before and the more people try and fail to get access, the more they keep trying. It's a death spiral of performance.

Maybe, the people from United Airlines and the various state exchanges could get together and we could all have free insurance!

******* Update *******
In USA Today we find the following:

"U.S. Chief Technology Officer Todd Park said the government expected to draw 50,000 to 60,000 simultaneous users, but instead it has drawn as many as 250,000 at a time since it launched Oct. 1," and "These bugs were functions of volume," Park said. "Take away the volume and it works."

So, it appears that one contributing factor to the "bugs" (I would suggest this is system failure, not just a "bug") is that the performance targets were set way too low. This is like the infamous Victoria's Secret online fashion show failure at halftime of the Super Bowl a few years back. In performance testing of new launches, you have to take into account the curiosity factor. In the case of Obamacare, you tell 300 million people that a certain day is the day to check it out and expect only 60,000 people to show up? Come on, guys, you have to set your sights higher than that.

This is a great lesson in performance testing. You always go for high numbers for big launches (Like the Facebook IPO). Unless, of course, you want to go the public apology route.
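Park's own numbers make the sizing error easy to quantify. Here's the back-of-the-envelope arithmetic; the curiosity-factor multiplier is purely my own assumption for illustration, not an industry constant:

```python
expected_concurrent = 60_000   # the government's stated sizing target
observed_concurrent = 250_000  # the actual peak Todd Park reported

# How far off was the sizing?
shortfall = observed_concurrent / expected_concurrent
print(f"actual load was {shortfall:.1f}x the tested target")

# One hedge for a high-publicity launch: pad the forecast with a
# curiosity-factor multiplier and size (and performance test) to that.
curiosity_factor = 5  # assumed, for illustration
target = expected_concurrent * curiosity_factor
print(f"sized-for target: {target:,} concurrent users")
```

Even a crude multiplier like that would have put the tested target above the observed peak, which is the whole point of planning for the curiosity factor.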

Sunday, September 15, 2013

When Defects Go Big Time

I remember a conversation with a test manager many years ago who worked for a major airline. This gentleman told me about the great challenge of ensuring ticket prices are correct. He said that if the price is off by just a few dollars, millions of dollars can be lost in a day - and this was in the 90s!

I'm sure by now you have heard about the free tickets issued by United Airlines recently. Those of us in testing are thinking, "Man, I'm glad that wasn't on my watch."

Regular readers of this blog and my newsletter know that I don't attack companies over public defects. I prefer to use them as learning experiences and let people make up their own minds.

Sometimes in class case studies, we make a list of risks. One of the risks is often "bad PR" or "loss of credibility/trust". I often tell clients the two places they don't want to wind up are on the front page or evening news. It's hard to measure that kind of impact.

In the recent United case, there are measurable costs, as United has decided to honor the bookings.

El Al Airlines had a similar defect on August 8 of last year. According to reports immediately after the defect, the airline said it would honor the fares. But, a day later they walked that decision back. They did eventually honor the prices. "In all, about 5,000 tickets were sold before the error was fixed. El Al blamed an outside contractor for the mistake."

A few years back when I was working with one of the big travel websites, they had a defect in the currency conversion rates. People were booking rooms in the U.K. that could cost up to $1,000/night for a penny per night. The company only honored those which were part of a package deal because the people could rightfully say they didn't know there was an obvious problem. I think they honored something like 4,000 reservations!  (By the way, the big travel websites have a huge data quality challenge because they depend on the vendors to provide pricing.)
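A basic defense against that kind of vendor-pricing risk is a sanity check against a reference price before anything gets published. Here's a rough sketch; the 90% tolerance is an arbitrary assumption, and a real system would route flagged prices to human review rather than just returning a boolean:

```python
def suspicious_price(listed, reference, tolerance=0.90):
    """Flag a vendor-supplied price that deviates from the reference
    price by more than the tolerance, in either direction."""
    if reference <= 0:
        return True  # no usable reference price; flag for review
    deviation = abs(listed - reference) / reference
    return deviation > tolerance

# A $1,000/night room listed at a penny, vs. one on an ordinary sale.
print(suspicious_price(0.01, 1000.00))    # flagged
print(suspicious_price(850.00, 1000.00))  # a plausible discount
```

The check is trivial; the hard part, as the travel sites found, is maintaining trustworthy reference prices when the vendors themselves supply the data.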

At the end of the day, it often becomes a PR decision. There have also been legal cases on these kinds of problems, where the case turns on whether or not the website is an order entry system. Now, if you or I book the wrong dates, they will charge us a fee to change the ticket.

However, in researching this article, I did discover that, "In January, the Department of Transportation enacted a new regulation to help protect consumers when they’re buying airline tickets. It states: 'The seller of the air transportation cannot increase the price of that air transportation to that consumer, even when the fare is a mistake.' But that regulation has never been tested in court. So as far as consumer rights attorney Brian Bromberg is concerned it’s still really up to the passenger to take action."

Back in 2003, Sheraton had a similar defect where they sold $850/night hotel rooms in Bora Bora for $85.

Sheraton chose to take the PR hit and not honor the bookings. "Over two days, 136 people booked 2,631 rooms at the cheap rate and some made multiple reservations covering more than two months of vacation, Starwood says. If all the reservations were kept, the glitch would cost the resort $2 million."

In the above referenced article you will also see this little factoid, "United Airlines has had several glitches on its website that let some passengers pay $25 for San Francisco-Paris flights and, more recently, $5 for Chicago-Denver flights. In each case, United honored the cheap fares." And, there have been other United pricing defects, such as tickets to Hong Kong for $40 in July of 2012.

"It's deja vu all over again," to quote Yogi Berra.

The other side of the argument is this: if you or I went to the store and an item scanned for $.01, chances are most people would not feel right about paying the incorrect price. Plus, the cashier would probably call the manager and they would take 10 minutes to find out the right price.

We don't know the reason yet for the recent United Airline ticket pricing defect, so I can't say much beyond speculation. I would love to see the root cause analysis. I hope United tells the public the cause much like NASDAQ did on the Facebook IPO performance defect.

The part that troubles me is that system defects of all sorts are becoming a pattern with the airline industry. From scheduling systems to ticketing systems and website problems, the stories are almost expected. My real concern is when the safety line will be crossed. I predict it will happen. With today's "systems of systems", there are extremely high levels of system integration. These systems are very difficult to test, to say the least.

I was on a flight once that was delayed because the database on the plane wouldn't work. The mechanic came on board with a CD to fix the problem! Avionics are one thing. The integration between systems is another.

The one lesson I know for sure is that software defects can get expensive, either in direct losses or intangible losses in image and confidence. People are getting used to minor defects in software, and we know there will always be bugs. But just like in Jurassic Park (I recommend the book over the movie), we need to be very careful. Some of these defects can grow into monsters.

I would love to hear your thoughts on this one!


Friday, September 13, 2013

Software Testing Master Class and Testing Mobile Applications - Salt Lake City, UT

I'm excited to announce two events in October in Salt Lake City:

Software Testing Master Class (Advanced 4-day workshop) October 21 - 24, 2013

This is a unique session that is project-based and covers advanced topics in software test management and software test analysis and design. We will be learning by testing an actual project in four days.

Click here for more details.

Testing Mobile Applications (Full-day tutorial)
Friday, October 25, 2013

Mobile applications are not only the future, they are here, now, and need to be tested. The big question is "How?"  In this tutorial, bring your mobile device(s) and we will explore a framework for testing mobile applications, look at some of the tools and generally get a view of the mobile testing landscape. Click here for more details.

Thursday, September 05, 2013

Book Review - Peopleware, 3rd Edition

Peopleware cover
Click to buy on

I have been a fan of this book from the first edition in 1987 because it brings weight to the human factors in computing. Peopleware, first edition, caused me to think about the relationship between workspace and productivity.

Unfortunately, these “people issues” are prioritized at the bottom of the stack in IT. However, most people in the trenches know that people make or break what we do in IT both long-term and short-term. The most critical and chronic problems are not technology-related, they are people-related!

A minority of managers fully understand the impact of people in IT projects. The rest of the management population tends to treat people like interchangeable components that can be located anywhere in the company and become instantly productive.

This is one of those books that you wish your manager would read and adopt. The problem is that too often, the management solution to people issues is reorganization or layoffs. The rare and valuable companies that do appreciate people and their long-term contribution have learned that people require time, care and feeding to be productive.

The value in the third edition of Peopleware is that DeMarco and Lister have had about 25 years to validate the insightful book they originally wrote in 1987. For sure, a lot has changed in the workplace since the 80’s, especially the IT workplace. Cubicles come and cubicles go, and some dysfunction is very much the same. The third edition clarifies many key issues in short and concise chapters that not only point out the problems, but also offer solutions. The third edition is definitely a value-added update to a classic.

One insight in particular I took away from the book is the long-term cost and effectiveness impact of employee turnover. Some companies seem to totally ignore this impact. DeMarco and Lister start with a learning curve assumption of three months for a role with moderate complexity. This is in addition to the existing experience and knowledge a person might have. The curve can be from six months to two years in some companies, which places the net capital investment at $200,000 per person.

So when a valuable person (like yourself) leaves a company, most managers won’t do what it takes to keep you, or even to fix the issues after you leave. Instead, they continue to pay out this cost without even knowing the actual costs incurred. And this doesn’t even include the cost of delayed projects, mistakes made by the replacement person, etc.
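The arithmetic behind that $200,000 figure is easy to reproduce. The sketch below uses my own simplifying assumptions (a fully loaded monthly cost and an average of half productivity during ramp-up), not the book's exact model:

```python
def turnover_cost(monthly_fully_loaded_cost, ramp_months):
    """Very rough replacement cost: assume the new hire averages
    half productivity over the ramp-up period, so half the fully
    loaded cost during that period is a net capital investment.
    (The 50% average is an assumption for illustration.)"""
    return monthly_fully_loaded_cost * ramp_months * 0.5

# With a $16,700/month fully loaded cost, a two-year ramp lands
# close to the $200,000 figure cited above.
print(round(turnover_cost(16_700, 24)))
```

Run it with your own team's numbers; the result is usually a lot bigger than the raise that would have kept the person.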

In case I haven’t made the point - In my opinion after 35 years in the IT profession, this is the one book I think every IT professional should own, read, and hopefully, apply.

Wednesday, August 21, 2013

Principles Before Practice

I've been thinking about the role that principles play in the context of software testing. As I was sitting with a test analyst awhile back, consulting on a test method, I could see that there were quite a few variables that had to be considered.

It reinforced to me that good test design is a very nuanced thing. It's more than just "Step 1, Step 2, ..."

Many times, people get frustrated because they do things without understanding the rationale. People may learn a new test technique, try to apply it, and fail because they didn't really understand the nuances of the situation they are in and how they impact the technique they are using.

I'll never forget what someone told me a long time ago about trying to teach someone a skill. They used the analogy of washing dishes. This person said, "I can either teach you how to wash every kind of item in every situation, or I can teach you the principles and let you figure out the rest." I thought the second option sounded reasonable.

There are many ways to wash dishes. Even with appliances, there are some principles that really help. I know because I violate them sometimes and have to repeat the entire load. Things like:

  • Rinse off the big stuff first.
  • Save the really messy dishes until the end so you don't get everything else in the sink messy too.
  • Use hot water, but not too hot or else you will scald yourself.
  • Be careful with sharp knives in sudsy water.
  • You get the idea...

In testing, there are some similar principles, like:

  • Take some sample tests early and find where the big problem areas seem to be.
  • Don't test the really complex areas at first. Get your bearings first.
  • Have strong tests, but if you make every test strong, you may not have time to finish.
  • Early testing is good, except when the thing you are testing isn't ready even for early testing.

The reason that principles come before practices is that they build understanding of WHY something is done a particular way. Without the WHY, the WHAT can become meaningless and wasteful. See, there's the principle behind principles.

So the next time you are conveying your testing knowledge, be sure to convey the principles first.

Keep on testing!


Wednesday, June 19, 2013

The Ever-Expanding Nasdaq Performance Crash Costs

Keeping track of the stated costs of the Nasdaq performance problems during the Facebook IPO of May 18, 2012 is like trying to contain Gorilla Glue - it just keeps growing. This may be the largest single software defect cost in history. The total to date is $72 Million.

First reports were that Nasdaq had set aside $40 million to pay investment firms for their losses, then the number increased to $62 million on March 25, 2013.

Most recently, the SEC announced an additional $10 million fine on May 29th, 2013. Mention was also made of $2 million paid to investors, but in this meatloaf pile of money, I'm not sure if that was part of the $62 million or in another pile somewhere.

One of the best accounts of what happened is here:

This account shows that not only was there a technical problem causing severe performance issues, but also alleged securities law violations resulting from Nasdaq's "poor systems and decision-making".

From the CNBC story, "The SEC said several members of Nasdaq's senior leadership team convened a "Code Blue" conference call at the SEC's request and, thinking that they had fixed the problem by removing a few lines of code, chose not to delay the start of secondary market trading in Facebook shares. But they had not grasped the root cause, the SEC said. The decision to resume trading without fully understanding the problem resulted in violations of several rules, according to the SEC, including Nasdaq's own rule governing the price/time priority for executing trade orders." (emphasis mine)

There were also numerous other rules broken by Nasdaq, which are detailed in the CNBC story.

This is a sad story of:

1. the impact of poor performance in the real world,
2. how just one or two lines of code can bring down a system and cause extensive losses,
3. how making a decision without knowing the true (root) cause can make things even worse,
4. how the cost of the impact of a software defect (not a bug or a glitch) can be extraordinarily high and continue to grow due to lawsuits, fines and penalties.

Unfortunately, the real losers in all this are the individual investors who are still underwater in their initial investment in Facebook.

In this story from NBC:

"Facebook's IPO on May 18, 2012 was marred by technical glitches that left the market makers — who facilitate trades for brokers and are crucial to the smooth operation of stock trading — in the dark for hours as to which trades had gone through.

Nasdaq said it tested its systems before the IPO but the testing did not reveal the "design flaw" that caused the glitches."

I hate it when that happens!

Glitches??? That is system failure due to defective code. The story itself said "dark for hours." A glitch is a momentary occurrence; the word just sounds better than the reality. Maybe if we called these failures what they really are - defects - that would start to convey some of the seriousness of the situation.

Here is an interesting quote from George S. Canellos, co-director of the SEC's Division of Enforcement, "This action against Nasdaq tells the tale of how poorly designed systems and hasty decision-making not only disrupted one of the largest IPOs in history, but produced serious and pervasive violations of fundamental rules governing our markets."

Now, the question is, "Will lessons truly be learned from this?"

Friday, June 14, 2013

A Little Distance Helps You See the Details?

It sounds odd, but sometimes to see details, you need some distance.

Yesterday I was at the pharmacy (for those of you in the U.K., the chemist...) to pick up a prescription. I also needed to get a small jar of Mentholatum.

There were rows and rows of items just for colds and flu!

I wasn't seeing Mentholatum anywhere. As a tester, I realized I was in a "needle in the haystack" problem. I was about to give up, then I noticed the line at the counter had only one person in line, so I got in line.

While I was waiting, I looked back at one of the rows of shelves and from about 20 feet away started to scan each shelf. Then, I spotted the small jars on the bottom shelf, almost hidden from view.

Standing one foot away, I couldn't see the lower shelf at all.

It reminded me of a time when in college, I had an engineering professor who would always tell us, "You are too close to the problem. Step back. Think about it from a distance." Most of the time that was exactly why we couldn't solve the problem!

It's the same with testing. Many times, we don't see defects because we are too deep in the details. We need the bigger picture sometimes.

Just something to think about...

Have a great weekend!


Monday, June 03, 2013

ISTQB Foundation Level Training - Miami/Ft. Lauderdale, FL, July 16 - 18, 2013

We don't conduct many public courses, but are going to host an ISTQB Training event in July in the Miami/Ft. Lauderdale area (closer to Ft. Lauderdale). If you have been thinking about getting certified in software testing, here's your chance!

There is a 10% discount for groups of 3 or more. Plus, the price includes the cost of the exam ($250 value).

The instructor will be Dr. Tauhida Parveen, an authorized ISTQB trainer and author of two testing books. (And a great trainer!)

For more information and to register, click here:

We hope to see you there!

Friday, May 31, 2013

Paul is Definitely Alive!

Last night I had the opportunity to check off a bucket list item - getting to see Paul McCartney perform live in concert. It was an amazing show flawlessly performed by 5 guys who can rock. Click here for the review.

At age 70, McCartney is an inspiration to me. Talk about energy!

I was contrasting this concert to the Eric Clapton concert I went to a couple of months ago. Don't get me wrong - I really like Eric Clapton, too. He's one of the greatest guitarists of all time. But...he mainly played, took a bow, did an encore, took another bow and left. Very little audience interaction.

At last night's concert, I felt like I was at a party jamming with old friends. Paul told funny and touching stories. He expressed his empathy at the losses recently in the tornadoes. I thought "Let it Be" was the perfect song for that. It was also a classy thing to do to come back on the 2nd Encore, waving the Oklahoma Flag, with someone else carrying the British flag.

I know that discussing the relative talent and appeal of artists is very subjective. All I can say is that McCartney entertained. Big time. And the audience wanted more, even after 2 encores.

To quote from the review, “You have been a fantastic crowd here tonight in Tulsa, Oklahoma, but there does come a time when we gotta go home … and there comes a time when you gotta go home, too,” the rocker cautioned the crowd after delivering the weirdly wonderful combo of “Yesterday” and “Helter Skelter.”

As I was just letting it all soak in on the 2-hour drive back to Oklahoma City, I was thinking that I have fans in what I do, and you have fans in what you do.  If you are reading this as one of my fans, this will have meaning to you. I don't see things like a lot of other consultants and trainers see them. I have a special "sound" you might say.

So, play to your audience and let the others go find the sound they like. No need to change your sound to please them. Not everyone will like your sound. In fact, some people will like some of what you do, but not all of what you do. That helps make you better; it helps make me better!

By the way, I have a recorded presentation on How to Be a Software Testing Rock Star at

Some takeaways:
  • Know what pleases your audience and give it to them.
  • Give a little extra
  • Leave them wanting more (that came from Will Rogers)
  • Be gracious and kind
  • Have fun
  • Rock it loud!

Until next time...


Wednesday, May 29, 2013

Where Do You Keep Your Lessons Learned?

I'll never forget a conversation I had with a client one day. It was just before we were both going to make a presentation of assessment findings and recommendations to senior management.

I asked her if she felt senior management would be willing to take action on the recommendations. "Oh yes," she said. "They don't want another Project X (fictitious name) to happen." I realized I had missed something in my interviews.

"Project X?" I asked.

"Yes, it went over budget by $1 million dollars and was a big black eye on our company," she said.

"Why was that?"

"Lack of solid test practices, people not communicating well....all the things you have there in your PowerPoint deck," she said.

This got me thinking about something we all know and discuss, namely, "lessons learned." We mention them as if they are in a book like the President's Book of Secrets.

However, I think that is seldom the case.

My belief is that most of the lessons learned are in people's heads, which makes sense. So are requirements, test criteria and a lot of other things. Not a great storage method, but it is reality for many.

I'm not even proposing there should be a book of lessons learned. However, I think they should be reflected in practices, processes, systems, whatever you use to guide work.

One thing I have found inexpensive and effective is a team knowledgebase. Whether it be a wiki or some other method, at least it is accessible.

Of course, I must also mention the evil twin of lessons learned, "lessons not learned." This one is painful and we all are well-acquainted with it. Imagine the Homer Simpson facepalm and you have it.

It is frustrating because then we are left asking "Why don't we learn from the hard knocks?"

Here is my short list of reasons:

1. Lack of reflection - We don't regroup and reflect after a major event, so we never identify the key learnings, even as individuals.

2. The pain is not great enough - It's like "death by a thousand paper cuts" so we ignore the event and move on. I have often said that there is nothing like a good disaster to get management's attention.

3. Lack of leadership - A good leader and coach knows how to remind the team "Don't do that!" without being a jerk. Good leaders also know how to fix the process to prevent the mistakes from happening again.

4. Short memories - Time can heal a lot of wounds. Five years from now, Project X might not seem so bad. However, a new Project X could be devastating. Remember when we blew up the moon (just kidding!)?

By the way, the lessons learned don't have to be your own. In sifting through the rubble of the Moore tornado, I have found several pictures that I turned over to the photo lost-and-found effort. Hopefully, they will be reunited with their owners.

I am now on a mission to take digital pictures of all my prints, make DVD copies of all our videos, and put them in a bank storage box, just in case. I am also taking a video of all our personal property. Oh, and I'm working on getting a storm shelter installed. Two F5 tornadoes close by in 14 years are too many for me!

I would like to hear how you and your team handle "lessons learned." Leave a comment and let me know.



Friday, May 24, 2013

Test Profession Survey Results Preview

First, thanks for your thoughts, prayers and concerns after the tornado this week. I helped a friend go through the rubble of her former home on Monday. It was amazing to see her positive attitude. As we arrived at the debris, she said, "Welcome to my humble abode."

While there, she was interviewed by a Brazilian TV crew who also remarked to me how impressed they were with her attitude. Yes, that is inspiring, and many others here are also holding up well even in trying times.

Also, thanks to those of you who responded to the test professional survey last week. I ended up with 100 responses. I need a while longer to write the article (and perhaps a white paper) on this, but I thought you might want to see some early results.

I am working on the article and should have it by next week. My early impression is that (at least among those who responded) the majority of testers see themselves as professionals and consider that view important. Management may not see testers as professionals to the same degree, but a significant percentage of managers still place importance on viewing the tester as a professional, as opposed to some other role. I think this has some larger, and very good, implications for those of us in the field of software testing.

More to come soon. Of course, I would love to get your thoughts on these findings.

For those in the U.S., have a great Memorial Day. Remember those who have given their lives for our freedom. Remember those who are rebuilding in Oklahoma.

Thanks as always for being the best at what you do!


Monday, May 20, 2013

Update on May 20 Tornado

Hi Folks,

Thanks to everyone who has contacted me concerning the tornadoes here today. Thankfully, we were spared, barely.

I was at a client in OKC to attend a 2:00 meeting, which was canceled. It was looking stormy, so I decided to head home. About the time I arrived home, the tornado sirens started to sound.

As the tornado approached, I was on the back porch watching it come directly toward us. Just like May 3rd, 1999 all over again. So, I hustled Janet and our two pugs into the car and drove north. We don't have a storm shelter (I think we will soon). The tornado took a turn toward the east, so no damage at our place. But, the loss of life and damage is terrible, especially the children at the school.

(This is a short video I shot from our car. You can see the tornado moving from the right of the screen to the left behind the Homeland store.)

Both our sons and families are fine, although had Ryan (our oldest) and his family not moved 6 years ago, they would have been wiped out and our oldest grandson would have been at the school where the 7 children died.

I don't know what it is about Moore. It's like a tornado magnet. It's just really bad here right now. Your prayers are needed.

The people here are tough and resilient. We will rebuild and go on, but the loss of life is the worst part. 51 lives lost as of this writing.



Update: May 21

Here are some more pictures:

This is the path of the tornado:

Here is what I was looking at on radar on my iPad when I decided it was time to bug out:

Here is a pretty dramatic picture after the first tornado. This is a second lowering. This didn't form into a tornado:

Friday, May 17, 2013

Help With a Survey on Software Test Professionals

I am working on an article about software testing as a profession. This is a controversial topic to some people and I would like to get your thoughts.

I have created a very short (10-question) survey that will help me write this article. It will only allow 50 respondents, but I would like to invite anyone to participate at:
I will share the results and also let you know when the article is out.

Thanks for your help!


Wednesday, May 01, 2013

Thoughts on the ISTQB Open Letter

Since it is hard to elaborate a response in 140 characters, I am blogging this. First, full disclosure, I am on the board of directors of the ASTQB and I am a training provider of ISTQB certification courses, as well as over 60 other courses I have written. (I make a small percentage of my income from ISTQB training.) I was also a founding author of the CSTE test certification from QAI. This is from my own perspective and does not represent the views of the ISTQB or ASTQB.

Second, I really believe as testers we have the right and responsibility to ask questions. So my issue is not with the asking of questions. We can still be friends.

I tweeted yesterday that "The open letter to the ISTQB is meaningless." Here's why.

1. The letter fails to distinguish between the ISTQB and the country examination boards (such as the ASTQB, Canadian Testing Board, UKTB, etc.) that write and administer the exams. The ISTQB itself does not write exam questions. To get answers to the questions about the validity of questions and coefficients, you would have to write such a request to each individual country exam board. The ISTQB may provide a high-level response, but it can't answer the detailed questions posed because it simply doesn't have the information.

2. There are non-disclosure agreements in place to protect the intellectual property of each country board and the ISTQB in general. One of the challenges with any exam (ITIL, PMP, etc.) is to keep the contents of the exam confidential to prevent questions from being passed around. There is a sample exam available on the ASTQB website that gives a flavor of the questions being asked. Kryterion would not release results because they also have confidentiality restrictions.

3. "Have there ever been any problems with the validity of the exams?" This is like asking, "Are you still beating your wife?" Everything has shortcomings. The reason the ASTQB invested in independent exam reviews and measurement was to make the exam as valid as possible.

I agree that the questions on the exam must be a valid reflection of the learning objectives in the syllabus. As a training provider, I don't know what is on the exam. There is a firm line of separation. I focus on the methods and their application as opposed to the brain-cram approach.

That's it for now. Gotta get back to the test lab.



Wednesday, April 10, 2013

ISTQB Foundation Level Training Coming to Atlanta - June 5 - 7, 2013

We don't conduct many public courses, but are going to host an event in June in Atlanta. If you have been thinking about getting certified in software testing, here's your chance!

The instructor will be Dr. Tauhida Parveen, an authorized ISTQB trainer and author of two testing books. (And a great trainer!)

For more information and to register, click here:

We hope to see you there!

Thursday, January 31, 2013

Notes from Today's Webinar - Test Data Strategies for ICD-10 Testing

Here are the notes from today's webinar on test data strategies for ICD-10 testing.

Download slides in PDF format.

In this session, Dexter Oliver, Randy Rice and Dr. Tauhida Parveen outlined the importance of test data in ICD-10 testing, and some ways that the test data challenge can be addressed. Getting the right test data is one of the most challenging aspects of any type of testing, but ICD-10 poses unique challenges, such as:

  • Which diagnosis codes will need to be included?
  • Which business conditions will need to be supported?
  • How much data will be needed?
  • Where will the data be obtained?
  • How will resources be allocated to create test data and perform testing? 
The video will be posted shortly.
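One way to get a handle on the test-data questions above is to enumerate the space of combinations before deciding what to cover. As a minimal sketch (the codes and conditions below are just illustrative samples, not a recommended selection), crossing a short list of diagnosis codes with the business conditions that must be supported shows how quickly the test-data space grows:

```python
from itertools import product

# Illustrative values only; a real project would pull these from its own
# claims data and requirements.
diagnosis_codes = ["E11.9", "I10", "J45.909"]       # sample ICD-10-CM codes
business_conditions = ["new claim", "resubmission", "appeal"]

# Cross every code with every condition to see the full test-data space,
# then decide how much of it the schedule and budget allow you to cover.
test_cases = [
    {"code": code, "condition": cond}
    for code, cond in product(diagnosis_codes, business_conditions)
]

print(len(test_cases))  # 3 codes x 3 conditions = 9 combinations
```

With realistic numbers of codes and conditions, the full cross-product quickly becomes too large to test exhaustively, which is exactly why test-data strategy (sampling, risk-based selection, or pairwise combinations) matters so much for ICD-10.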



Monday, January 14, 2013

Addressing Test Data Concerns in Your ICD-10 Test Planning

Thursday, January 31st, 2:00 P.M. EST

In this session, Dexter Oliver, Randy Rice and Dr. Tauhida Parveen will outline the importance of test data in ICD-10 testing, and some ways that the test data challenge can be addressed. Getting the right test data is one of the most challenging aspects of any type of testing, but ICD-10 poses unique challenges, such as:

  • Which diagnosis codes will need to be included?
  • Which business conditions will need to be supported?
  • How much data will be needed?
  • Where will the data be obtained?
  • How will resources be allocated to create test data and perform testing?
We invite you to join Dexter, Randy and Tauhida in this free and informative session to get the right test data in the most efficient ways.

Register Here: