Thursday, February 26, 2015

Big Sale on e-Learning Courses in Software Testing!

For a limited time, I am offering discounted pricing on ALL my e-learning courses, including ISTQB Foundation Level and Advanced Level certification courses!

Just go to http://www.mysoftwaretesting.com to see the savings on each course.

Remember, as always, you get:
  • LIFETIME access to the training
  • In-depth course notes
  • Exams included on the certification courses
  • Direct access to me, Randy Rice, to ask any questions during the course
Hurry! This sale ends at Midnight CST on Friday, March 20th, 2015.
Questions? Contact me by e-mail or call 405-691-8075.

Friday, February 06, 2015

How Important is Credibility to Software Testers?

I have incorporated the importance of credibility into all my software testing training courses. Why? Because if people don't trust you as a person, they won't trust the information you provide. As testers, the end result of everything we do is information. That information must be timely, correct, and objective.

When teaching the importance of credibility, I always say "Just look at the current events to see how people have lost or gained credibility. You will find examples everywhere."

The issue of credibility came to mind in recent days as I have watched the response unfold to the revelation that Brian Williams admitted to incorrectly reporting an incident that occurred while he was embedded with troops in Iraq. Some might say Williams' account was mere embellishment of the event. Others, however, take a much stronger view and call it an outright lie. Williams has acknowledged the inaccuracy and issued an apology, which is always a good first step toward rebuilding credibility.

In the case of Williams, the stakes are high since he is a well-respected news presenter on a major television network. This is also a good example of how credibility can rub off on an entire team. Williams' story may well cause many viewers to stop watching the entire network for news reporting. People may ask, "How many other reporters embellish or hide the truth?"

Now other reports from Williams, such as those from Hurricane Katrina, have been called into question based on reliable information from people who were there. Williams reported seeing a dead body floating face-down from his hotel in the French Quarter. The New Orleans Advocate states, "But the French Quarter, the original high ground of New Orleans, was not impacted by the floodwaters that overwhelmed the vast majority of the city."

I have a feeling this story will take on a life of its own. Now there are accounts from the pilot of the helicopter saying they did take fire, just not RPG fire. That may eventually clarify things, but for now it adds confusion to the issue, which doesn't help rebuild credibility.

As a software tester, you may be tempted to overstate how many problems you have seen in a system or application, just to get people to pay attention to the fact that there are problems. Don't succumb to that temptation. Instead, show a really good example of a serious problem. If there are other similar issues, say so. As a tester, you can only report what you have seen; you shouldn't speculate about what other problems may exist, even though your hunch might turn out to be right.

In telling your testing story, the accurate, complete truth is all you need.

If you would like to read my three-part article series on how to build, destroy and rebuild credibility, here are the links:

Building Your Credibility as a Tester

How to Destroy Your Credibility as a Tester

How to Rebuild Your Credibility as a Tester

I hope all this gives us something to remember as we report what we see in testing!

I would love to hear your comments, questions and experiences.

And...I would like to wish my son, Ryan Rice, a happy 34th birthday today!

Have a great weekend,

Randy

Friday, January 02, 2015

What Mythbusters Can Teach Software Testers

I hope you have all had a great Christmas and New Year. Normally at this time of year I write a posting about goal setting, reflecting on the past year for lessons learned or something similar, which are still important things to do.

I thought for this year, let's go for "And now for something completely different."

Between all the bowl games and Christmas movies on TV, the Science Channel has been showing a Mythbusters marathon for the last couple of weeks. That, my friends, is a lot of episodes - more than I could ever take in. But I have enjoyed watching a few of them.

I often refer to the Mythbusters show when explaining session-based testing in some of my software testing classes and workshops. Mythbusters is a perfect example of how to set up a test, perform it, then evaluate the results.
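For readers who haven't seen session-based testing, here is a minimal sketch of what a single session might capture, mapped to that same set-up/perform/evaluate flow. The field names and sample content are my own illustration, not any particular tool's format.

```python
# A minimal sketch of a session-based testing record, mapped to the
# Mythbusters flow: set up a test, perform it, then evaluate the results.
# Field names and sample content are illustrative, not a tool's format.

session = {
    # Set-up: the charter states the mission before any testing starts.
    "charter": "Explore the checkout flow for rounding errors in totals",
    "timebox_minutes": 60,

    # Performance: observations and issues captured while testing.
    "notes": [
        "totals correct for single-item carts",
        "3-item cart off by $0.01 when a coupon is applied",
    ],
    "bugs": ["rounding error on multi-item carts with coupons"],

    # Evaluation: the debrief tells the story of the session.
    "debrief": "Charter met; one rounding defect found; follow-up needed",
}

for field, value in session.items():
    print(f"{field}: {value}")
```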

However, in watching several episodes back-to-back, some clear trends emerged that I had not previously related to what we do as software testers.

Planning

You don't see the written plans, but believe me, they are there. Of course, you have the episode script, but there is much more going on behind the scenes. The crew has to get all the resources in place for the show, and some of those resources get expensive. In one episode I counted eight cars destroyed. Then you have explosives and the permits to use them, dangerous chemicals and the permission to handle them, special cameras and other devices. The list could go on, but the point is that those items, just like the items you need for testing, do not just magically appear. It takes planning.

Teamwork

It is impressive how the Mythbusters collaborate and get along. They may disagree on some things, but each one is committed to letting the test tell the story. They ask for input and feedback, and it's refreshing not to see the drama that plays out in some of the "reality" shows.

Expert Consultation

When it comes to things beyond their scope of knowledge, the Mythbusters bring in a subject matter expert to evaluate the test results.

Measurement

The Mythbusters are fanatics when it comes to measuring what they do. Not only do they measure things, but they are careful to take good baseline and control measurements BEFORE they perform the test. If the Mythbusters can measure how much pressure it takes to get candy away from a baby, we can measure what we do in testing.

Estimation

The Mythbusters allocate certain amounts of time to each of their test tasks, which allows them to estimate the time needed for the entire test. The time for each task varies with the complexity of the test. Setting the scope of the test is very important. Each test is focused on ONE thing, not several. That allows them to stay right on point and on time.

It appeared to me that the total test time broke down into:
  • Planning - They discuss the urban legend to be busted or confirmed and how they plan to conduct the tests.
  • Set-up - This often takes up the largest percentage of time, depending on the specific test. In some cases, they go to very elaborate means to recreate, as closely as possible, the conditions of the original (supposed) event. In one show, the crew built a jail cell out of cinder blocks, poured concrete into the inner space and reinforced it with rebar. That alone would have taken multiple days.
  • Performance - The tests themselves can go quickly, but the trick is that many of the tests are repeated, either to confirm the findings or to get a large enough sample size. In fact, one impressive observation is that they will try a test several different ways. They might start with just a model, then use the real thing, such as a car or motorcycle.
  • Evaluation - This often goes quickly because the test has been so clearly defined, measured and documented.
  • Tear-down - This is the part we don't see, but in the case of the mock jail cell, I could see it taking a while.
I think a starting point for testers is to measure the percentage of total effort each of these testing tasks requires, at varying levels of test complexity. A minimal sketch of that kind of breakdown follows.
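Here is a minimal sketch, in Python, of what that measurement could look like, assuming you log hours against each of the phases above. The phase names mirror the list, and the sample numbers are made up for illustration.

```python
# A minimal sketch: convert logged hours per test phase into percentages
# of total effort. The sample numbers below are hypothetical.

def effort_percentages(hours_by_phase):
    """Return each phase's share of total effort as a percentage."""
    total = sum(hours_by_phase.values())
    return {phase: round(100 * hours / total, 1)
            for phase, hours in hours_by_phase.items()}

# Hours logged for one medium-complexity test cycle (made-up data).
logged = {
    "planning": 4,
    "set-up": 10,
    "performance": 6,
    "evaluation": 2,
    "tear-down": 2,
}

for phase, pct in effort_percentages(logged).items():
    print(f"{phase}: {pct}% of total effort")
```

Collect a few cycles of these percentages at different complexity levels and you have the start of an estimation model.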

Expected Results

The Mythbusters define in advance what they expect to see, either confirmed or busted. In fact, they know what constitutes "confirmed" or "busted" and what the measures must be to support the findings. It's at this point the viewer gets the clear impression that these guys practice science. Personally, I also practice testing as a science, not a guessing game or a diversion. However, the popularity of the show comes from the fact that they have made science (and testing urban legends) fun and entertaining. I wonder if we could do that in testing?
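In testing terms, that means writing down the expected result and the measure that supports it before the test runs. Here is a minimal sketch, using a hypothetical response-time "myth" and a made-up 200 ms threshold:

```python
# A minimal sketch of defining "confirmed"/"busted" criteria BEFORE the test.
# The myth, measurements and 200 ms threshold are all hypothetical.

def verdict(measured_response_ms, threshold_ms=200):
    """Myth: 'The search page responds in under 200 ms.'"""
    return "confirmed" if measured_response_ms < threshold_ms else "busted"

# Sample measurements (made-up data) checked against the stated criteria.
for sample_ms in (150, 450):
    print(f"{sample_ms} ms -> {verdict(sample_ms)}")
```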

No Boredom

This is the challenge for testers. We get to blow stuff up also, but not in a physical sense. Let's face it, a system crash is not as exciting as a car crash (Unless the system you are testing operates a car, but we won't go there...). So, how do we make software testing exciting and perhaps even entertaining?

By the way, if you haven't seen my video of testing a mobile phone radar app during a tornado, that was exciting! But, like the Mythbusters say "Don't try this at home."

The best answer I can give for this is to start thinking about testing software like the Mythbusters think about their tests. Create a challenge and build some excitement about confirming or busting claims about how the software should work. You may not get to see a literal explosion, but you may make some people's heads almost explode in frustration when they see that really bad defect (I mean a showstopper, dead-in-the-water kind of defect) the day before the release is supposed to go out.

Some companies hold "bug bashes," but in my experience, most of the problems reported are either superficial or not actually problems. The issues are often misunderstandings of how the application should work. In one case, we held a bug bash at a company with customer service reps performing the tests. Over 200 issues were logged over a four-hour period, and only three of them (under 2 percent) were actually confirmed defects!

Contests are fun, but be careful. Don't make the contest about who finds the most bugs. You will get a lot of bug reports, and some of the best testing finds few, but important, defects.

Perhaps a contest for the "Bug of the Week" or "Most Humorous Bug" (Hint: error messages are great places to look for humor) might be interesting. I would love to hear your ideas on this. I would also love to hear about anything you or your company does to make testing exciting.

How's that for your New Year's challenge?

Now, go bust some of your own software myths!

Friday, November 14, 2014

Thoughts on the Consulting Profession

Sometimes I come across something that makes me realize I am the "anti" version of what I am seeing or hearing.

Recently, I saw a Facebook ad for a person's consulting course that promised high income quickly with no effort on the part of the "consultant" to actually do the work. "Everything is outsourced," he goes on to say. In his videos he shows all of his expensive collections, which include both a Ferrari and a Porsche. I'm thinking "Really?"

I'm not faulting his success or his income, but I do have a problem with promoting the idea that one can truly call oneself a consultant or an expert in something without actually doing the work involved. His high income is based on marking up other people's subcontracting rates, because they are the ones with the actual talent. Apparently, they just don't realize what their work is worth in the marketplace.

It does sound enticing and all, but I have learned over the years that my clients want to work with me, not someone I just contract with. I would like to have the "Four Hour Workweek", but that's just not the world I live in.

Nothing wrong with subcontracting, either. I sometimes team with other highly qualified and experienced consultants to help me on engagements where the scope is large. But I'm still heavily involved on the project.

I think of people like Jerry Weinberg or Alan Weiss, who are master consultants and get their hands dirty helping solve their clients' problems. I mentioned in our webinar yesterday that I was fortunate to have read Weinberg's "Secrets of Consulting" way back in 1990, when I was first starting out on my own in software testing consulting. That book is rich in practical wisdom, as are Weiss' books. (Weiss also promotes the high income potential of consulting, but it is based on the value he personally brings to his clients.)

Without tooting my own horn too loudly, I just want to state for the record that I am a practicing software quality and testing professional in my consulting and training practice. That establishes credibility with my clients and students. I do not take on consulting work only to farm it out to sub-contractors. I don't consider that true consulting.

True consulting is strategic and high-value. My goal is to do the work, then equip my clients to carry on - not to be around forever, as is the practice of some consulting firms. However, I'm always available to support my clients personally when they need ongoing help.

Yes, I still write test plans, work with test tools, lead teams and other detailed work so I can stay sharp technically. However, that is only one dimension of the consulting game - being able to consult and advise others because you have done it before yourself (and it wasn't all done 20 years ago).

Scott Adams, the creator of the Dilbert comic strip, had a field day poking fun at consultants. His humor had a lot of truth in it, as did the movie "Office Space."

My point?

When choosing a consultant, look for 1) experience and knowledge in your specific area of problems (or opportunities), 2) the work ethic to actually spend time on your specific concerns, and 3) integrity and trust. All three need to be in place or you will be under-served.

Rant over and thanks for reading! I would love to hear your comments.

Randy


ISTQB Advanced Technical Test Analyst e-Learning Course

I am excited to announce the availability of my newest e-Learning course - The ISTQB Advanced Technical Test Analyst certification course.
 Register Now

To take a free demo, just click on the demo button.

This e-Learning course follows on from the ISTQB Foundation Level Course and leads to the ISTQB Advanced Technical Test Analyst Certification. The course focuses specifically on technical test analyst issues such as producing test documentation in relation to technical testing, choosing and applying appropriate specification-based, structure-based, defect-based and experience-based test design techniques, and specifying test cases to evaluate software characteristics. Candidates will be given exercises, practice exams and learning aids for the ISTQB Advanced Technical Test Analyst qualification.

Webinar Recording and Slides from Establishing the Software Quality Organization

Hi all,

Thanks to everyone who attended yesterday's webinar on "Establishing the Software Quality Organization" with Tom Staab and me.

Here is the recording of the session: http://youtu.be/pczgcHGvV5Q

Here are the slides in PDF format: http://www.softwaretestingtrainingonline.com/cc/public_pdf/Establishing%20The%20SQO%20webinar%2011132014-1.pdf

I hope you can attend some of our future sessions!

Thanks,

Randy

Wednesday, November 12, 2014

Book Review - Introduction to Agile Methods, by Ashmore and Runyan

Click to Order on Amazon
First, as a disclaimer, I am a software testing consultant and also a project life cycle consultant. Although I do train teams in agile testing methods, I do not consider myself an agile evangelist. There are many things about agile methods I like, but there are also things that trouble me about some of them. I work with organizations that use a wide variety of project life cycle approaches, and I believe that one approach is not appropriate in all situations. In this review, I want to avoid getting into the weeds of sequential life cycles versus agile and the pros and cons of agile, and instead focus on the merits of the book itself.

Since the birth of agile, I have observed that agile methods keep evolving, and it is hard to find a good comparison of the various agile methodologies in one place. I have been looking for a good book that I could use in a learning context, and I think this book is the one.

One of the issues people experience in learning agile is that the early works on the topic reflect the early experiences with agile. We are now almost fifteen years into the agile movement, and many lessons have been learned. While this book references many of the early works, the authors present the ideas in a more recent context.

This is a very easy-to-read book and very thorough in its coverage of agile methods. I also appreciate that the authors are up-front about some of the pitfalls of certain methods in some situations.

The book is organized as a learning tool, with learning objectives, case studies, review questions and exercises, so one could easily apply this book “as-is” as the basis for training agile teams. An individual could also use this book as a self-study course on agile methods. There is also a complete glossary and index, which help in getting the terms down and in using the book as a reference.

The chapter on testing provided an excellent treatment of automating agile tests, and manual testing was also discussed. In my opinion, there could have been more detail about how to test functionality from an external perspective, such as the role of functional test case design in an agile context. Too many people only run the acceptance-level assertions without testing the negative conditions. The authors do, however, emphasize that software attributes such as performance, usability, and others should be tested. The “hows” of those forms of testing are not really discussed in detail; you would need a book that dives deeper into those topics.
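To illustrate what I mean by negative conditions, here is a minimal sketch in Python. The apply_discount function and its rules are hypothetical, invented just for this example.

```python
# A minimal sketch: one acceptance-level (positive) check plus the negative
# conditions too many teams skip. The function and its rules are hypothetical.
import unittest

def apply_discount(price, percent):
    """Return price reduced by percent; reject out-of-range inputs."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    if price < 0:
        raise ValueError("price cannot be negative")
    return price * (1 - percent / 100)

class DiscountTests(unittest.TestCase):
    def test_happy_path(self):                 # the acceptance-level assertion
        self.assertAlmostEqual(apply_discount(100, 25), 75.0)

    def test_negative_percent_rejected(self):  # negative condition
        with self.assertRaises(ValueError):
            apply_discount(100, -5)

    def test_percent_over_100_rejected(self):  # negative condition
        with self.assertRaises(ValueError):
            apply_discount(100, 150)

    def test_negative_price_rejected(self):    # negative condition
        with self.assertRaises(ValueError):
            apply_discount(-1, 10)

if __name__ == "__main__":
    unittest.main()
```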

I also question how validation is defined and described, since it relies more on surveys than on actual real-world tests. Surveys are fine, but they don't validate specific use of the product and its features. If you want true validation, you need real users performing tests based on how they intend to use the product.

There is significant discussion on the importance of having a quality-minded culture, which is foundational for any of the methods to deliver high-quality software. This includes the ability to be open about discussing defects.

To me, the interviews were interesting, but some of the comments were a little concerning. However, I realize they are the opinions of agile practitioners, and I see quite a bit of disagreement between experts at conferences, so I really didn't put a lot of weight on the interviews.

One nitpick I have is that on the topic of retrospectives, the foundational book, "Project Retrospectives" by Norm Kerth, was not referenced. That was the book that changed the term "post-mortem" to "retrospective" and laid the basis for the retrospective practice as we know it today.

My final thought is that the magic is not in the method. Whether using agile methods or sequential life cycles like the waterfall, it takes more than simply following a method correctly. If there are issues such as team strife, stakeholder disengagement, and other organizational maladies, then agile or any other methodology will not fix them. A healthy software development culture is needed, along with a healthy enterprise culture and an understanding of what it takes to build and deliver great software.

I can highly recommend this book for people who want to know the distinctions between agile methods, those who want to improve their practices, and managers who may feel a disconnect in their understanding of how agile is being performed in their organization. This book is also a great standalone learning resource!

Monday, October 20, 2014

The Beginner's Mind Applied to Software Testing

Something has been bothering me for some time now as I conduct software testing training classes on a wide variety of topics - ISTQB certification, test management, test automation, and many others.

But it's not only in that venue that I see the issue. I also see it at conferences where people attend, sit through presentations - some good and some not so good - and leave with comments like "I didn't learn anything new." Really?

I remember way back in the day when I chaired QAI's International Testing Conference (1995 - 2000) reading comments like those and asking "How could this be?" I knew we had people presenting techniques that were innovations at the time. One specific example was how to create test cases from use cases. At the time, it was a new and hot idea.

So this nagging feeling has been rolling around in my mind for weeks. Then, this past week, I needed to learn more about something not related to testing at all. One article I found told of the author's similar quest. All of his previous attempts to solve the problem ended up looking similar, so he started over - again - except this time with a mindset in which he knew nothing about the subject. He went on to describe the idea of the "Beginner's Mind." Then it all clicked for me.

When I studied and practiced martial arts for about ten years, my fellow students and I learned that if we trusted in our belt color, or in how many years we had been training, we would get our butts kicked. In the martial arts, beginners and experts train in the same class and nobody complains. The experts mentor the beginners while also perfecting the minor flaws in their own techniques. The real danger was when we thought we already knew what the teacher was teaching.

Now...back to testing training...

I respect that someone may have 30 years of experience in software testing. However, those 30 years may be limited to working for one company or a few companies. The person may also have worked in only a single industry. Even if the experience is as wide as possible, you still don't have 100% knowledge of anything.

The best innovators I know in software testing are those who can take very basic ideas and combine them, or find a new twist on them, and innovate a new (and better) way of doing something. But you can't do that if you think you already know it all.

In the ISTQB courses, I always tell my students, "You are going to need to 'un-learn' some things and not rely solely on your experience, as great as that might be." That's because the ISTQB has some specific ways it defines things. If you miss those nuances because you are thinking "Been there, done that," you may very well miss questions on the exam. I've seen it happen too many times. I've seen people with 30+ years of experience fail the exam even after taking a class!

So the next time you are at a conference, reading a book, attending a class, etc., and you start to get bored, adopt the beginner's mind. Look at the material, listen to the speaker and ask beginner questions, like "Why is this technique better than another one?", "Why can't you do ..... instead?", or "What would happen if I combined technique X with technique Y?"

Adopt the beginner's mind and you might just find a whole new world of innovation and improvement waiting for you!


Monday, October 06, 2014

Dallas Ebola Patient Sent Home Because of Defect in Software Used by Many Hospitals

Here at Rice Consulting, we have been delivering the message to healthcare providers for over two years now that their greatest risk lies in the integration and testing of workflows - making sure electronic health records are correctly made available to everyone involved in the healthcare delivery process, all the way from patients to nurses, doctors and insurance companies. It is a complex domain, and too many healthcare organizations see EHR as just a technical issue that the software vendor will address and test. However, as this story demonstrates, that is not the case.

From the Homeland Security News Wire today, October 6:

"Before Thomas Eric Duncan was placed in isolation for Ebola at Dallas’ Texas Health Presbyterian Hospital on 28 September, he sought care for fever and abdominal pain three days earlier, but was sent home. During his initial visit to the hospital, Duncan told a nurse that he had recently traveled to West Africa — a sign that should have led hospital staff to test Duncan for Ebola. Instead, Duncan’s travel record was not shared with doctors who examined him later that day.

'Protocols were followed by both the physician and the nurses. However, we have identified a flaw in the way the physician and nursing portions of our electronic health records (EHR) interacted in this specific case,' the hospital wrote in a statement explaining how it managed to release Duncan following his initial visit.

According to NextGov, EHR software used by many hospitals contains separate workflows for doctors and nurses. Patients’ travel history is visible to nurses, but such information 'would not automatically appear in the physician’s standard workflow.' As a result, a doctor treating Duncan would have no reason to suspect Duncan’s illness was related to Ebola.

Roughly 50 percent of U.S. physicians now use EHRs since the Department of Health and Human Services (HHS) began offering incentives for the adoption of digital records. In 2012, former HHS chief Kathleen Sebelius said EHRs 'will lead to more coordination of patient care, reduced medical errors, elimination of duplicate screenings and tests and greater patient engagement in their own care.' Many healthcare security professionals, however, have pointed out that some EHR systems contain loopholes and security gaps that prevent data sharing among healthcare workers.

The New York Times recently reported that several major EHR systems are built to make data sharing between competing EHR systems difficult. Additionally, a 2013 RAND Corporation study for the American Medical Association found that doctors felt 'current EHR technology interferes with face-to-face discussions with patients; requires physicians to spend too much time performing clerical work; and degrades the accuracy of medical records by encouraging template-generated doctors' notes.'

Today, Dallas’s Texas Health Presbyterian Hospital has made patients’ travel history available to both doctors and nurses. It has also modified its EHR system to highlight Ebola-endemic regions in Africa. 'We have made this change to increase the visibility and documentation of the travel question in order to alert all providers. We feel that this change will improve the early identification of patients who may be at risk for communicable diseases, including Ebola,' the hospital noted."
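The testing lesson here is worth making concrete. Below is a minimal sketch of the kind of role-based visibility gap described in the story, along with the simple workflow-level check that would expose it. The record fields, roles and filters are my own hypothetical illustration, not the actual EHR system involved.

```python
# A minimal sketch of a role-based workflow visibility defect: data captured
# in one role's workflow never reaches another role's view. All fields,
# roles and filters here are hypothetical.

PATIENT_RECORD = {
    "name": "J. Doe",
    "symptoms": ["fever", "abdominal pain"],
    "travel_history": "West Africa",  # captured in the nurse's intake workflow
}

# Each role's workflow displays only a subset of the record.
VISIBLE_FIELDS = {
    "nurse":     {"name", "symptoms", "travel_history"},
    "physician": {"name", "symptoms"},  # travel_history missing: the defect
}

def view_for(role, record):
    """Return the record as that role's workflow would display it."""
    return {k: v for k, v in record.items() if k in VISIBLE_FIELDS[role]}

# A workflow-level check: critical intake data must reach EVERY role.
for role in VISIBLE_FIELDS:
    visible = "travel_history" in view_for(role, PATIENT_RECORD)
    print(f"{role}: {'ok' if visible else 'DEFECT - travel history hidden'}")
```

No test of either workflow in isolation would catch this; only a test that follows the data across roles would - which is exactly the integration and workflow testing healthcare providers need.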