Thursday, February 20, 2014
Hi everyone,
Thanks for attending the webinar today and for all the great conversation. Here are the slides:
http://www.softwaretestingtrainingonline.com/cc/library/The%20Elusive%20Tester%20to%20Developer%20Ratio2014.pdf
The video will be posted very shortly.
Thanks!
Randy
Wednesday, February 12, 2014
Metrics for User Acceptance Testing
Recently I responded to a question at Quora.com about UAT metrics.
"What user acceptance testing metrics are most crucial to a business?"
Here is an expanded version of my answer, with some caveats.
The leading caveat is that you have to be very careful with metrics because they can drive the wrong behavior and decisions. It's like the unemployment rate. The government actually publishes several rates, each with different meanings and assumptions. The one we see on TV is usually the lowest one, which doesn't factor in the people who have given up looking for work. So the impression might be that the unemployment situation is getting better, while the reality is that a lot of people have left the workforce or may be under-employed.
Anyway, back to testing...
If we see metrics as items on a dashboard to help us drive the car (of testing and of projects), that's fine as long as we understand that WE have to drive the car and things happen that are not shown on the dashboard.
Since UAT is often an end-of-project activity, all eyes are on the numbers to know whether the project can be deployed on time. So there may be an effort by some stakeholders to make the numbers look as good as possible, as opposed to reflecting reality.
With that said...
One metric I find very telling is how many defects are being found per day or week. You might think of this as the defect discovery velocity. These counts must be analyzed in terms of severity: 10 new minor defects may be more acceptable than 1 critical defect. As the deadline nears, the number of new critical defects gains even more importance.
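To make this concrete, here is a minimal sketch in Python of how a discovery-velocity tally might look. The record layout and severity labels are hypothetical, not from any particular defect tracker:

```python
from collections import Counter
from datetime import date

# Hypothetical defect records: (date found, severity).
# Field layout and severity labels are assumptions for illustration.
defects = [
    (date(2014, 2, 3), "critical"),
    (date(2014, 2, 3), "minor"),
    (date(2014, 2, 4), "minor"),
    (date(2014, 2, 10), "critical"),
]

# Tally new defects per ISO week, split by severity, so a late
# spike in critical defects is visible at a glance.
velocity = Counter(
    (found.isocalendar()[1], severity) for found, severity in defects
)

for (week, severity), count in sorted(velocity.items()):
    print(f"week {week}: {count} new {severity} defect(s)")
```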
Another important metric is the number of resolved/unresolved defects. These must also be balanced by severity and should be reflected in the acceptance criteria. Be aware, though, that it is common (and not good) practice to reclassify critical defects as "moderate" to release the system on time. Also, keep in mind that you can "die the death of a thousand paper cuts." In other words, it's possible to have no critical issues, but many small issues that render the application useless.
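A resolved/unresolved breakdown can be tallied the same way. Again, this is just a sketch with an assumed record layout:

```python
from collections import defaultdict

# Hypothetical records: (severity, resolved flag); layout is assumed.
defects = [
    ("critical", True),
    ("critical", False),
    ("minor", True),
    ("minor", True),
    ("minor", False),
]

# Count resolved vs. unresolved defects per severity level, since a
# raw resolved/unresolved ratio hides which severities remain open.
tally = defaultdict(lambda: {"resolved": 0, "unresolved": 0})
for severity, resolved in defects:
    tally[severity]["resolved" if resolved else "unresolved"] += 1

for severity, counts in sorted(tally.items()):
    print(severity, counts)
```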
Acceptance criteria coverage is another key metric, identifying which criteria have and have not been tested. Of course, proceed with great care on this metric as well. Just because a criterion has been tested doesn't mean it was tested well, or even passed the test. In my Structured User Acceptance Testing course, we place a lot of focus on testing the business processes, not just a list of acceptance criteria. That gives a much better idea of validation and whether or not the system will meet user needs in the real world.
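The arithmetic behind the coverage number is simple; here is a small illustrative sketch, where the criterion names and status labels are invented for the example:

```python
# Hypothetical per-criterion statuses; names and labels are made up.
criteria = {
    "AC-1: user can log in": "passed",
    "AC-2: invoice totals are correct": "failed",
    "AC-3: report exports to PDF": "untested",
}

tested = [c for c, s in criteria.items() if s != "untested"]
passed = [c for c, s in criteria.items() if s == "passed"]

# Note: coverage says nothing about how deeply each criterion was
# exercised; that is the caveat above.
print(f"coverage: {len(tested)} of {len(criteria)} criteria tested")
print(f"pass rate: {len(passed)} of {len(tested)} tested criteria passed")
```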
Finally, stakeholder acceptance is the ultimate metric: how many of the original acceptance criteria have been formally accepted versus not accepted? It may be that just one key issue holds up the entire project.
As far as business value is concerned, a business must see the value in UAT and the system to be released. Here is an article I wrote that addresses the value of software quality: The Cost of Software Quality - A Powerful Tool to Show the Value of Software Quality.
I hope this helps and I would love to hear about any metrics for UAT you have found helpful.
Thanks,
Randy
Monday, February 10, 2014
Tester to Developer Ratio Survey
I have a new survey posted for the Tester to Developer Ratio topic. If you have 5 minutes to answer it, I would appreciate it!
http://freeonlinesurveys.com/s.asp?sid=zikft6dtg1mueho417711
Sunday, February 09, 2014
Free Webinar - February 20 2014 - The Elusive Tester to Developer Ratio
You are invited to attend a FREE 30-minute (or so) webinar on Thursday, February 20th, at 12:00 CST.
Since 2000, I have been researching the question, "What is the recommended ratio of software testers to developers?" I have written two articles on that topic, with the original article, "The Elusive Tester to Developer Ratio" getting over 30,000 hits on my web site and being cited in many other articles and books.
This is an important metric, but it also raises other important questions, such as:
- What if your needs are different from "average"?
- Is this metric really the best way to plan the staffing of a test organization?
- What are other, perhaps better, ways to balance your workload?
- How can small test teams be successful, even in large development organizations?
I will also present up-to-date research.
To sign up, just go to http://www.anymeeting.com/PIID=EA52DA86864F3C (You will get automatic reminders beforehand.)
There are limited slots available, so be sure to sign up and show up early to reserve your place. (Last time we had a completely full session.) We will record the session and post it a little later on my YouTube channel.
Feel free to pass this invitation along to a friend!
I hope to see you there!
Thanks,
Randy Rice
Tuesday, February 04, 2014
Revising an Important Software Testing Course
You would think after writing over 60 software testing courses, I would finally get tired, get bored or something. Yet, new topics interest me, so I keep going.
Another challenge is maintaining all this courseware. Thankfully, I have friends like Tom Staab and Tauhida Parveen to help with this effort.
The course we are revising is Security Testing for the Enterprise and the Web. The need for security testing has never been greater, yet so many organizations place too much trust in their security policies and procedures. The bad guys don't give a rip about policies and procedures; they are out to exploit software defects that give them greater leverage.
As with any software testing course, the key is application. So, we are revising this course with updated examples and exercises using recent exploits as examples.
We'll have it ready soon, so stay tuned for details.