Wednesday, February 24, 2010
Here's the deal. More than ever, software is an integral part of the electronics that control cars. All software has defects. When a software defect causes a car to accelerate to 100+ mph, that's not a bug - that's catastrophic system failure. A few years back, Volvos were coming to a dead stop at highway speeds because of software defects.
A friend of mine spent months last year trying to convince multiple Ford dealers that her Ford Expedition was stalling on the highway. They all said the electronic control unit (ECU) was fine. Eventually they found the problem - the ECU was bad.
The facts are not all in, but I would not be surprised if many of the Toyota problems are software defects.
My prediction for many years has been that one day a major software failure will cause such death and destruction that Congress will start to regulate any software development with safety impact, much as is already done in industries overseen by the FDA and NRC. The next big area of regulation will likely be transportation. Not that regulation is the answer - regulations fall short because people find ways around them. Ultimately, quality is an ethical business issue.
For many, many years, Toyota has had a halo in terms of quality. Now that halo is gone and may never be regained.
Keep an eye on this story.
Thursday, February 18, 2010
To see the complete schedule of all courses and pricing, click here.
If you would like to have advanced courses in your city in the USA and have six or more people who can attend, contact me and I would be happy to schedule a class close to you.
To learn more about ISTQB Advanced Level Certification and the prerequisites, please click here.
Rice Consulting Services has teamed with Grove Consultants (http://www.grove.co.uk) to provide an outstanding training experience for advanced level certification.
I will be the trainer for these courses. I bring over 20 years of software testing training experience in major organizations worldwide, and I hold the full CTAL certification. The materials are licensed from Grove Consultants in the U.K. If you have seen any conference tutorials or presentations by Grove Consultants, you know the high quality and engaging nature of their presentations and materials.
ISTQB Advanced Level testers are an elite group in the USA right now, so now is the time to join that special minority of software testers and managers!
Friday, February 12, 2010
In our never-ending dedication to bring you the best value in training, we are excited to provide our students one electronic exam voucher per student as part of the registration. Each student also receives a copy of the book Foundations of Software Testing.
Our students have great success in passing the exam. Every student has direct access to Randy by phone and/or e-mail to answer any questions along the way.
Combine all this with a money-back guarantee and 12 months of access to materials (even after you take the exam), and you have a great deal for preparing for the CTFL exam and building testing skills to apply on your job.
Tuesday, February 09, 2010
It seems that so many people I consult and train want quick and easy measures from their software development or testing efforts. I wish I had that shiny silver bullet, but in my experience and observation it takes time to get metrics in place that really work for you and your organization.
And, really, you sure don't want to make decisions based on faulty information.
Here's why good metrics take time:
You Need History
At any point in time, a measurement is like taking one frame from a movie. For it to make sense, you need to see what has happened prior to the frame. To know how things turn out, you need to see what happens after the frame.
In software, this history starts when you start taking measurements. It continues as you keep measuring and refining your measurements.
The longer the history, the more accurate your understanding of your metrics tends to be. The key word is "understanding". Your measurements won't necessarily get more accurate over time unless you keep questioning and improving them.
Too many people want to substitute other organizations' history for their own. I discourage this practice because every situation is different. Just because one company has a certain level of success with a given level of people or type of test technique doesn't mean you will have that same level of success.
Just start measuring a few key things that are easily obtainable and that are meaningful. It's important to track measures over time with specific dates associated with them. For example, you can start measuring the number of defects found by project. While this measure alone doesn't make a metric, you can start to see trends concerning the rate of defect discovery.
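To make the "track a few key measures over time, with dates" idea concrete, here's a minimal Python sketch. The project name, dates, and defect counts are all invented for illustration:

```python
from datetime import date

# Hypothetical weekly defect counts for one project -- the dates and
# numbers here are made up for illustration only.
defects_found = {
    date(2010, 1, 4): 12,
    date(2010, 1, 11): 9,
    date(2010, 1, 18): 5,
    date(2010, 1, 25): 3,
}

# A simple trend: cumulative defects over time. A flattening curve
# suggests the rate of defect discovery is slowing down.
cumulative = []
total = 0
for d in sorted(defects_found):
    total += defects_found[d]
    cumulative.append((d, total))

for d, t in cumulative:
    print(d, t)
```

Even a tiny record like this, kept consistently week after week, is enough to start seeing the discovery-rate trends mentioned above.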
You Need Context
Metrics give context to measurements. A measure is a single count or extent of something, such as the number of defects. A metric is a measure taken in context with other measures. For example, the average number of defects per function tells us defect density from a functional perspective.
It takes time to know the best way to get this context and understand what it really means to you. I was once on a project where the client wanted to know how many test cases passed during a test. Fine. That's a good thing to know, right?
Then, people started to observe that the measure of passed tests has different meaning depending on when it is measured and reported. For example, a very high percentage of passed tests in the first round of testing may indicate the tests are too weak. Just before deployment, you want a high percentage of passed tests to indicate the application is ready to release. So, you may want to have two metrics: “first time tests that pass” and “final test pass percentage.”
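The two pass-rate metrics described above can be sketched as follows. The test IDs and results are made up for illustration:

```python
# Hypothetical test results: (test_id, passed_on_first_run, passed_on_final_run)
results = [
    ("TC-01", True,  True),
    ("TC-02", False, True),
    ("TC-03", False, True),
    ("TC-04", True,  True),
    ("TC-05", False, False),
]

total = len(results)

# "First time tests that pass" -- a low number early on is expected;
# a very high number may mean the tests are too weak.
first_time_pass_pct = 100 * sum(1 for _, first, _ in results if first) / total

# "Final test pass percentage" -- this is the one you want high
# just before deployment.
final_pass_pct = 100 * sum(1 for *_, final in results if final) / total

print(f"First-time pass: {first_time_pass_pct:.0f}%")   # 40%
print(f"Final pass:      {final_pass_pct:.0f}%")        # 80%
```

Same underlying measure (tests passed), but reported at two different points in time, it becomes two distinct metrics with very different interpretations.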
You Need Time for Refinement
After you measure things for a while, you may learn you aren't measuring the right things, or you may be measuring the right things in the wrong way.
It's fine and even expected to make major re-adjustments in your measurements. Just make sure to indicate on charts and tables when the adjustments were made.
You Need Growth
Please don't start a big metrics program with dozens of measurements. These programs often fail under their own weight. They are so big and complex, people either ignore them or give up quickly. Instead, start small and grow over time.
Metrics is a discipline that takes time and attention to do right. It also takes time to gain people's trust about how the metrics will be used. Sometimes people are fearful that the measures will be used against them, such as in a performance review.
You Need Early Successes
Pilot projects or proofs of concept are great for showing the value of ideas and approaches. They reduce the risk of failure and give you a place to practice and perfect things before trying them in the larger arena, where everything becomes much more visible. It's a lot better to build a positive public image on small successes than to overcome the negative image from a major public failure.
You Need Management Understanding and Buy-in
As you grow and publish your metrics, management needs to learn the great value in them for the management of projects. Management also needs to get used to the information you can provide. They may suggest or request additional metrics or changes to existing ones. That's a good thing because it shows they are engaged in the effort. This only happens over time.
I hope this shows why it takes time to get reliable and meaningful metrics. There are no magic answers, tools or techniques for getting a good set of metrics in place. That's why so few organizations reach this level. It's hard work and takes time, but it's worth the effort to show the added value you bring to your company. When you are “the” person that understands the metrics, that makes you a key person not only for your team, but for your company...and that's a very good thing!
Here's the announcement from SearchSoftwareQuality.com:
We're just 2 weeks away from the complimentary virtual seminar 'Application Performance Management: Build, Bug and Lifecycle Strategies.' Gain access to industry experts and real-world practitioners who reveal new insights to boost app performance and ensure it remains a key focus throughout your application development lifecycle.
Don't miss this opportunity - register here and mark your calendar:
TITLE: Application Performance Management: Build, Bug and Lifecycle Strategies
WHEN: Wednesday, February 24, 2010
TIME: 9:30 am - 6:00 pm EST
WHERE: Your Desktop
REGISTER FOR THIS VIRTUAL SEMINAR TODAY:
TOP 5 REASONS TO ATTEND THE APPLICATION PERFORMANCE MANAGEMENT VIRTUAL SEMINAR
1. GET PRACTICAL INFORMATION TO USE IMMEDIATELY
Hear relevant advice that you can put into action right away to help your organization build the foundation for high performance. No pie-in-the-sky theories, just practical, useful information from today's top performance experts.
2. EXPAND YOUR NETWORK OF PEERS
Share insights and discuss relevant strategies in our unique virtual environment with those facing the same challenges you're up against when testing and setting the stage for monitoring and maintaining application performance throughout its lifecycle. Stop by the networking lounge to hear success stories and lessons learned.
3. INTERACT WITH EXPERT SPEAKERS DURING OUR PANEL DISCUSSION
Discuss performance implications and testing tools for running application components in private and public clouds. Don't go it alone when you can benefit from the experience of these true performance and cloud experts.
4. WIN PRIZES DURING INTERACTIVE BREAKS
Sessions are an intensive, educational experience that will prove incredibly valuable to your organization, but we've built in some fun, interactive breaks during which we'll announce our prize winners. Attend the day's seminar for a chance to win a Sony Cyber-shot camera, an Amazon.com gift card, or a book on dependency injection. You'll have multiple chances to win - by visiting a booth, attending a session, and just for coming to the event!
5. INTERACT WITH TOP VENDORS
Speak directly to leading performance vendors and test drive the latest solutions with live product demos. They understand your pain points, and strive to help you make the best IT investments for your organization.
Ensure quality in your application performance management. Register with one click here:
Friday, February 05, 2010
Published by Wiley, 2009, 638 pages
In the third edition, Rex extends a work he started in 1998 with the first edition. At that time, and still today, practical books on software test management are a minority among books on software testing. This book has become a commonly referenced work on software test management for a reason - it is practical.
Two things I really like about this book are that it is very readable and it has broad coverage of test management topics. As a trainer of thousands of test managers, I have learned that many test managers are thrust into the role with little knowledge or preparation. So, my perspective is that test managers need to know the mechanics of the software test profession as well as the managerial and leadership aspects of the job. This book delivers well on both counts.
By reading and applying the information in this book, you will learn the testing process from test planning and building the test architecture to building a test team, measuring test results, and conveying those results in a value-added way. Rex covers very relevant topics, including test outsourcing and the context of projects and software lifecycles. I also very much appreciate the discussion of dealing with the people issues in testing.
A note on the book's use of spreadsheets and vendor-neutral tool examples: once you understand the structure of organizing testware, you can apply that structure in any specific test tool. Plus, not everyone owns a commercial test management tool. I know many people who manage a lot of test items using Excel.
I can highly recommend this book to test managers and leaders, as well as people who aspire to be in those roles.