Is traditional Software Quality Assurance (SQA) dead? Or...does it just suffer from poor perception and bad practice? I hope it is the latter.
I started thinking about this question after hearing a presentation recently that highlighted the problems with traditional SQA. The more I think about it, the more I believe we still need true SQA (not just testing). If you don't know the difference, please read on.
QA and QC
First, we must understand that true QA is not testing, and it is not a verb. To say “then we QA it” is like saying “then we configuration management it.”
SQA focuses on how the process is performed; it is the management of quality. SQA can encompass metrics, process definition and improvement, testing, lifecycle definition, and so forth. The SQA function may perform some tests and reviews (quality control, or “QC”), but unless there is quality management, the effort can easily become haphazard and uncontrolled.
Software testing is QC. So are reviews and inspections. The key difference is that QC focuses on the product itself, with the goal of finding defects. SQA and QC must work hand-in-hand to be effective.
SQA is process assurance, that is, assurance that the process is being performed as designed. QC is product assurance, or assurance that the product meets specifications. So, both activities are needed.
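To make the distinction concrete, here is a minimal sketch in Python (the names and data are hypothetical, not from any real project) contrasting a QC check, which examines the product, with a QA check, which examines whether the process was followed:

```python
import unittest


def apply_discount(price, percent):
    """Hypothetical product code under test."""
    return round(price * (1 - percent / 100), 2)


class DiscountTests(unittest.TestCase):
    """QC: checks the PRODUCT against its specification."""

    def test_ten_percent_discount(self):
        self.assertEqual(apply_discount(100.00, 10), 90.00)


def audit_change_record(change):
    """QA: checks whether the PROCESS was followed for one change.

    `change` is a hypothetical dict such as
    {"reviewed": True, "tests_added": True, "requirement_id": "REQ-42"}.
    An empty result means the process was followed as designed.
    """
    findings = []
    if not change.get("reviewed"):
        findings.append("change was not peer reviewed")
    if not change.get("tests_added"):
        findings.append("no tests accompany the change")
    if not change.get("requirement_id"):
        findings.append("change is not traceable to a requirement")
    return findings


if __name__ == "__main__":
    unittest.main()
```

The unit test can pass while the audit still raises findings, and vice versa, which is exactly why the two activities complement rather than replace each other.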
New vs. Old
I've been developing and testing information systems for over 33 years now. There's always something new that people think will change the way everyone develops software. Think about all the approaches and methods that have come down the pike – from waterfall to agile. Why do people still use the waterfall? Why are some people ardent evangelists of agile?
I think much is explained by comfort zones and culture. People tend to use approaches they are comfortable with. People don't like to change.
However, people do like to be fashionable. New approaches are fashionable, which gives them early adopters who become enthusiastic supporters. After all, when was the last time you saw someone excited about the waterfall approach?
Process-orientation vs. Product-orientation
Back in the ’70s and ’80s, one of the big issues was that most of the focus was on the software itself, not on how it was built. Hence the famous quote: “If builders built buildings the same way programmers write code, the first woodpecker that came along would destroy civilization.”
In 1985, I was practicing eXtreme Programming; it just wasn't called that yet. I worked in tandem with another programmer, developed my tests first, then coded to them, and worked from user stories. The problem was inconsistency: we were the only two working this way, and no one in management wanted to spread the technique. In those days, like today, waterfall was king. Why? Because the waterfall model can be explained in ten minutes or less.
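For readers who have never worked test-first, here is a minimal, hypothetical sketch of that rhythm: the test is written from the user story before the production code exists, it fails, and then the simplest code that satisfies it is written.

```python
import unittest


# Step 1: write the test from the user story
# ("As a clerk, I can total an order including sales tax").
# When this test was first written, total_with_tax() did not exist yet, so it failed.
class OrderTotalTest(unittest.TestCase):
    def test_total_includes_tax(self):
        self.assertEqual(total_with_tax(subtotal=100.00, tax_rate=0.05), 105.00)


# Step 2: write the simplest code that makes the test pass, then refactor.
def total_with_tax(subtotal, tax_rate):
    return round(subtotal * (1 + tax_rate), 2)


if __name__ == "__main__":
    unittest.main()
```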
Software project consultants looked at this state of affairs and concluded that the process has a great deal to do with creating software, and that software development should be an engineering effort, not an art or a craft. Hence the title “software engineer,” and the years that followed spent creating process models such as the Capability Maturity Model (CMM). The CMM was very process-centric; in fact, the original version didn't even have a key process area for software testing. The idea was that if the process was performed correctly, testing would not be needed.
The flaw in the process-only idea is that people are not perfect. There will be mistakes at every step in building software, all the way from concept through system retirement, and the only way to find the defects those mistakes cause is through activities specifically designed to detect them. A filtering approach, in which defects are screened out by inspections combined with early and ongoing testing, is very effective.
The good part of the process focus is that it typically does deliver a better product, with fewer defects injected throughout the project life cycle. This has been demonstrated by organizations with high levels of process maturity that also measure defect detection percentage. (See chart.)
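Defect detection percentage is the measurement referred to above; the common definition is the share of all known defects that were caught before release. A small sketch of the arithmetic, using made-up numbers:

```python
def defect_detection_percentage(found_before_release, found_after_release):
    """DDP = defects found before release / all known defects, as a percentage."""
    total = found_before_release + found_after_release
    return 100.0 * found_before_release / total


# Illustrative figures only: reviews and testing caught 180 defects,
# and customers reported another 20 after release.
print(defect_detection_percentage(180, 20))  # 90.0
```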
Indeed, processes provide a valuable framework for organizing and performing all other project tasks. In short, processes can be improved, they can be shared, and they can be taught.
SQA is a key mechanism by which improvements are made. In organizations it is common to find pockets of both good and bad practice; improvement is rare, which is what led people to see the need for processes in the first place. A few years back, Lewis Gray wrote a great article for Crosstalk Journal entitled “No Hypoxic Heroes, Please,” in which he makes a compelling case for software processes using the example of mountain climbers, who follow processes and standards to keep from making bad decisions when their minds become oxygen-deprived and their ability to reason is impaired. We see the same thing on software projects, especially as the deadline looms closer and closer.
Bad SQA Practice
It is possible to take any effective tool or approach and apply it in an ineffective way. Some organizations have built the SQA function into a bureaucracy that slows projects down and adds little value to the organization. In fact, defects may even increase because people spend so much time on paperwork. That was never the objective of SQA, “old school” or otherwise.
Other organizations have turned SQA into a police force that investigates, audits, and regulates software projects. The intent is noble, but the rest of the project lives in fear of the SQA team and what it can do to them.
In some organizations, SQA is a gatekeeper: to get software into production use, it must have QA approval. Once again, this is a negative view of what SQA is intended to do. In reality, the decision to implement should be a team-based decision driven by risk.
The average lifespan of an SQA group is about two years. That's because after about two years, senior management asks, “What do these people do?” Unless the SQA team can show tangible, positive value, the decision will likely be made to try something else to improve software quality. All too often, the SQA team's work is seen as intangible paper-shuffling.
Back in 2000, I gave a keynote presentation at QAI's testing conference in which I made one major point: QA and test organizations must be aligned with business and project objectives, or else they will be marginalized and most likely eliminated. In other words, the QA and test teams must find the right balance between finding and preventing problems and not standing in the way of progress.
I believe the only people who can direct that balance are the business stakeholders. These are the people who live with the level of software quality, or at least know what they want the business's customers to experience.
What Can Really Be “Assured”?
Not much, in my experience. We can't guarantee perfection since we can't test everything. Likewise, we can't assure a process has been perfectly performed. Even if a process is perfectly performed, the process itself can be flawed.
This leads me to my final point, which concerns processes performed by professionals versus those performed in a factory setting. In a factory, you want everyone doing the same thing in the same way, with no variation at all.
However, the more professional the effort and the person, the more you can rely on their expertise to do the job expertly. There are many examples of this, such as great chefs preparing great meals without a recipe, or great doctors not reading the process book as they perform surgery. What is not seen, however, are the rules, standards, and protocols each of these professionals adheres to. The chef cooks food to pre-established safe temperatures to prevent food poisoning. The surgeon has a team of people following an exact checklist to make sure all preparation is correct before the surgery and all surgical instruments are accounted for afterward.
In software development we rely on many people to get it right. All the way from concept to delivery, developers, business analysts, architects, testers, DBAs, trainers, management, customers and others must work together in professional ways. There may or may not be a formal software life cycle followed.
When things work well, no one seems to think much about software QA or testing. It's like the air conditioning - no one gives it a thought until it breaks down. However, when the product being delivered starts to slip and the customers start to leave, then management starts thinking “Maybe we need some structure in place to make sure we do the right things in the right ways.”
Smart QA
QA, done well and done smartly, can be a very helpful activity. The problem comes when people don't match the QA approach to the business and project context.
Instead of doing some basic root cause analysis to find the true sources of the problems, which could then be addressed and prevented, some companies embark on major pushes to install a new QA program, a new SDLC, or both.
Instead, how about some simple steps, such as basic processes, checklists, guidelines, and redesigned tests? Then, after seeing how those work, we can make further adjustments.
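As one illustration of how lightweight those steps can be, here is a hypothetical pre-release checklist expressed as a few lines of Python rather than as a bureaucratic procedure:

```python
# A deliberately small, hypothetical pre-release checklist.
# Each entry is a question the team answers honestly before shipping.
CHECKLIST = [
    ("Requirements reviewed with the business stakeholders?", True),
    ("High-risk areas identified and tested first?", True),
    ("All severity-1 defects resolved or explicitly accepted?", False),
    ("Rollback plan documented?", True),
]

open_items = [question for question, done in CHECKLIST if not done]

if open_items:
    print("Not ready to ship. Open items:")
    for item in open_items:
        print(" -", item)
else:
    print("Checklist complete; the ship decision goes to the stakeholders.")
```

The point is not the script itself but that the check is small, visible, and owned by the team rather than imposed on it.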
Conclusion
I don't think true QA (not testing) is dead, but I do think it suffers from bad practice and poor perception. I also believe there is a pendulum effect which swings from one extreme to another. In the case of software, the pendulum swings from “no process” to “all process”. Right now, we are at the “no process” end of the swing, with movement toward the center.
QA is a function that each organization must decide how best to adopt. Some will reject QA entirely, some will adopt it smartly and some will adopt it inconsistently or with bureaucratic approaches.
I hope you apply QA smartly. If you need help in doing that, call me!