Wednesday, January 06, 2010

Tester to Developer Ratio Initial Research Findings

Thanks to everyone who provided data for my latest research project on tester to developer ratios. This topic has interested me for over ten years, since I did my first surveys on tester to developer ratios. The results of that survey and my thoughts at the time are in an article called The Elusive Tester to Developer Ratio.

This short article documents my early findings. I plan to continue the surveys and data gathering, so if you did not get in on the first round of surveying, I would still like to hear about your ratios.

The participants of the recent survey were subscribers to my newsletter, The Software Quality Advisor, and the audience at my StarWest tutorial on Becoming an Influential Test Team Leader. There were 53 respondents in all, mostly from North America, but six were from Europe and one was from Asia.

I asked four questions:

1) How many developers are in your organization?
2) How many testers are in your organization?
3) On a scale of 1 to 6, where 1 is poor and 6 is super, how would you rate the effectiveness of your current ratio?
4) Do you have any anecdotal information about your current ratio's effectiveness?

The leanest ratio was twenty developers to one tester (effectiveness rating of “two”), while the richest ratio was fifteen developers to eighteen testers (effectiveness rating of “four”). There was one anomalous response of four developers to zero testers (the effectiveness rating on that one was “three”). The average ratio was 4.52 developers to one tester. The most common response was three developers to one tester (six responses); the next most common was 2.5 developers per tester (five responses). There were twenty-six responses with developer to tester ratios of 3:1 or lower.
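A quick note on the arithmetic: an “average ratio” like 4.52:1 is presumably the mean of each respondent's developers-per-tester ratio, not total developers divided by total testers, and a zero-tester response has to be excluded since its ratio is undefined. A minimal sketch of that calculation (the pairs below are made up for illustration, not the actual survey data):

```python
# Mean of each respondent's developer-to-tester ratio.
# The (developers, testers) pairs below are made up for illustration;
# they are NOT the actual survey responses.
responses = [(20, 1), (15, 18), (3, 1), (5, 2), (4, 0)]

# A zero-tester response has no defined ratio, so it must be excluded
# (like the anomalous "four developers, zero testers" answer above).
ratios = [devs / testers for devs, testers in responses if testers > 0]

average_ratio = sum(ratios) / len(ratios)
print(f"Average: {average_ratio:.2f} developers per tester")
```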

Here are some of my initial observations and comments:

1) The responses varied greatly.

For those looking for an “industry norm” of developer to tester ratios, this may show that the range of workable ratios is wide. Effective testing may be achieved through better practices, better tools, and leveraging developer-based testing, rather than simply having more testers.

2) Over half of the responses were at the “richer” ratios.

The average effectiveness reported by this group (3:1 or lower) was four, which is above average. Interestingly, the average effectiveness for the higher ratios was three, which is average and not a huge difference from the lower ratio group.

3) In the higher ratio group, there were some with higher-than-average test effectiveness ratings of four or five.

This tells me that you can have a higher ratio and still be effective at software testing. Put another way, the magic of good testing may not be in the ratio of developers to testers.

I have always questioned the idea of using developer to tester ratio as a way to staff or estimate testing efforts. Sheer body count is just not enough information to base testing effort upon.

That said, I think developer to tester ratios may be a helpful metric to understand the workload in a test organization. For example, if I were presented with a situation where the developer to tester ratio is ten to one, I would ask:
  • Are any test automation tools being used? If so, how effective are they?
  • How much responsibility do developers have in the testing process?
  • Is testing based on risk?
  • Are test optimization techniques used in test design?
  • What is the defect detection percentage (DDP)? (A sketch of this metric follows below.)
  • Are defect trends tracked and studied?
  • Have the developers and testers been trained in software testing?
  • Is there a defined testing process in place and being used?
These questions would help determine the balance and effectiveness of the testing process. Before making team sizing decisions based on numbers of people, it may actually be better to use the developer to tester ratio as a metric to guide the testing process.
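For readers unfamiliar with DDP, it is commonly defined as the share of all known defects that testing found, counting defects discovered later (for example, in production) against the tester. A minimal sketch, assuming that common definition:

```python
def defect_detection_percentage(found_in_test: int, found_later: int) -> float:
    """DDP: the share of all known defects that testing caught.

    Commonly defined as defects found by testing, divided by all
    defects eventually found (in testing plus after release).
    """
    total = found_in_test + found_later
    if total == 0:
        raise ValueError("No defects recorded; DDP is undefined.")
    return 100.0 * found_in_test / total

# Example: testing found 45 defects; users later reported 5 more.
print(f"DDP = {defect_detection_percentage(45, 5):.1f}%")  # DDP = 90.0%
```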

I did this on my first job as a test manager. I had a team of three people testing the work of thirty developers. The ten to one ratio told me that we could not test all the work coming our way.

We had no tools, just our wits. So, we developed a strategy:

1) Get management to lead the way in sending the message to developers that testing is part of their job
2) Train and mentor each developer to be a good tester
3) Test the high risk changes at the highest priority
4) Test anything a developer asked us to test (unless there was no documentation)
5) Do not test anything without a defined user requirement
6) Use cause/effect graphing and other test optimization techniques to get the most testing from the fewest tests (a sketch follows this list)
7) Build a robust and repeatable test data set for manual regression testing
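
To illustrate item 6 above, here is a minimal sketch of the decision-table side of cause/effect graphing: express the causes as booleans, write each effect as a logic expression over them, and generate the table your tests must cover. The causes and the discount rule below are hypothetical, not from the original project:

```python
from itertools import product

# Hypothetical causes for a discount rule (not from the original project).
CAUSES = ["is_member", "order_over_100", "has_coupon"]

def discount_applies(is_member: bool, order_over_100: bool, has_coupon: bool) -> bool:
    # The effect as boolean logic over the causes: members with large
    # orders get the discount, and so does anyone holding a coupon.
    return (is_member and order_over_100) or has_coupon

# Full decision table: every combination of causes and its effect.
print(" | ".join(CAUSES) + " | discount")
for combo in product([True, False], repeat=len(CAUSES)):
    row = " | ".join(str(value) for value in combo)
    print(row + " | " + str(discount_applies(*combo)))
```

In practice, cause/effect graphing heuristics collapse combinations that cannot change the outcome, which is where the “most testing from the fewest tests” payoff comes from; even the full table here is only 2**3 = 8 tests.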

The result was that 1) we kept up with the workload and 2) the error rate went from 50% of changes with defects to 2%. At this point, we still had a ten to one developer to tester ratio. This may work for you, too. If it does, please send your check made out to Rice Consulting Services at P.O. Box 892003, Oklahoma City, OK 73170. :-)

I hope this information helps you understand your own ratio a little better. If you would like to contribute your own ratio to my data, just reply to me here with the four items:

1) How many developers are in your organization?
2) How many testers are in your organization?
3) On a scale of 1 to 6, where 1 is poor and 6 is super, how would you rate the effectiveness of your current ratio?
4) Do you have any anecdotal information about your current ratio's effectiveness?

Thanks!

6 comments:

William Echlin said...

I like the strategy for dealing with environments where the developer to tester ratio is very high. The point about getting the message to developers that testing is part of their job is valid in all teams.

In my experience it's all too easy for developers to get into the mindset that they can just chuck the code at the test team and the testers will pick up any problems.

Continually banging the drum about developers being responsible for their own testing, and training the developers to test well, sounds like a very smart move. Maybe the same approach could be applied to the business analysts and/or architects. These guys might need different types of test skills (seeing as they are working at the requirements capture and design stage), but it still holds true that bugs found at this stage save a lot of effort later on. And after all, quality is the whole team's responsibility, not just the test team's!

William Echlin
www.SoftwareTesting.Net/blog

Randy Rice said...

Hi William,

Thanks for your comments! I think the key in my situation (and probably in most situations) is to have management sending the message that testing is everyone's job.

There seems to be a recurring question: "Why don't developers do better testing?" Lee Copeland writes that it's like asking, "Why don't kids clean their room?"

As testers, we can harp and moan all day, but until it becomes part of the process and gets rewarded, not much will change. Also, speaking as a former developer, I know it takes time, tools and training.

I like the agile approaches of testers and developers working side-by-side, and test-driven development. Those practices have done a lot to reinforce the idea.

I also often say that there are some tests, like structural unit tests, that only developers are equipped and able to perform. If they don't do these, no one else will.

I agree this can apply to BAs, architects and others. Everyone needs to know that they have a role in testing.

Thanks!

William Echlin said...

I couldn't agree more with what you've said here. This whole post has really hit a nerve with me.

I think that's because, in my early days as a tester, I used to get a real buzz out of building a good relationship with the developers. As the "tester who's a poor relation to the developer," I found it a fantastic challenge to prove to the developers that, technically, I could be as smart as they were.

In comparison, I've come across some test managers who think it more healthy to create an abrasive relationship between the software testing and software development teams. I've never been convinced of this approach, mind you. It's certainly not in line with the agile approach you mention here.

Your post takes it the next couple of levels up and explains how and why it's important to continue to nurture and build a good relationship as you go up through the hierarchy of the whole team. And nowhere is that tester/developer relationship more important than in an environment where you have a high developer to tester ratio.

No doubt the tester to developer relationship is one which will continue to fascinate me.

Thanks for your insight.

William Echlin
www.SoftwareTesting.Net/blog

Randy Rice said...

Hi William,

It sounds like we are in violent agreement! In the Top Ten Challenges of Software Testing book, I tell the story of my first test manager job. The CEO told me he wanted me to "tear up the new system to make it better." I took it a step further and started tearing up the developers as well. Before long, we had to call a truce and find ways to work together instead of against each other. Like you, I have seen senior managers try to foster an adversarial mindset to get a better product. I have never seen it produce good results. It always leaves a culture of distrust and infighting.

Thanks for your comments!

Randy

Michael Larsen said...

Hello Randy!

Interesting post, and one that has given me much to think about. In my organization, the developer to tester ratio is 8:1, and that's because I'm the sole dedicated tester in my company (we're small). To this end, it absolutely requires our developers to place an emphasis on doing early testing of their code, because I cannot do all of it myself (though I certainly try).

We are in the infancy stage of gearing our products toward more test-driven development, and I'm in the process of getting more automated testing in place, but when you're the lone gunman, time to automate tests is oftentimes a precious commodity. I'm fortunate in that I work with a good, dedicated team of developers who want to improve their processes and quality, and don't look at me as a point of friction in that process (I frequently emphasize that I'm more of a "beat reporter" than the all-powerful "guardian of all things quality"... that has helped foster a much less divisive relationship).

Always enjoy your comments and perspective, and look forward to more podcasts and articles in the future :).

Michael Larsen

Randy Rice said...

Hi Michael,

Thanks for your comments and positive words. To me, what you've said is very important. It's a matter of finding a way to be effective at whatever ratio you have. It's great that you are not seen as the friction point. That's huge. It's when QA and test are seen as the roadblock that the relationship is at risk. Hang in there! Oh, and thanks for the comment about the podcast. I'm planning to resume those very soon.

Randy