Capers Jones, one of my favorite authors on metrics in IT, has often called “Cost Per Defect” a bad idea. His argument is that where quality is high, cost per defect will be higher, and it is easy to see why the math works out that way.
If you spend $100 (1 hour @ $100/hour, let’s say) testing something and find 1 bug, the cost per defect is $100. If you spend $100 testing and find 10 bugs, then the cost per defect is $10. In this case, it’s cheaper to test where quality is poor.
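The arithmetic above can be sketched as a tiny helper. This is purely illustrative; the function name and the choice to return None for the zero-defect case are mine, not part of any standard metric definition:

```python
def cost_per_defect(testing_cost, defects_found):
    """Divide testing spend by the number of defects found.

    Returns None when no defects are found, since the ratio
    is undefined (division by zero).
    """
    if defects_found == 0:
        return None
    return testing_cost / defects_found

# $100 of testing (1 hour @ $100/hour) finds 1 bug: $100 per defect.
print(cost_per_defect(100, 1))   # 100.0
# The same $100 of testing finds 10 bugs: $10 per defect.
print(cost_per_defect(100, 10))  # 10.0
```

Note how the denominator, not the spend, drives the metric: the same hour of testing looks ten times cheaper against poor-quality code.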
Now, I am not advocating a test-only approach simply because it lowers cost per defect. But I think this mathematical reality raises a much more important question: if quality is good, why do you continue to test?
Testing may very well be a fixed cost regardless of quality. You, in theory, don’t know if the code delivered to you is good or bad. Therefore, you write a set of tests assuming that it all needs to be covered and run those tests. If the quality is good, it appears expensive to test and if the quality is bad, then testing appears cheap. Thus, Jones argues, cost per defect encourages poor quality.
I argue it encourages something else – an opportunity to revisit whether you should be testing at all. Jones argues that the cost per defect measure is economically perverse because it punishes high quality by making testing look expensive. I’d argue that testing was never value-added in the first place. Therefore, when high quality is delivered, why do you continue to waste your time testing?
Instead, I propose that cost per defect is a great measurement for a test organization. It clearly articulates that testing adds little value where quality is high, which is precisely when cost per defect is high. When no defects are found, cost per defect is undefined; although this is mathematically annoying, it represents pure waste. Testing and finding no defects has absolutely no value to your customers.
If you think testing in quality is the way to achieve good code, then Jones’s argument makes sense. If you’re always going to test regardless of the situation, then it is an odd measurement. If, however, you think that testing should not be conducted (or at least done far less) when you know the quality is good, cost per defect drives the right behavior – less testing where quality is good.