It’s time we cleared something up about testing. There is a misconception that I fear is more commonly held than it should be, and it usually begins with “finding defects in testing is good, because we won’t find them in production.” That statement is true. However, it implies something that is not true: that you can test quality in.
Here’s the thing: testing is about 35-50% effective per test type (unit, functional, performance, etc.). If the code is good, testing is 35-50% effective. If the code is bad, testing is STILL only 35-50% effective. That means that if you find a lot of defects in test, there are even more left to find, and they will make it to production.
So, for example, let’s say you had two teams code the same functionality, and they both delivered code to you. You run the same set of tests against each delivery and find 100 bugs in team A’s code and 10 bugs in team B’s. Both teams fix all the bugs you found, and you retest to make sure that none of your tests subsequently break. Is it fair to say that team A’s code and team B’s code are now effectively equivalent?
I’ll give you a hint: no. You had a fixed set of tests to run, and you didn’t adjust it in response to the quality of team A’s code, so your testing does nothing for the code you didn’t adequately exercise. The same is true on a real project. If you write a set of test cases in advance of receiving the code, and the code quality turns out to be poor, then unless you devise additional tests to increase coverage, you should assume the unexercised code is of poor quality as well. Thus, you will let more defects into production.
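The arithmetic behind that example can be sketched in a few lines. This is a back-of-the-envelope model, not a defect-prediction method: it simply assumes that the bugs you find are a fixed fraction (the test type’s effectiveness) of the bugs that exist, and solves for what’s left.

```python
def estimated_remaining_defects(found: int, effectiveness: float) -> float:
    """Estimate defects that escape to production, assuming the defects
    found are a fixed fraction (`effectiveness`) of the true total."""
    total = found / effectiveness
    return total - found

# Team A found 100 bugs in test; team B found 10.
for team, found in [("A", 100), ("B", 10)]:
    for eff in (0.35, 0.50):
        remaining = estimated_remaining_defects(found, eff)
        print(f"Team {team} at {eff:.0%} effectiveness: "
              f"~{remaining:.0f} defects remain")
```

Under this model, team A’s 100 found bugs imply roughly 100 to 186 more still lurking, while team B’s 10 imply only about 10 to 19. Same test suite, same effectiveness, wildly different escape counts.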
Testing does not, let me repeat, does NOT become more effective as code quality gets worse. You don’t suddenly get magical results from testing just because you delivered bad code to testing. You get the same percentage results from testing, and let more bad code through. This is why you cannot test quality into the system.