Being involved in software quality means having a lot of conversations with quality assurance folks about the test process. In a recent conversation, we were discussing the use of risk-based testing.
The idea with risk-based testing is to identify the items that are high-risk/high-value to the customer and test them first. That way, you confirm that the most important functionality works before you move on to the less important stuff. Doing risk-based testing doesn't necessarily mean you won't test something, but if push comes to shove and something has to give, it's the low-value stuff that won't get the attention.
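To make the ordering concrete, here is a minimal sketch of risk-based test ordering. The `TestCase` class, its `risk` score, and the example suite are all hypothetical illustrations, not any particular tool's API; the point is simply that the suite gets sorted by risk before anything runs.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    risk: int  # hypothetical 1-10 score: roughly likelihood x customer impact

def prioritize(cases):
    # Highest-risk cases run first; low-risk cases only get attention
    # if time remains at the end.
    return sorted(cases, key=lambda c: c.risk, reverse=True)

suite = [
    TestCase("export report footer", risk=2),
    TestCase("process payment", risk=9),
    TestCase("login", risk=8),
]

for case in prioritize(suite):
    print(case.name)
```

If the schedule gets cut short, whatever falls off the bottom of that sorted list is, by construction, the stuff the customer cares about least.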
At any rate, one of the activities I often advocate for is auditing. Auditing is completely non-value-added work, but it is necessary to ensure that people do what is expected of them. Grim as that reality may be, the cliché "you get what you inspect, not what you expect" holds true.
As I was discussing auditing for evidence that risk-based testing was being applied, a particular tester pressed me on the topic, saying, "we can't always do the high-value cases first because sometimes we have to do a low-value case in order to set up for what we really care about." Essentially, their point was that we should not expect people to test in risk order because other overriding concerns would get in the way.
Unhappy with that answer, of course, I proposed alternating: do a low-value setup case, then immediately run the high-value case it enables. By setting up for one test and then running the high-value case, you at least exercise some high-value functionality as early as possible.
This didn't seem to sit well, and as I prodded for more information, I figured out why. In order to run the high-value cases in this person's example, we were setting up to run a batch job. Thus, we did lots of low-value cases so that we could batch a bunch of high-value stuff together.
It was convenient for the tester, since it minimized task switching, but it was also decidedly batch-and-queue. No one feature was really being tested; instead, we were doing a huge setup activity in order to batch process a bunch of stuff.
It's not at all uncommon for software to run in batches (though I think we can apply lean principles not only to the software process but to the software itself: do stuff in real time instead). But if you set up 50-100 tests and there's something wrong with the configuration, it all becomes wasted effort. A single test, run through the entire process, would quickly and cheaply uncover that something was amiss before you wasted your effort on the remaining 99 cases.
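That single-probe idea can be sketched as a simple gate in front of the batch. Everything here is a hypothetical stand-in (the `run_case` helper, the `config` dict, the case list are all invented for illustration), but the shape of the logic is the point: run one case end to end, and only commit to the remaining setup if it passes.

```python
def run_case(case, config):
    # Hypothetical stand-in for running one test through the whole
    # pipeline: here, a case passes only if the shared config is valid.
    return config.get("valid", False)

def run_batch(cases, config):
    probe, rest = cases[0], cases[1:]
    if not run_case(probe, config):
        # Cheap, immediate feedback: a broken configuration is caught
        # after one case instead of after the whole batch.
        return {"ran": 1, "aborted": True}
    results = [run_case(c, config) for c in rest]
    return {"ran": 1 + len(rest), "aborted": False, "passed": 1 + sum(results)}
```

With a bad configuration, the batch stops after one case; with a good one, the remaining cases proceed as before, so the gate costs almost nothing in the happy path.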
And, if you spend all that time doing setup only to fail everything, not only do you create massive rework for yourself, but you create late-breaking defects for development to fix. All of these are undesirable, and all of them can be avoided. That is, as long as you are willing to trade some of your own personal convenience for targeted, more immediate feedback on whether it’s even worth pursuing the remaining suite of tests.