An (accidental) study on the causality of static analysis

Capers Jones, in his book “The Economics of Software Quality,” lends support to the idea that static analysis is an effective tool for improving quality. He doesn’t directly address, however, whether static analysis merely correlates with better quality or actually causes it.

The open question with static analysis is whether the tools cause better quality, or whether teams that already care about quality are simply more likely to adopt static analysis tools in the first place. While I can’t answer that question definitively, I can provide a data point.

A large organization in financial services had installed a commercial static analysis tool a couple of years earlier. During that time they collected lots of data from the tool but never acted on the vast majority of its recommendations. In effect, they accidentally conducted an experiment on the direction of causality. The organization also had a function point counting capability mature enough to measure productivity and functional correctness while accounting for variability in project size.

If static analysis scores genuinely predict quality, then even with no action taken on the tool’s findings, applications that scored better in the tool should have shown higher team productivity or better functional quality. In essence, static analysis should predict better outcomes in the absence of any other action. It didn’t. We found no evidence of a relationship between static analysis tool results and customer outcomes: productivity or quality. Now, it may have just been the tool selected, and without more experiments we can’t rule that out. But it’s one data point on whether static analysis tools make a meaningful difference to quality.
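For readers curious what that kind of check looks like, here is a minimal sketch of testing for a relationship between static analysis scores and function-point-normalized outcomes. The column names and the input file are hypothetical, invented for illustration; this is not the organization’s dataset, tool, or exact method.

```python
# Minimal sketch: do static analysis scores predict size-normalized outcomes?
# Assumes a CSV with one row per application and hypothetical columns:
# static_analysis_score, production_defects, function_points, effort_person_months.
import pandas as pd
from scipy.stats import spearmanr

apps = pd.read_csv("applications.csv")  # hypothetical per-application data

# Normalize outcomes by function points so projects of different sizes are comparable.
apps["defect_density"] = apps["production_defects"] / apps["function_points"]
apps["productivity"] = apps["function_points"] / apps["effort_person_months"]

# Rank correlation between the tool's score and each outcome; a correlation
# near zero is consistent with "no relationship" even if nobody acted on the findings.
for outcome in ["defect_density", "productivity"]:
    rho, p = spearmanr(apps["static_analysis_score"], apps[outcome])
    print(f"{outcome}: Spearman rho = {rho:.2f}, p = {p:.3f}")
```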
