In the words of Michael Feathers, legacy code is code without tests. My corollary to this is that code coverage is a legacy metric — best applied to these very same legacy systems with poor, inconsistent, or absent testing practices.
You may have heard code coverage described as a negative metric (for example, in this Hanselminutes Podcast with Quetzal Bradley). That is, it can tell you what is bad (specifically, areas completely uncovered by tests) but not much about what is good. Is a code base with 95% coverage really thoroughly tested, or just hitting all the blocks without exercising the deeper states and interactions? This ground has been covered many times; you may as well read Martin Fowler’s thoughts on test coverage rather than listen to me expound unnecessarily on those particular points. Instead, I would rather drill down into the legacy aspects of code coverage.
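As a tiny illustration of the "hitting all the blocks" problem, consider this hypothetical function and test. Every line executes, so coverage reports 100%, yet an entire class of inputs goes untested:

```python
def average(values):
    """Arithmetic mean of a list of numbers."""
    return sum(values) / len(values)  # one line, fully "covered"...

def test_average():
    # This single test executes every line of average(),
    # so a line-coverage tool reports 100%.
    assert average([2, 4]) == 3

# ...but the empty-list state was never considered:
# average([]) raises ZeroDivisionError, and coverage said nothing.
```

The metric measured which lines ran, not which states were exercised.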
I see more potential in the use of broad code coverage metrics as part of a transitional strategy to improve engineering culture than as a permanent fixture. Just as a smoker might be weaned off cigarettes with the help of a nicotine patch, an engineer writing code without tests could be nudged into healthier habits by seeing enough big red zeroes in the coverage reports. But one should not expect to wear a nicotine patch for life and consider this a successful end state.
Assuming these healthy habits do emerge (e.g. adoption of TDD or similar strategies to promote a fast quality feedback loop and prevent further legacy code creep), mandated code coverage targets are at best superfluous — the team is probably tackling bigger and better things by that point. However, if removing the code coverage bar causes major regressions, you would have to assume that the proposed push away from legacy code just wasn’t in the cards. Culture change is hard.
Legacy notwithstanding, code coverage is just a tool and can be wielded usefully from time to time. One promising scenario I have seen more recently is tracking differential coverage: reporting only on the code changing right now that has no tests. This strips out most of the irrelevant information you would see in a full coverage report. There is some tooling out there like pycobertura to help provide this data, but nothing I’ve seen so far provides a really great “dev inner loop experience” (read: real-time feedback directly in your editor window).
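To make the idea concrete, here is a minimal sketch of the differential part (not pycobertura itself; the file names and line sets below are hypothetical stand-ins for a parsed coverage report and a parsed git diff):

```python
# Differential coverage sketch: intersect "lines changed in this diff"
# with "lines never executed by the test suite", per file.

def uncovered_changed_lines(covered, changed):
    """Return {path: sorted uncovered line numbers} for changed lines only."""
    report = {}
    for path, lines in changed.items():
        missing = sorted(lines - covered.get(path, set()))
        if missing:
            report[path] = missing
    return report

# Hypothetical inputs: lines executed by tests (from a coverage report)
# and lines touched by the current change (from a diff).
covered = {"app.py": {1, 2, 3, 10}, "util.py": {5, 6}}
changed = {"app.py": {2, 11, 12}, "util.py": {5}}

print(uncovered_changed_lines(covered, changed))
# → {'app.py': [11, 12]}  (util.py's change is already covered, so it drops out)
```

The payoff is the filtering: the full report's noise about old untested code disappears, and only the untested lines you are responsible for right now remain.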
Code coverage has its place, but do not get complacent. When used as a tactical weapon in wiping out egregious engineering debt, code coverage can be a big help. But after the legacy code is driven out, will coverage metrics still fuel any meaningful change?