The Case Against 100% Code Coverage

Going for Gold Can Mean Going for Broke

Ryan Craven
Testopia

--


Code coverage measures how much of your codebase is exercised by your test suite. Having 100% coverage means that every line of code is executed at least once when your tests run. This is often held up as the ideal testing target. However, 100% code coverage can be misleading and counterproductive.
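To make "exercised by your test suite" concrete, here is a toy sketch of line coverage using Python's built-in `sys.settrace` hook. This is not how a real tool like coverage.py is implemented internally, and `risky_discount` is a hypothetical function invented for illustration; the point is simply that coverage records which lines ran, nothing more.

```python
import sys

# Hypothetical function under test: two branches, but the single
# "test" below only ever exercises one of them.
def risky_discount(price, code):
    if code == "SAVE10":
        return price * 0.9
    return price  # never executed by the test -> an uncovered line

executed_lines = set()

def tracer(frame, event, arg):
    # Record each source line executed inside risky_discount.
    if event == "line" and frame.f_code.co_name == "risky_discount":
        executed_lines.add(frame.f_lineno)
    return tracer

old_trace = sys.gettrace()  # preserve any existing tracer
sys.settrace(tracer)
assert risky_discount(100, "SAVE10") == 90.0  # the entire "suite"
sys.settrace(old_trace)

# Two of the function's three executable lines ran: partial coverage.
print(f"{len(executed_lines)} line(s) of risky_discount executed")
```

A real coverage tool does the same bookkeeping across every file, then divides executed lines by total executable lines to produce the familiar percentage.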

Here are some key reasons why 100% coverage should not be the ultimate goal:

  • 100% coverage does not equal bug-free code. Fully covered code can still contain defects that your tests simply never check for. High coverage numbers often lull teams into feeling safe and confident while bugs lurk in behavior the tests execute but never actually verify. That false sense of security is one of the biggest pitfalls of leaning too heavily on coverage stats.
  • Chasing the last 10–20% of coverage requires significant effort for minimal gain. Getting from 80–90% coverage to 100% typically demands intricate, hard-to-maintain tests that exercise every corner case and uncommon flow through the code. The time spent getting to 100% could…
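The first point above can be sketched in a few lines. The `average` function below is hypothetical, but it shows the gap between covered and correct: one happy-path test executes every line, so a coverage tool would report 100%, yet an obvious defect survives.

```python
def average(numbers):
    # Both lines below are executed by the test -> 100% line coverage.
    total = sum(numbers)
    return total / len(numbers)

# The entire "suite": one happy-path assertion. Coverage says 100%,
# but average([]) still crashes with ZeroDivisionError -- a bug no
# test caught, because no test ever asked the question.
assert average([2, 4, 6]) == 4.0
```

Coverage tells you which lines ran, not whether the assertions around them were meaningful; an empty-input test would have exposed the crash without moving the coverage number at all.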

--

Sr Quality Engineer & AI Enthusiast • Writing on AI, Tech & Testing • Read my articles for free and join my newsletter: https://ryancraventech.substack.com