Why Test Code Coverage Targets are a Bad Idea

Nick Lee
5 min read · Oct 4, 2016


Image Source: NASA JPL

Code coverage reports are an excellent tool available in most testing frameworks and provide useful, actionable insights into your code. Over recent years, the emphasis on writing maintainable, well-tested code has spread far and wide, and automated testing has attained the recognition it deserves. Given the importance of testing, it is not unimaginable to find yourself in a scenario where your company superiors have decided that, in order to ensure the maintainability of the codebase for the future, any new code must have a near-perfect test coverage level of 95% (a level which I’ve previously had the displeasure of being forced to meet). At first glance, you might think that if you have 95% coverage, you can be confident that your application is well tested…right? Wrong. Coverage does not equal quality.

The Benefits of Code Coverage Reports

Let’s imagine you’ve just installed a coverage plugin for your testing framework. After you run your test suite, your colourful new coverage report informs you that a particular feature has a test coverage of about 35%. Upon seeing this low percentage, you correctly assume that something is up and examine the code in question, only to find a rather dusty looking tests folder. You vaguely recall being under so much pressure to meet deadlines that you never got round to writing the tests, and in a moment of madness you proclaimed to the rest of the office, “To hell with TDD, BDD, ATDD and testing altogether, I’ll just get it done and write the rest of the tests later. As long as the build doesn’t fail it’s all good”. But as sprints passed you by, new stories arose and priorities shifted, some tests that should not have been forgotten were lost. Tests became history. History became legend. Legend became myth.

Luckily, the test coverage reports you just generated have come to the rescue and given you a nice little prompt to go back and sort out that test-anaemic code. It turns out there was a bug lurking in there, but fortunately it hadn’t been encountered by your users. You fix the bug, add the required tests and upon rerunning your test suite, you see the coverage for this functionality has jumped up to 88%. Balance is once again restored to Middle Earth.

This example shows how test coverage reports can be useful. By drawing attention to code that may not be fully tested, they highlight areas of your application that are at risk of harbouring unforeseen bugs or falling into a poorly maintained state. Furthermore, when running end-to-end tests, some coverage reports can give less experienced team members insight into the execution path for an unfamiliar scenario. This is helpful when a feature is complicated and the code has a large number of possible execution paths (though in reality you’d hope they just go and set up debugging in their IDE properly). On a related note, I like to view tests as an excellent source of documentation, accurately describing what can and can’t be done in your application, and how it should respond in a range of business scenarios. Examining the execution path when a test is run can be really helpful when getting to grips with new code.
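
As a rough illustration of that documentation-like quality, here is a minimal JUnit 5 sketch (OrderService, placeOrder and InsufficientStockException are hypothetical stand-ins, not from any real codebase). The test name and body read like a one-line specification of a business rule:

import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class OrderServiceTest
{
    // OrderService, placeOrder and InsufficientStockException are
    // hypothetical stand-ins. The test name and body document the
    // business rule: you cannot order more items than are in stock.
    @Test
    void rejectsOrderWhenRequestedQuantityExceedsStock()
    {
        OrderService service = new OrderService(5); // 5 items in stock

        assertThrows(InsufficientStockException.class,
                () -> service.placeOrder("widget", 10));
    }
}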

So what’s the problem with test coverage targets?

The goal of test coverage targets is a noble one. By striving to ensure that every line of code is tested, you theoretically reduce the likelihood of a defect going into production unnoticed until an unfortunate customer stumbles across it. However, in reality you run the risk of becoming a slave to this number and writing a whole host of pointless tests that exist for the sole purpose of meeting the minimum coverage requirement. At this point, take a step back and think about why we even write tests.

Software testing is a set of processes aimed at investigating, evaluating and ascertaining the completeness and quality of computer software. Software testing ensures the compliance of a software product in relation with regulatory, business, technical, functional and user requirements — Techopedia

Tests make sure our application does what it’s supposed to do, in the way it’s meant to. They help us maintain our code in the long run as we add new features, and help others understand how the application functions. By writing tests in order to meet a test coverage percentage, you shift the focus of your tests away from their true purpose.

Test coverage results should be used for indicative purposes only, highlighting issues rather than dictating what tests to write. Bad test coverage is usually a symptom of badly tested code, but good test coverage certainly does not guarantee good code.
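
To see why good coverage guarantees so little, consider this minimal JUnit 5 sketch (ReportGenerator and generateReport are hypothetical). It executes every line of the method under test, so the coverage report marks the method as fully covered, yet it contains no assertions, can never fail, and can never catch a bug:

import org.junit.jupiter.api.Test;

class ReportGeneratorCoverageTest
{
    // ReportGenerator and generateReport are hypothetical stand-ins.
    // This test drives every line of generateReport, so the method
    // shows up as fully covered, yet with nothing asserted the test
    // passes no matter what the method actually does.
    @Test
    void generateReportExecutes()
    {
        new ReportGenerator().generateReport("Q3");
    }
}

The coverage report cannot tell the difference between this and a genuinely rigorous test.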

Let’s imagine another example. Say you are aiming for 90% unit test coverage, and you’re sat on 87%. So far you haven’t been unit testing your getters and setters, and by doing so you’d be able to reach 90%. Is it worth it? Opinions differ on this topic, but in my view, the answer is no. Look at the following two methods as an example:

private String name;

public String getName()
{
    return this.name;
}

public void setName(String value)
{
    this.name = value;
}

Do you really need to unit test these two methods? In all likelihood your end-to-end tests will hit this functionality anyway, and the logic is so basic that a dedicated unit test is pointless. Sure, your unit test coverage might suffer, but I won’t be losing any sleep over it. If your getters and setters are doing some crazy logic, then by all means unit test them, but be clear on why you’re doing it (hint: it shouldn’t be for test coverage reasons). Even if you believe getters and setters should always be unit tested, that belief shouldn’t rest on increasing your test coverage.
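
For the record, here is roughly what that coverage-driven test would look like (a minimal JUnit 5 sketch, assuming the getter and setter above live on a hypothetical Person class). It passes, it nudges the number towards 90%, and it tells you nothing you didn’t already know:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class PersonTest
{
    // Person is a hypothetical class containing the getName/setName
    // pair shown above. This test exists purely to lift the coverage
    // figure, not to protect any behaviour worth protecting.
    @Test
    void getNameReturnsTheValuePassedToSetName()
    {
        Person person = new Person();
        person.setName("Bilbo");
        assertEquals("Bilbo", person.getName());
    }
}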

Ultimately, if someone is giving you firm test coverage targets to hit, they’ve misunderstood what software testing is about. The following quote from Martin Fowler succinctly summarises the true purpose of test coverage:

Test coverage is a useful tool for finding untested parts of a codebase. Test coverage is of little use as a numeric statement of how good your tests are. — Martin Fowler on Test Coverage

Use test coverage to identify areas which you’ve missed when writing tests, but use your judgement, skills and knowledge to ensure that you’re testing the right things in the right way.

Closing Thoughts

Test coverage is a valuable metric if used correctly. It can offer good indications of areas of code that are lacking tests and are, as a result, more fragile and less maintainable. However, setting strict rules around test coverage targets, especially exceptionally high targets such as 95%, will most likely lead to a bloated test suite that doesn’t fulfil its purpose.

Remember: a well-tested codebase will have high test coverage, but a codebase with high test coverage isn’t always well tested.
