The real benefit of 100% code coverage

No, it isn’t just about being proud of ourselves!

--

It’s normal today to test your codebase. Any company whose business relies on software has some form of automated testing in place.

We’re all in agreement on that. Now the question is: “What should we test?”

In this article, we are not going to talk about the different types of tests. We won’t cover the differences between unit, integration, and end-to-end testing. Instead, we’re going to talk about a much more controversial topic: test coverage.

And why 100% code coverage is a goal worth setting for your team.

Isn’t it beautiful?

Good coverage does not imply good tests

Before starting, let’s be clear.

This is the most common objection from 100%-coverage detractors: covered code does not mean well-tested code.

It is better to have 50% of the project tested correctly than 100% tested poorly.

This is correct.
It’s useless, even counter-productive, to write tests for the sake of writing tests.

In this article, when we speak about covered code, we assume it’s correctly tested code.

This argument, even if it’s valid, is off topic: as a reminder, the question here is “What to test?”, not “How to test?”

Now that this is clear, we can get back to the initial topic: 100% code coverage.

A coverage limit below 100% doesn’t prompt questions

Let’s imagine a project in which the code coverage limit is set to 80%. When running the test suites, all is green and the actual code coverage is 85%. Perfect!

Now imagine we need to develop a new feature. The developer in charge does their best and writes some tests.

Despite these new tests, the code coverage drops to 84.5%.

What does this lost half-point mean?
The new feature is probably not fully tested, but is it something to worry about? To find out, we need to analyze the updated files; but since some code already wasn’t fully covered, the task is harder. Spending time on this investigation is hardly motivating when, in any case, the CI is green!

With a code coverage limit set to 80%, the information that coverage has dropped slightly is likely to be lost. Losing 0.5% is really not that much and, once again, all tests passed and the CI says everything is OK, without any warning.
So why bother?

A coverage limit set below 100% leads to two things:

  • loss of information;
  • less accountability.

A 100% code coverage is a safety net

Let’s take the same situation, but this time with a code coverage limit set to 100%. The new feature is developed the exact same way with the exact same tests, only the code coverage limit differs.
After the development of the feature, the global code coverage still decreases, but this time the new value is 99.8%: everything is red and the merge is blocked.
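With Jest, which uses Istanbul under the hood for coverage, this kind of blocking limit can be declared directly in the configuration. The article doesn’t show the project’s actual setup, so the following is only a minimal sketch of how such a threshold is typically expressed:

```javascript
// jest.config.js — a minimal sketch, not the article's actual config.
// With these thresholds at 80, a drop from 85% to 84.5% stays green
// and silent; at 100, any uncovered line fails the test run, which in
// turn turns the CI red and blocks the merge.
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      branches: 100,
      functions: 100,
      lines: 100,
      statements: 100,
    },
  },
};
```

The key point is that the threshold is enforced by the test runner itself: no human has to notice the half-point drop, the build simply fails.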

If tests are missing, the warning is crystal clear.

The warning is triggered, and we must therefore analyze why the coverage is no longer at 100%. Here, two cases are possible:

  1. We have forgotten to test an important scenario;
  2. It’s something we consider useless to test.

In the first case, we are glad to have been warned. We write the missing test, code coverage goes back to 100%, everything is green again and our application is more robust.

In the second case, rather than writing a useless test, at OpenClassrooms we prefer to ignore the corresponding lines.

Since everything is supposed to be green, files not fully covered are clearly visible.

Don’t hesitate to ignore your code!

The easy shortcut, when speaking about 100% code coverage, is to think we must test every if and every optional parameter.

No!

We have to use common sense, and only test what needs to be tested.

A 100% code coverage does not mean that 100% of lines are covered, but that 100% of the code which must be tested is actually tested.

At OpenClassrooms, this leads to multiple /* istanbul ignore next */.

Without this option to skip code, the 100% coverage rate would be counterproductive.

Of course, the idea is not to ignore everything. When we add a new ignore in the code, we have to justify it when reviewing the pull request. When a piece of code is ignored, it’s always explicitly and intentionally ignored.
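In practice, this looks something like the sketch below. The helper function and its fallback are hypothetical, but the comment syntax is Istanbul’s, and the justification lives right next to the ignore so reviewers can challenge it:

```javascript
// A minimal sketch with a hypothetical helper: the defensive fallback
// branch is deliberately excluded from coverage, and the reason is
// documented next to the ignore comment for the pull request review.
function getApiBaseUrl(env) {
  if (env && env.API_BASE_URL) {
    return env.API_BASE_URL;
  }
  /* istanbul ignore next */
  // Defensive fallback for a misconfigured environment;
  // not worth a dedicated test (agreed in PR review).
  return 'http://localhost:3000';
}
```

The uncovered line still exists, but it no longer counts against the 100% threshold, and the intent to skip it is explicit in the code itself.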

So yes, our “100% code coverage” is a falsy 100%.
We assume it.

Thanks to this “fake” 100%, we know that everything important is tested. We receive warnings when the coverage is not perfect, and ignored lines are clearly marked as such.
In short, we don’t lose any information.

So, when do you start?

That said, setting the coverage limit to 100% is not the same challenge in all projects. It’s obviously easier to set up on new or small projects. It’s another story on older or bigger projects – it can even be inappropriate.

At OpenClassrooms, on the frontend side, the 100% code coverage is set only on our small libraries, like our collection of UI components or our API client.
The main website, which will soon turn 10 years old, does not have this 100% requirement and will not any time soon, because of all its legacy code.

As always in our job, this is a matter of pragmatism. There are no rules to follow blindly; we just have to do our best according to our needs and constraints.

In short: be pragmatic!

--


Adrien Guéret
Product, Experience & Technology @ OpenClassrooms

Front-End developer, working at OpenClassrooms. Also Nintendo enthusiast :)