Unexpected Lessons From 100% Test Coverage

What I learned testing everything

Eyas
Jan 23

Conventional wisdom in the software engineering community says that striving for 100% test coverage is a fool’s errand. It won’t necessarily help you catch all bugs and it might lead you down questionable paths when writing your code.

My recent attempts at 100% test coverage showed me the answer is much more subtle. At times I was tempted to make questionable code changes just for the sake of coverage. Some of those times, I succumbed. Yet I found there’s often an enlightened way to cover a branch and make the code better for it. Blind 100% coverage can cause us to make unacceptable compromises. But if we constrain ourselves to only making the codebase better, thinking about 100% coverage can change the way we think about a codebase. The story of my 100% test coverage attempt is a story of both the good and the bad.


Last year I came across a thread from NPM creator Isaac Z. Schlueter advocating for 100% test coverage.

In that thread, Schlueter alluded to a mentality shift that developers reach once they get to 100%, and that piqued my interest.


Road to 100


The Setup

My first real run after setting things up had 78.72% test coverage. That’s not too bad, I thought.
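The post doesn’t show the exact tooling, so take the following purely as an assumption: a Jest-style setup is one common way to collect coverage for a TypeScript project, and, once you get there, to enforce the 100% bar.

```ts
// jest.config.ts: a hypothetical setup; the project's actual tooling may differ.
import type {Config} from '@jest/types';

const config: Config.InitialOptions = {
  preset: 'ts-jest',                      // compile TypeScript tests on the fly
  collectCoverage: true,
  collectCoverageFrom: ['src/**/*.ts'],   // measure the library code, not the tests
  coverageThreshold: {
    // Fail the run if coverage ever drops below 100% again.
    global: {branches: 100, functions: 100, lines: 100, statements: 100},
  },
};

export default config;
```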

The baseline tests I started with definitely covered a lot of lines of code without really exercising them, which is part of why that number was high. That in itself can be an argument that 100% test coverage is a meaningless number. Schlueter’s promise for 100% test coverage, however, is that the act of getting to that long tail can have transformative effects on how I think about my own code. I wanted to try my luck at that firsthand. If we wanted to be more confident that our covered lines are truly tested, mutation testing might do more wonders than test coverage.


Happy Times: The Low-Hanging Fruit

Looking at my coverage report, I saw swaths of untested code. For example, I had never attempted to generate a @deprecated class. OK, let's fix that. In doing so, I caught a real bug that my few unit tests had missed. I increased coverage by 9.8%, added some baseline tests for deprecation, and added some N-Triple parsing unit tests that I’d never gotten around to.
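For flavor, here’s roughly what one of those baseline tests could look like. This is an illustration only: `generateTypeScript`, the Jest-style test globals, and the specific terms in the triple are stand-ins, not the project’s real API; Schema.org does mark deprecated terms with `supersededBy`.

```ts
// Illustrative sketch: `generateTypeScript` is a stand-in for whatever entry
// point the generator exposes to turn N-Triples into TypeScript source.
import {generateTypeScript} from './generator';

it('marks superseded classes as @deprecated', async () => {
  const output = await generateTypeScript([
    // Hypothetical terms; supersededBy is how Schema.org signals deprecation.
    '<https://schema.org/OldThing> <https://schema.org/supersededBy> <https://schema.org/NewThing> .',
  ]);
  expect(output).toContain('@deprecated');
});
```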

Testing my Argparse setup showed me one of my default flag values was wrong (albeit in a harmless way).


Questionable Times

Log Tests?

Intuition would have it that a log statement should not be tested. But for good and bad reasons, I decided some baseline tests on log output might be desirable.
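As a sketch of what that machinery might look like, assuming a Jest-style runner (the article’s actual helper isn’t shown, and `withLoggedWarnings` is an illustrative name): capture console.warn during a callback and assert on what was logged.

```ts
// Capture everything logged via console.warn while `fn` runs, so tests can
// make assertions about log output. Sketch only; names are illustrative.
function withLoggedWarnings(fn: () => void): string[] {
  const logged: string[] = [];
  const spy = jest
    .spyOn(console, 'warn')
    .mockImplementation((...args: unknown[]) => {
      logged.push(args.join(' '));
    });
  try {
    fn();
  } finally {
    spy.mockRestore();
  }
  return logged;
}

it('captures warnings emitted during the callback', () => {
  const logs = withLoggedWarnings(() => {
    console.warn('Warning: something noteworthy happened.');
  });
  expect(logs.some(line => line.includes('something noteworthy'))).toBe(true);
});
```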

I used that to add a log test for the multiple comments case when a unit test would have sufficed. That’s pretty questionable — but I was caught up in my zeal!

Some level of logging-based test might be defensible insofar as (1) we’re observing changes to our user interactions, and (2) it’s documenting the warnings/limitations of our code. Maybe one can view some logging-based tests as an equivalent to a screenshot diffing UI test? Or maybe I’m really trying to explain myself. Still, I felt less bad about adding a test showing a warning when parsing a common triple than I did about that comment test.

Code golf?


Constraining Myself to Neutral and Positive “Code Golf”

What I found was that some code golf actually helped me think more clearly about the code. Even in cases where the codebase itself ended up looking about the same, I now had a new vocabulary for talking about error conditions. In other cases, covering lines of code drove me from making run-time guarantees in the code to making compile-time guarantees (definitely a positive).

Let’s walk through some examples.

Neutral: Clarifying assertions versus errors

Case Study A: Intra-function impossibility

There was a line in my code that my tests never covered. It looked something like this:
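(The snippet below is a reconstruction sketch based on the description that follows, not the verbatim original; `Class`, `wellKnownTypes`, and `buildClassMap` are simplified stand-ins.)

```ts
// Reconstruction sketch, not the project's verbatim code.
interface Class { name: string; }
const wellKnownTypes: Class[] = [{name: 'Text'}, {name: 'Number'}, {name: 'Boolean'}];

function buildClassMap(/* parsed topics elided */): Map<string, Class> {
  const classes = new Map<string, Class>();
  classes.set('Thing', {name: 'Thing'});   // an unconditional set, above the check
  for (const wk of wellKnownTypes) {       // hard-coded, never empty
    classes.set(wk.name, wk);
  }
  // ...add classes derived from the ontology's triples...
  if (classes.size === 0) {
    // The uncovered line: unreachable given the sets above.
    throw new Error('Unexpected: no classes found.');
  }
  return classes;
}
```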

As you can see from just reading this function, (classes.size === 0) will never happen. For one, there's a classes.set(K, V) call a few lines above. And we set a few other key-value pairs from the wellKnownTypes array, which is hard-coded to always have a set number of elements.

One can try to understand the point of this error: it could be to signal that none of the RDF triples we got were turned into classes (in which case we might want to compare classes.size with wellKnownTypes.length + 1 instead). Alternatively, it could be a poorly placed verification from a time when we had less confidence that we were building classes properly and had no clear concept of such well-known types.

In my case, creating a map with just the well-known types seemed fine. If the ontology is empty or missing data, we're likely to catch that at earlier or later steps. The error gives no clear context as to what's going wrong. So, in my case, the answer was to kill the check entirely.

Case Study B: Inter-function assertion

Another error I saw looked like this:
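(Again, a reconstruction sketch rather than the project’s exact code; `TypedTopic`, `IsClass`, and `toClass` are simplified stand-ins for the generator’s real types.)

```ts
interface TypedTopic { subject: string; types: string[]; }
interface Class { subject: string; properties: string[]; }
const IsClass = (t: TypedTopic) => t.types.includes('rdfs:Class');
const toClass = (t: TypedTopic): Class => ({subject: t.subject, properties: []});

function ForwardDeclareClasses(topics: TypedTopic[]): Map<string, Class> {
  const classes = new Map<string, Class>();
  for (const topic of topics) {
    if (IsClass(topic)) classes.set(topic.subject, toClass(topic));
  }
  return classes;
}

function BuildClasses(topics: TypedTopic[], classes: Map<string, Class>): void {
  for (const topic of topics) {
    if (!IsClass(topic)) continue;
    const cls = classes.get(topic.subject);
    if (!cls) {
      // The uncovered branch: ForwardDeclareClasses already added every
      // IsClass topic, so getting here would mean a bug in our own code.
      throw new Error(`Class ${topic.subject} not found in map.`);
    }
    // ...fill in the class's properties, enum values, etc....
  }
}
```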

In this case, the throw never happened (the (!cls) condition was always false). This should make sense: ForwardDeclareClasses literally checks whether a TypedTopic satisfies IsClass() and, if so, unconditionally adds it to the map. BuildClasses then asserts that every topic matching IsClass exists in the map.

One way to get test coverage for this line is to export BuildClasses and test it. But that seems like it goes against the spirit of making the codebase better. A better approach is to ask what this line is trying to do.

Interlude: Expectations, errors, and assertions

Lines that throw or assert generally fall into one of three buckets. They either:

  • are error conditions that can happen in the wild due to poor data or input,
  • shouldn’t happen, and if they did it’s a sign of a bug in our code, or
  • shouldn’t happen, and if they did it’s a sign of cosmic radiation.

I decided to differentiate these. If my test coverage report complains about an uncovered:

  • error condition — I should test it. If I can’t, I should refactor my code to make it testable;
  • assertion that might indicate a bug, then some code golf to make it always run might be in order (more on that in a bit);
  • assertion that’s totally impossible — maybe I should delete it.

I refer to #1 as an error condition — test these.

For assertions, I found that the line between #2 and #3 is often the function boundary (this isn’t always true). Cases of intra-function assertions (like Case Study A above) seem so useless that we’re better off removing them. Cases of inter-function assertions (like this case) seem useful enough to stay.

The fix

In code, I decided to distinguish the two:

  • Normal error conditions: if + throw, or similar.
  • Bug assertions: assert and assertXyz variants.

With that, I ended up with this change:
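(Here is a hypothetical sketch of the convention in action; the names are illustrative, and the real change touched the class-building code discussed above. Node’s built-in assert module stands in for whatever assertion helper the project actually uses.)

```ts
import {strict as assert} from 'assert';

// Hypothetical names; sketch of the convention only.
function buildClass(subject: string, classes: Map<string, {name: string}>) {
  // Error condition: genuinely possible with bad input, so it's an ordinary
  // `throw`, and a unit test feeds in a bad subject to cover it.
  if (!subject.startsWith('http')) {
    throw new Error(`Unexpected subject URL: ${subject}`);
  }

  // Bug assertion: ForwardDeclareClasses guarantees the entry exists, so a
  // failure here means a bug in our own code rather than bad input.
  const cls = classes.get(subject);
  assert(cls, `${subject} should have been forward-declared.`);
  return cls;
}
```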

Here, thinking about covering a line of code fundamentally helped me communicate what my code does more effectively. A lot of the “moving things around” that I had to do is very much semantic code golf (that happily happens to give a better test coverage score), but I’d like to think it’s a net positive.

Positive: Restructure code to achieve compile-time guarantees rather than run-time guarantees

I’ll be more specific: my code has a parseComment function that uses htmlparser2 to turn HTML comments into JSDoc-tagged comments. In that code, we define a new htmlparser2.Parser that handles known tags and throws on unknown tags. It looks something like this:
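(The original snippet isn’t reproduced here, so this is a reconstruction sketch of its shape; the specific tags and JSDoc mappings are illustrative.)

```ts
import {Parser} from 'htmlparser2';

// Reconstruction sketch of the original shape: known tags map to JSDoc
// equivalents, anything else throws.
function parseComment(html: string): string {
  const parts: string[] = [];
  const parser = new Parser({
    onopentag(name) {
      if (name === 'br') parts.push('\n');
      else if (name === 'a') parts.push('{@link ');
      else throw new Error(`Unknown tag: ${name}.`); // covered by a new test
    },
    ontext(text) {
      parts.push(text);
    },
    onclosetag(name) {
      if (name === 'a') parts.push('}');
      else if (name !== 'br') {
        // Never covered: an unknown tag throws in onopentag first, and an
        // end tag with no matching start tag never registers as a close.
        throw new Error(`Unknown closing tag: ${name}.`);
      }
    },
  });
  parser.write(html);
  parser.end();
  return parts.join('');
}
```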

Initially, both throw statements in the snippet above were uncovered. The one in onopentag is easy: I added a test for an unknown tag and saw it fail. Great — our test coverage now includes this line.

I couldn’t get the throw in onclosetag covered, though. If an unknown opening tag exists, the parser throws on open, and a self-closing tag counts as an open tag. Closing a never-opened tag is bad HTML, but it doesn’t actually register as a closed tag: an end tag token with no matching start tag is not a valid token.

In other words, the throw in onclosetag will never trigger.

The fix

We could simply delete the unreachable throw, and that’s kind of useful. But the nice thing about using TypeScript is that we can structure our code so that run-time guarantees are turned into compile-time guarantees.

In our case, this change made things much better. Here’s a summarized version:
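(Again a sketch of the restructured shape rather than the project’s exact code: one handler object per supported tag, so the type system forces every tag to declare both its open and close behavior.)

```ts
import {Parser} from 'htmlparser2';

interface TagHandler {
  onOpen(parts: string[]): void;
  onClose(parts: string[]): void;
}

// The compiler now requires both halves for every supported tag.
const handlers: {[tag: string]: TagHandler} = {
  br: {onOpen: parts => parts.push('\n'), onClose: () => {}},
  a: {onOpen: parts => parts.push('{@link '), onClose: parts => parts.push('}')},
};

function parseComment(html: string): string {
  const parts: string[] = [];
  const parser = new Parser({
    onopentag(name) {
      const handler = handlers[name];
      if (!handler) throw new Error(`Unknown tag: ${name}.`);
      handler.onOpen(parts);
    },
    ontext(text) {
      parts.push(text);
    },
    onclosetag(name) {
      // onopentag already vetted the tag, so there is no second, unreachable
      // "unknown tag" throw left to cover.
      handlers[name]?.onClose(parts);
    },
  });
  parser.write(html);
  parser.end();
  return parts.join('');
}
```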

By unifying tag handlers between open and close tags, a user can’t forget to add a close-tag handler, or vice versa, as the compiler defines a unified interface for what handling a tag looks like.

In general, assertions and guarantees about parallel code structures (e.g. parallel switch cases) and parallel data structures (e.g. parallel lists or maps) are tenuous. If you have two structures, code or data, that need to agree about what elements they contain, you will often benefit from rearranging them so that the agreement is structural. Google’s Testing Blog covered the data-structure version of this in its Code Health: Obsessed With Primitives? episode.


Uncovering Historical Choices and Fixes

Ideally, as one introduces fixes for edge cases, one also introduces the tests that exercise them. But if you ever skip adding a test, you might never get around to it. That is, until you look at the results of your test coverage run.

Once we see an uncovered line, it becomes a great opportunity to track why it was added and retroactively add the tests for it. Tools like git blame and git log are great resources to pinpoint the edge case in time and find the relevant commit message.

I ran into two of these cases: a parser-level term that was being skipped, and some specific enum handling. Both looked like issues in Schema.org 3.4 that were silently fixed later. In both cases, I added comments explaining the triple that causes the behavior and where to find it, and added tests exercising the case. Great!


Opportunity to Simplify Code

Here’s an example:
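(The snippet isn’t reproduced here either, so this is a reconstruction sketch of its shape, using the TypeScript compiler API and a hypothetical toTypeNode helper.)

```ts
import * as ts from 'typescript';

// Reconstruction sketch: combine a class's parent type (if any) with a type
// literal describing its own properties.
function toTypeNode(
  parentNode: ts.TypeNode | undefined,
  propLiteral: ts.TypeLiteralNode
): ts.TypeNode {
  if (parentNode && propLiteral.members.length > 0) {
    return ts.factory.createIntersectionTypeNode([parentNode, propLiteral]);
  }
  if (parentNode) {
    return parentNode;
  }
  if (propLiteral.members.length > 0) {
    return propLiteral;
  }
  // Never covered: with no parent, propLiteral always has at least an @id
  // member, so we never reach this empty type literal.
  return ts.factory.createTypeLiteralNode([]);
}
```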

Here, the tests never covered the final return, the one that builds an empty type literal. This makes sense: that branch only runs if parentNode is null and propLiteral has zero members. Except propLiteral will always have at least one member (@id) when it has no parent.

The other interesting part about this branch is that propLiteral is itself a type literal node. Therefore, if propLiteral.members.length === 0, it is equivalent to the node being returned by that fallback — an empty type literal node.

Here, the fix was simple:
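(Sticking with the same sketch: since an empty propLiteral is already an empty type literal node, the special case collapses away and we return propLiteral directly.)

```ts
import * as ts from 'typescript';

function toTypeNode(
  parentNode: ts.TypeNode | undefined,
  propLiteral: ts.TypeLiteralNode
): ts.TypeNode {
  if (parentNode && propLiteral.members.length > 0) {
    return ts.factory.createIntersectionTypeNode([parentNode, propLiteral]);
  }
  // With no parent, return propLiteral as-is; when it has no members it is
  // itself the empty type literal the old fallback used to build.
  return parentNode ?? propLiteral;
}
```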


Reflections on 100

Getting to 100% test coverage helped me:

  • fix a lot of bugs,
  • catch and fix redundancies in my code,
  • reason about my unreachable code,
  • delineate thoughtfully between error cases and assertions, and
  • thoughtfully shift run-time assertions into compile-time guarantees.

Was it worth it? One way to take a crack at this is to quantify the harm of the downsides and compare it to the gain from the upsides. Using this method, for me, getting to 100 is clearly a net positive.

The real question, however, is not only whether the downsides are worth the upsides, but whether the upsides could have been achieved without the downsides. This, I think, is where the real opposition to 100% test coverage comes from.

I think I could have achieved some of these upsides by looking at test coverage runs without obsessing over getting to 100%. Maybe most of them; I’m not sure. Yet, I find it true that:

  • obsessing over every single uncovered line helped me think about reachability in a way I wouldn’t have if I just glanced at an unreached line;
  • some of the interesting cases outlined above might not have been caught in a sea of red.

To me, a lot of the interesting reflections I’ve had about the code, and a lot of the interesting changes I made to it, would have been hard to spot and think about if I wasn’t already at 95% coverage.

Ultimately, my sense is that 100% test coverage is definitely a useful exercise to do at least once. It’s probably also a useful exercise to do more than once. For me and schema-dts, I’ll be keeping it at 100% test coverage. How can I not — after all, 100 is just a really, really nice number!
