Understanding JavaScript Code Coverage
Part 2: Tests & Reporting
This is the second of a two-post series in which we try to understand how code coverage is measured and reported in JavaScript by building a simple code coverage tool ourselves. You can find the first post here and the reference code for this exercise here.
By the end of the last post, we had built an instrumenter. It takes our single-file source program and modifies it to add counters. These counters let us measure statement coverage when our tests run. The next step is to make our tests use our instrumented source code.
Test Suite
Let us assume that the following file comprises the entire test suite for our source program. Our goal now is to measure its code coverage.
Integration
Integrating a code coverage tool directly with a test framework makes it really easy for developers to start measuring coverage in their tests.
Istanbul, the tool that has largely been an inspiration (and a reference) for this blog post series, actually manages to do this in a framework-agnostic fashion by using a couple of low-level JavaScript packages. It is an interesting approach that merits a post of its own. We are, instead, going to pick a test framework and rely on the extension points that it provides to integrate with it.
I chose Mocha, a fairly popular JavaScript test framework.
To be able to measure code coverage without modifying the tests, there are two things that we need from the framework:
- Allow for instrumentation of the source code imported by the tests.
- Allow for the collected coverage information to be shipped elsewhere.
Mocha provides both.
Let us first try to run our test-suite:
» mocha --reporter spec

basic test
✓ must work as expected when the input is right
✓ must throw when percentages don't add up to 1
✓ must throw when the amount is negative

3 passing (13ms)
Things look good. The next step is to make it use our instrumented source.
Compiler
A mocha compiler plugin allows for pre-processing source files imported in tests. We define such a plugin below:
In this plugin, we explicitly force the source code of our program through our instrumenter. We let all other files pass through untouched.
Now, let us run the tests again after asking mocha to use our compiler:
» mocha --reporter spec --compilers js:bin/compiler.js

basic test
✓ must work as expected when the input is right
✓ must throw when percentages don't add up to 1
✓ must throw when the amount is negative

3 passing (13ms)
No difference! This shouldn’t surprise us as we know that the instrumentation process does not (and should not) affect the behavior of the source program. However, the counters should have silently done their work under the hood. So, the next step is to stash the collected coverage information somewhere.
Reporter
A mocha reporter plugin is typically used to visualize the test results. For example, our previous runs used the spec reporter. While the visualization function of the reporter is of little concern to us, it does provide a very nifty hook that we need: the end event, which is fired after all tests have run.
In this plugin, we do two things:
- We inherit from the Spec reporter. This will ensure that we retain the default behavior of a typical test reporter without re-implementing any of the logic ourselves.
- When the end event is fired, i.e. after all tests have run, we read the collected coverage information from the program (remember the __coverage__ global variable that our instrumenter added?) and save it to disk as a trace file in a machine-readable format (which we will cover in the next section).
Now, let us run the tests again after asking mocha to use our reporter:
» mocha --reporter reporter.js --compilers js:bin/compiler.js

basic test
✓ must work as expected when the input is right
✓ must throw when percentages don't add up to 1
✓ must throw when the amount is negative

3 passing (13ms)

» ls -1 lcov.info
lcov.info
Note that the test results have been displayed the same way as before. So, we have managed to mimic the behavior of the previous reporter from our own. More importantly, we now have a file, lcov.info, with the coverage data ready for analysis.
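For reference, an LCOV trace file is plain text. A minimal file for a single source file might look like the following (the path, line numbers, and hit counts here are invented for illustration):

```
SF:src/basic.js
DA:1,1
DA:4,3
DA:7,0
LF:3
LH:2
end_of_record
```

SF names the source file, each DA record pairs a line number with its hit count, LF and LH give the totals of instrumented and hit lines, and end_of_record closes the section for that file.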
Visualization
The final step is to summarize and visualize the coverage data. Code coverage tools generally ship with their own reporters. Instead of writing our own, we are going to let LCOV's genhtml tool handle this for us. The trace file from the previous section is in a format that is understood by genhtml.
» genhtml -o html/ ./lcov.info

Overall coverage rate:
lines......: 92.9% (13 of 14 lines)
functions..: no data found

» open html/index.html
And voila!
Summary
In this blog post series, I have tried to lay down the basic skeleton of a simple, functional code coverage tool. I hope that you found it useful.