The 100% Code Coverage Myth
There’s a lot of advice around the internet right now saying that 100% coverage is not a worthwhile goal.
I strongly disagree.
Usually, code being hard to test is a sign that it needs to be refactored.
I get it. A few years ago, I sucked at testing. I thought it was just something that would make me move more slowly.
It simply wasn’t a thing that people did very often when I started coding. If it was, it was often a separate QA team that was responsible. A few years back though it became a real hot topic. Interviews started expecting candidates to know how to write tests, and more organizations were pushing it from the top down as a quality initiative.
I always strive to be at the top of my game, and walking into interviews saying “testing isn’t really my strong suit” was no longer a good look, so I decided I was going to get 100% coverage on all of my code from then on.
At the time, I wasn’t really sure what benefits I’d get out of it, or if there really were any.
Now, I honestly wouldn’t go back. When something breaks in a code base with 100% coverage, it is very likely your tests will tell you exactly where and how.
This isn’t to say unit testing is all you need. It isn’t. But leaving code untested is not a good option in my opinion either.
Come back with me, to a time when I didn’t believe in the benefits of test coverage either.
Part 1: Learning the Lingo
At the time, the tools of the trade were a combination of mocha, sinon, and chai. Mocha was the test runner, sinon provided the ability to create “mocks” and “spies”, and chai was an assertion library that let you write assertions in a human-friendly manner.
I basically had no idea what any of this meant. Before I could be effective, the first thing to do was learn the language.
So, first things first: what the hell is a spy or a mock?
The first thing that comes to mind is James Bond or Ethan Hunt. That is definitely not what we are talking about here, though it isn’t a terrible metaphor.
After reading some documentation I eventually learned that a spy is a function that has been modified by a testing framework to provide meta information about how it has been used. It spies on it. Kinda like how people could spy on you with Apple’s recent FaceTime Bug. So kinda like James Bond.
A mock is similar to a spy, but it has been modified even more. As well as keeping track of how a particular function has been used, it also changes its behavior to be predictable.
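To make that concrete, here is a minimal sketch of the difference using sinon (the logger and api objects are just illustrative; also note that sinon technically calls this kind of mock a “stub”, and its mock API layers expectations on top, but the idea is the same):

const sinon = require('sinon')

// A spy wraps an existing function and records how it was called
const logger = { log: message => console.log(message) }
const logSpy = sinon.spy(logger, 'log')

logger.log('hello')
logSpy.calledOnce          // true
logSpy.calledWith('hello') // true

// A stub/mock goes further: it also replaces the behavior with something predictable
const api = { fetchUser: id => { /* imagine a real network call here */ } }
sinon.stub(api, 'fetchUser').returns({ id: 1, name: 'Pat' })

api.fetchUser(1) // { id: 1, name: 'Pat' }, no network involved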
I also learned there are several types of testing, including but not limited to the three most common: Unit Testing, Integration Testing, and E2E Testing.
When we are “unit testing”, that means we need to be able to break down our code into individual units. Anything outside of that particular unit is a candidate to be mocked, such as other functions or entire modules. Jest is my tool of choice for unit testing. Unit testing is typically the only type of testing where coverage is measured.
When we are Integration Testing, we are testing the integration of our software with other pieces of software, such as a test that passes a message through Kafka to our service and then checks that the result can be found in the database afterward. I also usually reach for Jest when creating Integration tests.
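To sketch what that Kafka example might look like as a Jest test (the sendMessage and findOrderById helpers here are hypothetical, purely to show the shape of an integration test, not code from any real project):

// order-consumer.integration.test.js (illustrative sketch)
const { sendMessage } = require('./helpers/kafka')  // hypothetical helper that produces to a topic
const { findOrderById } = require('./helpers/db')   // hypothetical helper that queries the database

describe('order consumer (integration)', () => {
  it('persists an order received from Kafka', async () => {
    const order = { id: 'test-order-1', total: 42 }

    // publish a real message to the topic our service consumes from
    await sendMessage('orders', order)

    // give the service a moment to consume the message and write to the database
    await new Promise(resolve => setTimeout(resolve, 2000))

    const saved = await findOrderById('test-order-1')
    expect(saved.total).toBe(42)
  })
})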
E2E Testing is kinda like a bot using your app. You program it to load the site in a browser, click things, and ensure everything works as expected from a user’s perspective. Cypress is my favorite tool in this area, but that didn’t exist back when I was learning. Selenium was the big player of the day, and to be honest, it was a big enough domain that I was happy to let a QA Automation Engineer handle that part.
With new knowledge in hand now came the hard part: putting it to practice.
I spent several months making sure every single piece of code I wrote had test coverage. At first, I admit, it was quite difficult. I spent a lot of time on StackOverflow looking up mocking and spying examples. By the end, I found that the amount of confidence I had in my code was substantially higher.
Another benefit was that when something broke, my tests would usually tell me exactly where. When other engineers made changes to code I had written, I could review them much more quickly. When important APIs changed, people were alerted via a failing test and either quickly updated it or gave their changes a second thought.
More than that, I started writing better code. I learned that if something is hard to test, or hard to fully cover, it usually means the code wasn’t written very well and could be refactored, resulting in more maintainable and flexible APIs. To that end, trying to reach 100% coverage encouraged me to extract anonymous functions into named functions, and to apply partial application and dependency injection in many refactors.
After getting integration tests down as well, I even gave up GitFlow for trunk-based development. Committing to master was something I thought was crazy a few years back, and now I do it on a team of nearly 15 engineers every day.
Part 2: Lead by Example
Around the time I was getting pretty confident with my new testing stack, another tool was introduced to the market which many claimed made unit testing even simpler: Jest.
Jest is an automated testing framework pioneered by Facebook.
Jest does a really awesome job condensing the previous libraries I had used into a single coherent framework that is a test-runner, as well as a set of APIs for mocking, spying, and assertions. Beyond providing a single library with all your unit-testing needs, Jest does a great job at simplifying some of the concepts and patterns as well with powerful and simple mocking.
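To give a feel for how much it condenses, here is roughly the same test written both ways (greet is a hypothetical function under test, and this is only a sketch):

// with mocha as the runner, sinon for spies, and chai for assertions
const { expect } = require('chai')
const sinon = require('sinon')

describe('greet (mocha/sinon/chai)', () => {
  it('calls the logger with a greeting', () => {
    const logger = { log: sinon.spy() }
    greet('Pat', logger) // hypothetical function under test
    expect(logger.log.calledWith('Hello, Pat')).to.equal(true)
  })
})

// the same test with Jest: runner, mocks, and assertions in one framework
describe('greet (jest)', () => {
  it('calls the logger with a greeting', () => {
    const logger = { log: jest.fn() }
    greet('Pat', logger) // hypothetical function under test
    expect(logger.log).toBeCalledWith('Hello, Pat')
  })
})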
Because I think Jest is simpler to use and to understand, I’m going to stick with Jest for examples.
If you’re just joining me on this article, that’s fine; what you’ve read so far is meant to stand on its own. However, I’ve been documenting the process of building a React application using Parcel with Streaming SSR, and this article is going to continue where the last part left off.
In my last article, linked below, I showed how to set up Jest with code coverage and said in the next article I’d show how to get the coverage up to 100%.
I figured the best way to demonstrate 100% coverage is showing how to get there. Throughout the journey we will likely discover several places where code can be refactored to be more testable. So, I’ll continue where I left off, and get coverage of this project to 100%, and show what refactors to make, where to use partial application and dependency injection, and what to mock along the way when coverage is difficult to get.
So… Let’s get started. Here’s the project I’ll be working on:
The project has a react app in the app folder, and a server folder which contains the SSR logic. Let’s start with the application tests.
Application Tests
In the last article, after configuring Jest, I got started with a simple test for a simple component. I have several React components that are equally as simple.
This is one of the reasons that functional components are really powerful. Functions are easier to test than classes. They don’t have state — instead they have inputs and outputs. Given input X, they have output Y. When there is state it can be stored externally to the component.
The new React Hooks API is nice in this regard because it encourages making functional components, and has an easily mockable mechanism to provide state to the component. Redux provides the same benefit in regards to testing.
Let’s start by knocking out the rest of the simple components. We basically just need to render them and maybe check that some important pieces of info are rendered.
I usually put code inline in the articles, but there’s not really anything new in these tests, so instead I’ve decided to link to the actual commits and only show one full example:
Let’s take a look at the About page:
import React from 'react'
import Helmet from 'react-helmet-async'
import Page from '../components/Page'

const About = () => (
  <Page>
    <Helmet>
      <title>About Page</title>
    </Helmet>
    <div>This is the about page</div>
  </Page>
)

export default About
And it’s tests:
import React from 'react'
import { shallow } from 'enzyme'
import About from 'app/pages/About.jsx'

describe('app/pages/About.jsx', () => {
  it('renders About page', () => {
    expect(About).toBeDefined()
    const tree = shallow(<About />)
    expect(tree.find('Page')).toBeDefined()
    expect(
      tree
        .find('Helmet')
        .find('title')
        .text()
    ).toEqual('About Page')
    expect(tree.find('div').text()).toEqual('This is the about page')
  })
})
All of the tests in the following commits are very similar:
As you can see, just making sure our component renders is enough for these components to get 100% coverage. More detailed interactions are better left to E2E tests, which is out of scope for the current article.
The next component, app/App.jsx, is slightly more complex. After writing a rendering test, you’ll notice there is still an unreachable anonymous function that is used in the Router to render the About page.
In order to access and test this, we want to make a small refactor, extracting the function to a named function so we can export it and test it out.
Now it is easy to test:
Because we have another set of tests for the About page above, we’ll leave its more specific tests to live there, and just need to check that it renders here.
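I won’t reproduce the whole commit here, but the shape of the refactor and its test is roughly the following; the renderAbout name is illustrative, so check the linked commits for the real code:

// app/App.jsx: extract the Router's inline arrow function to a named,
// exported function so it can be covered directly (illustrative sketch)
//
//   export const renderAbout = props => <About {...props} />
//   ...
//   <Route path="/about" render={renderAbout} />

// app/App.test.jsx (sketch)
import React from 'react'
import { shallow } from 'enzyme'
import App, { renderAbout } from 'app/App.jsx'

describe('app/App.jsx', () => {
  it('renders the App', () => {
    expect(shallow(<App />).exists()).toBe(true)
  })

  it('renders the About page via the named render function', () => {
    expect(shallow(renderAbout()).exists()).toBe(true)
  })
})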
And with that, the only file left to test in our application is app/client.js, and then we can move on to finishing up the server-side tests.
Let’s take a look at the code:
import React from 'react'
import ReactDOM from 'react-dom'
import { HelmetProvider } from 'react-helmet-async'
import { BrowserRouter } from 'react-router-dom'
import { rehydrateMarks } from 'react-imported-component'
import importedComponents from './imported' // eslint-disable-line
import App from './App'

const element = document.getElementById('app')
const app = (
  <HelmetProvider>
    <BrowserRouter>
      <App />
    </BrowserRouter>
  </HelmetProvider>
)

// In production, we want to hydrate instead of render
// because of the server-rendering
if (process.env.NODE_ENV === 'production') {
  // rehydrate the bundle marks
  rehydrateMarks().then(() => {
    ReactDOM.hydrate(app, element)
  })
} else {
  ReactDOM.render(app, element)
}

// Enable Hot Module Reloading
if (module.hot) {
  module.hot.accept()
}
The first thing I notice is that there is a reliance on global variables: document, process, and module. The second thing is that nothing is exported, so it may be hard to run multiple times with different inputs.
We can remedy this with a few refactors:
- Wrap up all of the logic into a function that we can export. This function will accept an options object with all of its dependencies. This is called dependency injection. It will allow us to easily pass along mock versions of a bunch of things if we so choose.
- We have an anonymous function in production mode after rehydrating which should be extracted to a named function.
We also will want to mock a few of the external modules: react-dom, react-imported-component, and app/imported.js. Modules are a form of dependency injection themselves.
First here’s the newly refactored file with the changes in bold:
import React from 'react'
import ReactDOM from 'react-dom'
import { HelmetProvider } from 'react-helmet-async'
import { BrowserRouter } from 'react-router-dom'
import { rehydrateMarks } from 'react-imported-component'
import importedComponents from './imported' // eslint-disable-line
import App from './App'

// use "partial application" to make this easy to test
export const hydrate = (app, element) => () => {
  ReactDOM.hydrate(app, element)
}

export const start = ({
  isProduction,
  document,
  module,
  hydrate
}) => {
  const element = document.getElementById('app')
  const app = (
    <HelmetProvider>
      <BrowserRouter>
        <App />
      </BrowserRouter>
    </HelmetProvider>
  )

  // In production, we want to hydrate instead of render
  // because of the server-rendering
  if (isProduction) {
    // rehydrate the bundle marks from imported-components,
    // then rehydrate the react app
    rehydrateMarks().then(hydrate(app, element))
  } else {
    ReactDOM.render(app, element)
  }

  // Enable Hot Module Reloading
  if (module.hot) {
    module.hot.accept()
  }
}

const options = {
  isProduction: process.env.NODE_ENV === 'production',
  document: document,
  module: module,
  hydrate
}

start(options)
Now we can actually access and test start with a variety of options, as well as test hydrate independently of the startup logic.
The tests are a bit long, so I’ve put comments inline to explain what is going on. Here are tests for the file:
import React from 'react'
import fs from 'fs'
import path from 'path'
import { start, hydrate } from 'app/client'
import { JSDOM } from "jsdom"

jest.mock('react-dom')
jest.mock('react-imported-component')
jest.mock('app/imported.js')

// mock DOM with actual index.html contents
const pathToIndex = path.join(process.cwd(), 'app', 'index.html')
const indexHTML = fs.readFileSync(pathToIndex).toString()
const DOM = new JSDOM(indexHTML)
const document = DOM.window.document

// this doesn't contribute to coverage, but we
// should know if it changes as it would
// cause our app to break
describe('app/index.html', () => {
  it('has element with id "app"', () => {
    const element = document.getElementById('app')
    expect(element.id).toBe('app')
  })
})

describe('app/client.js', () => {
  // Reset counts of mock calls after each test
  afterEach(() => {
    jest.clearAllMocks()
  })

  describe('#start', () => {
    it('renders when in development and accepts hot module reloads', () => {
      // this is mocked above, so require gets the mock version
      // so we can see if its functions are called
      const ReactDOM = require('react-dom')
      // mock module.hot
      const module = {
        hot: {
          accept: jest.fn()
        }
      }
      // mock options
      const options = {
        isProduction: false,
        module,
        document
      }
      start(options)
      expect(ReactDOM.render).toBeCalled()
      expect(module.hot.accept).toBeCalled()
    })

    it('hydrates when in production and does not accept hot module reloads', () => {
      const ReactDOM = require('react-dom')
      const importedComponent = require('react-imported-component')
      importedComponent.rehydrateMarks.mockImplementation(() => Promise.resolve())
      // mock module.hot
      const module = {}
      // mock rehydrate function
      const hydrate = jest.fn()
      // mock options
      const options = {
        isProduction: true,
        module,
        document,
        hydrate
      }
      start(options)
      expect(ReactDOM.render).not.toBeCalled()
      expect(hydrate).toBeCalled()
    })
  })

  describe('#hydrate', () => {
    it('uses ReactDOM to hydrate given element with an app', () => {
      const ReactDOM = require('react-dom')
      const element = document.getElementById('app')
      const app = (<div></div>)
      const doHydrate = hydrate(app, element)
      expect(typeof doHydrate).toBe('function')
      doHydrate()
      expect(ReactDOM.hydrate).toBeCalledWith(app, element)
    })
  })
})
Now when we run our tests, we should have 100% coverage of the app folder, aside from app/imported.js, which is a generated file and doesn’t make sense to test as it could be generated differently in future versions.
Let’s update our jest config to ignore it from coverage statistics, and check out the results.
In jest.config add:
"coveragePathIgnorePatterns": [
"<rootDir>/app/imported.js",
"/node_modules/"
]
Now when we run npm run test we get the following results.
Something I want to point out: while I’m developing tests, I’m usually using “watch” mode, so tests are automatically re-run as they change.
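Concretely, that just means keeping a watch script next to the regular test script in package.json (your exact scripts may differ slightly; the point is the --watch flag):

"scripts": {
  "test": "jest --coverage",
  "test:watch": "jest --watch"
}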
With application tests done, let’s move on to the server.
Server Tests
In the previous article I wrote tests for one application file, as well as one server file, so we already have tests for server/index.js. Now we need to test the three remaining files in server/lib.
Let’s start with server/lib/client.js:
import fs from 'fs'
import path from 'path'
import cheerio from 'cheerio'

export const htmlPath = path.join(process.cwd(), 'dist', 'client', 'index.html')
export const rawHTML = fs.readFileSync(htmlPath).toString()

export const parseRawHTMLForData = (template, selector = '#js-entrypoint') => {
  const $template = cheerio.load(template)
  let src = $template(selector).attr('src')
  return {
    src
  }
}

const clientData = parseRawHTMLForData(rawHTML)

const appString = '<div id="app">'
const splitter = '###SPLIT###'
const [startingRawHTMLFragment, endingRawHTMLFragment] = rawHTML
  .replace(appString, `${appString}${splitter}`)
  .split(splitter)

export const getHTMLFragments = ({ drainHydrateMarks }) => {
  const startingHTMLFragment = `${startingRawHTMLFragment}${drainHydrateMarks}`
  return [startingHTMLFragment, endingRawHTMLFragment]
}
First off, I’ve noticed there’s a pretty big block of code left over from a previously abandoned strategy that isn’t even used in the project: everything from export const parseRawHTMLForData through const clientData.
I’m going to start by deleting that. The less code there is, the fewer places bugs can exist. There are also a couple of exports I never made use of, which can stay private to the module.
Here’s the updated file:
import fs from 'fs'
import path from 'path'

const htmlPath = path.join(process.cwd(), 'dist', 'client', 'index.html')
const rawHTML = fs.readFileSync(htmlPath).toString()

const appString = '<div id="app">'
const splitter = '###SPLIT###'
const [startingRawHTMLFragment, endingRawHTMLFragment] = rawHTML
  .replace(appString, `${appString}${splitter}`)
  .split(splitter)

export const getHTMLFragments = ({ drainHydrateMarks }) => {
  const startingHTMLFragment = `${startingRawHTMLFragment}${drainHydrateMarks}`
  return [startingHTMLFragment, endingRawHTMLFragment]
}
It looks like one test should probably do it for this one. However, there’s a slight hiccup in the plan: this file depends on the build having been run first, as it reads in the generated build output.
Technically this makes sense, because you’d never try to render the app on the server without having a built app to render.
Given that constraint, I’d say it’s OK, and it probably isn’t worth the effort to refactor, given we can just make sure our pipeline calls build before test. If we wanted really pure unit isolation we might consider refactoring a bit more, since technically the whole application is a dependency of the SSR and could be mocked. On the other hand, using the actual build is probably more useful anyway. You’ll frequently encounter trade-offs like this throughout the process of writing tests.
With that being said, here is the test to get full coverage for this module:
import { getHTMLFragments } from 'server/lib/client.js'

describe('client', () => {
  it('exists', () => {
    const drainHydrateMarks = '<!-- mock hydrate marks -->'
    const [start, end] = getHTMLFragments({ drainHydrateMarks })
    expect(start).toContain('<head>')
    expect(start).toContain(drainHydrateMarks)
    expect(end).toContain('script id="js-entrypoint"')
  })
})
And the commits: fix: remove unused code for parsing template, test: server/lib/client tests.
Next, server/lib/server.js is quite tiny, so let’s knock that one out. Here is its code to refresh your memory, or if you’re just joining us now:
import express from 'express'

export const server = express()
export const serveStatic = express.static
And the tests:
import express from 'express'
import { server, serveStatic } from 'server/lib/server.js'

describe('server/lib/server', () => {
  it('should provide server APIs to use', () => {
    expect(server).toBeDefined()
    expect(server.use).toBeDefined()
    expect(server.get).toBeDefined()
    expect(server.listen).toBeDefined()
    expect(serveStatic).toEqual(express.static)
  })
})
Seeing as we are basically just deferring all the responsibility to express, and we expect express to provide this contract, we can simply make sure it does; it doesn’t really make sense to go beyond this.
Finally, we have only one more file to test: server/lib/ssr.js.
Here’s our ssr module:
import React from 'react'
import { renderToNodeStream } from 'react-dom/server'
import { HelmetProvider } from 'react-helmet-async'
import { StaticRouter } from 'react-router-dom'
import { ServerStyleSheet } from 'styled-components'
import { printDrainHydrateMarks } from 'react-imported-component'
import log from 'llog'
import through from 'through'
import App from '../../app/App'
import { getHTMLFragments } from './client'
// import { getDataFromTree } from 'react-apollo';

export default (req, res) => {
  const context = {}
  const helmetContext = {}

  const app = (
    <HelmetProvider context={helmetContext}>
      <StaticRouter location={req.originalUrl} context={context}>
        <App />
      </StaticRouter>
    </HelmetProvider>
  )

  try {
    // If you were using Apollo, you could fetch data with this
    // await getDataFromTree(app);

    const sheet = new ServerStyleSheet()
    const stream = sheet.interleaveWithNodeStream(
      renderToNodeStream(sheet.collectStyles(app))
    )

    if (context.url) {
      res.redirect(301, context.url)
    } else {
      const [startingHTMLFragment, endingHTMLFragment] = getHTMLFragments({
        drainHydrateMarks: printDrainHydrateMarks()
      })
      res.status(200)
      res.write(startingHTMLFragment)
      stream
        .pipe(
          through(
            function write (data) {
              this.queue(data)
            },
            function end () {
              this.queue(endingHTMLFragment)
              this.queue(null)
            }
          )
        )
        .pipe(res)
    }
  } catch (e) {
    log.error(e)
    res.status(500)
    res.end()
  }
}
It’s a bit long, and there are a few paths to execute. I do want to make a couple small refactors that will make isolation a bit easier, such as extracting the logic to generate the app out to a separate function, and using partial application to be able to inject the application stream renderer so we can easily mock some redirects.
Also, write and end are a bit tough to get to, so we can pull those out higher using partial application as well.
Here’s an updated version:
import React from 'react'
import { renderToNodeStream } from 'react-dom/server'
import { HelmetProvider } from 'react-helmet-async'
import { StaticRouter } from 'react-router-dom'
import { ServerStyleSheet } from 'styled-components'
import { printDrainHydrateMarks } from 'react-imported-component'
import log from 'llog'
import through from 'through'
import App from '../../app/App'
import { getHTMLFragments } from './client'
// import { getDataFromTree } from 'react-apollo';

const getApplicationStream = (originalUrl, context) => {
  const helmetContext = {}
  const app = (
    <HelmetProvider context={helmetContext}>
      <StaticRouter location={originalUrl} context={context}>
        <App />
      </StaticRouter>
    </HelmetProvider>
  )

  const sheet = new ServerStyleSheet()
  return sheet.interleaveWithNodeStream(
    renderToNodeStream(sheet.collectStyles(app))
  )
}

export function write (data) {
  this.queue(data)
}

// partial application with ES6 is quite succinct
// it just means a function which returns another function
// which has access to values from a closure
export const end = endingHTMLFragment =>
  function end () {
    this.queue(endingHTMLFragment)
    this.queue(null)
  }

export const ssr = getApplicationStream => (req, res) => {
  try {
    // If you were using Apollo, you could fetch data with this
    // await getDataFromTree(app);

    const context = {}
    const stream = getApplicationStream(req.originalUrl, context)

    if (context.url) {
      return res.redirect(301, context.url)
    }

    const [startingHTMLFragment, endingHTMLFragment] = getHTMLFragments({
      drainHydrateMarks: printDrainHydrateMarks()
    })

    res.status(200)
    res.write(startingHTMLFragment)
    stream.pipe(through(write, end(endingHTMLFragment))).pipe(res)
  } catch (e) {
    log.error(e)
    res.status(500)
    res.end()
  }
}

const defaultSSR = ssr(getApplicationStream)

export default defaultSSR
Here’s a link to look at the diffs in Github: chore: refactor ssr to break it up / make it easier to read, and chore: refactor ssr more.
Now let’s write some tests. We’ll need to set the jest-environment for this file specifically to node, otherwise the styled-components portion will not work.
/**
 * @jest-environment node
 */
import defaultSSR, { ssr, write, end } from 'server/lib/ssr.js'

jest.mock('llog')

const mockReq = {
  originalUrl: '/'
}

const mockRes = {
  redirect: jest.fn(),
  status: jest.fn(),
  end: jest.fn(),
  write: jest.fn(),
  on: jest.fn(),
  removeListener: jest.fn(),
  emit: jest.fn()
}

describe('server/lib/ssr.js', () => {
  describe('ssr', () => {
    it('redirects when context.url is set', () => {
      const req = Object.assign({}, mockReq)
      const res = Object.assign({}, mockRes)
      const getApplicationStream = jest.fn((originalUrl, context) => {
        context.url = '/redirect'
      })
      const doSSR = ssr(getApplicationStream)
      expect(typeof doSSR).toBe('function')
      doSSR(req, res)
      expect(res.redirect).toBeCalledWith(301, '/redirect')
    })

    it('catches error and logs before returning 500', () => {
      const log = require('llog')
      const req = Object.assign({}, mockReq)
      const res = Object.assign({}, mockRes)
      const getApplicationStream = jest.fn((originalUrl, context) => {
        throw new Error('test')
      })
      const doSSR = ssr(getApplicationStream)
      expect(typeof doSSR).toBe('function')
      doSSR(req, res)
      expect(log.error).toBeCalledWith(Error('test'))
      expect(res.status).toBeCalledWith(500)
      expect(res.end).toBeCalled()
    })
  })

  describe('defaultSSR', () => {
    it('renders app with default SSR', () => {
      const req = Object.assign({}, mockReq)
      const res = Object.assign({}, mockRes)
      defaultSSR(req, res)
      expect(res.status).toBeCalledWith(200)
      expect(res.write.mock.calls[0][0]).toContain('<!DOCTYPE html>')
      expect(res.write.mock.calls[0][0]).toContain(
        'window.___REACT_DEFERRED_COMPONENT_MARKS'
      )
    })
  })

  describe('#write', () => {
    it('write queues data', () => {
      const context = {
        queue: jest.fn()
      }
      const buffer = Buffer.from('hello')
      write.call(context, buffer)
      expect(context.queue).toBeCalledWith(buffer)
    })
  })

  describe('#end', () => {
    it('end queues endingFragment and then null to end stream', () => {
      const context = {
        queue: jest.fn()
      }
      const endingFragment = '</html>'
      const doEnd = end(endingFragment)
      doEnd.call(context)
      expect(context.queue).toBeCalledWith(endingFragment)
      expect(context.queue).toBeCalledWith(null)
    })
  })
})
As this file was a bit more complex than some of the others it took a few more tests to hit all of the branches. Each function is wrapped in its own describe block for clarity.
Here is the commit on Github: test: ssr unit tests.
Now, when we run our tests we have 100% coverage!
Finally, before wrapping things up, I’m going to make a small change to my jest.config to enforce 100% coverage. Maintaining coverage is much easier than getting to it the first time. Many of the modules we tested will hardly ever change.
"coverageThreshold": {
"global": {
"branches": 100,
"functions": 100,
"lines": 100,
"statements": 100
}
},
And done! Here’s the commit on Github: chore: require 100% coverage.
Conclusion
My goal for this article was to demonstrate the techniques needed to refactor your code and isolate units using mocks and dependency injection, making tough-to-test code easy to reach, and to discuss some of the merits of reaching 100% coverage. Also, it’s a lot easier if you practice TDD from the start.
I’m a firm believer that if 100% coverage is hard to reach it’s because code needs to be refactored.
In many cases an E2E test is going to be a better test for certain things. A Cypress.io suite on top of this which loads the app and clicks around would go a long way in increasing our confidence even further.
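For a taste of what that could look like, a minimal Cypress spec might be something like the following (a sketch, not part of this project; it assumes a nav link labeled “About” and uses the text rendered by the About page above):

// cypress/integration/about.spec.js (illustrative sketch)
describe('About page', () => {
  it('navigates to the About page from the home page', () => {
    cy.visit('/')
    cy.contains('About').click()
    cy.url().should('include', '/about')
    cy.contains('This is the about page')
  })
})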
I believe working in a codebase that has 100% coverage does a great job of increasing the confidence you have in each release, and therefore increasing the velocity at which you can make changes and detect breaking ones.
As always, if you’ve found this useful, please leave some claps, follow me, leave a star on the GitHub project, and/or share on social networks!
In the next part, coming soon, we will add a production ready Dockerfile, and explore how using nothing but another Dockerfile we can alternatively package our application as a static site served with Nginx, and some tradeoffs between the two approaches.
Best,
Patrick Lee Scott
Check out the other articles in this series! This was Part 4.