Introducing Checkr’s Integration testing workflow and OpenMock

Zhuojie Zhou
Published in Checkr Engineering
Oct 9, 2018 · 5 min read

“…good (and *fast*) integration testing (both local and remote) as well as good instrumentation often serve better than striving to achieve 100% unit test coverage.” — Cindy Sridharan

Who doesn’t like integration tests?

In the world of microservices, teams at Checkr are developing increasingly complex systems that unit tests alone cannot reliably cover. For example, we have many state-machine events that flow between RabbitMQ and Kafka, and covering the whole report lifecycle with unit tests alone is an anti-pattern. To meet these challenges, we developed a new integration testing workflow and our open source project, OpenMock.

Integration Test Workflow

We divided our integration tests into the following steps.

  • Set up the remote Docker engine. All integration tests run in Docker containers on CircleCI. The remote Docker engine is a great feature that enables us to run any Docker command, and even docker-compose, inside the container.
  • Set up infra. This step spins up all the infrastructure dependencies, e.g. MySQL, Kafka, RabbitMQ, MongoDB, and Redis. For simplicity, we created a single docker-compose.yml file for all infra dependencies (see the sketch after this list).
  • Set up services. Web servers, Kafka consumers, and RabbitMQ workers are then started as the components of the service under test.
  • Set up mocks. To close the loop of a request lifecycle, we need to mock some dependencies. We use OpenMock to simplify this step.
  • Run tests. We run a small set of tests that assert what we expect from the APIs.
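As a rough sketch of the infra step, a docker-compose.yml along these lines would bring up the dependencies. The images, versions, and settings below are illustrative placeholders, not our actual configuration:

version: '3'
services:
  mysql:
    image: mysql:5.7                      # placeholder version
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: "yes"   # fine for throwaway test containers
  redis:
    image: redis:4-alpine
  mongo:
    image: mongo:3.6
  rabbitmq:
    image: rabbitmq:3-management
  zookeeper:
    image: wurstmeister/zookeeper
  kafka:
    image: wurstmeister/kafka
    depends_on:
      - zookeeper
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_HOST_NAME: kafka

The services and mocks from the later steps can live in the same file (or a second compose file), so a single docker-compose up brings up the whole test stack.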

Here’s the workflow, which runs in parallel with unit tests on every pull request:

Integration tests on CircleCI
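For illustration only (the job names, images, and make targets below are placeholders, not our actual CI configuration), a CircleCI 2.0 config that runs the two suites in parallel and uses the remote Docker engine might look like this:

version: 2
jobs:
  unit-tests:
    docker:
      - image: circleci/golang:1.11      # placeholder build image
    steps:
      - checkout
      - run: make test-unit              # placeholder command
  integration-tests:
    docker:
      - image: circleci/golang:1.11      # placeholder build image
    steps:
      - checkout
      # Enable the remote Docker engine so docker / docker-compose work inside the job
      - setup_remote_docker
      # Spin up infra, services, and mocks defined in docker-compose.yml
      - run: docker-compose up -d
      # Run the integration suite against the running stack
      - run: make test-integration       # placeholder command
workflows:
  version: 2
  build:
    jobs:
      - unit-tests
      - integration-tests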

We quickly discovered that we shouldn’t test too many things in integration tests. As Martin Fowler points out:

“If your only integration tests are broad ones, you should consider exploring the narrow style, as it’s likely to significantly improve your testing speed, ease of use, and resiliency. Since narrow integration tests are limited in scope, they often run very fast, so can run in early stages of a DeploymentPipeline, providing faster feedback should they go red.” — Martin Fowler, IntegrationTest

We picked the “narrow style” and mocked the interfaces rather than spinning up the direct dependencies. This enabled us to move fast with a limited scope at the beginning, and then to expand the scope by replacing mocks with real services.

The mocks (blue area) and the scope of integration tests (white inner circle)

Setting up mocks is the most challenging step. For a complex microservices system with internal and external dependencies, making mocks of a service is tedious and error-prone. One has to understand the implementation details, learn all the interfaces, and write repetitive if/else code to handle different inputs for different channels.

What we want is a tool that painlessly mocks a microservice’s contract across multiple channels. Existing libraries (e.g. MockServer, mountebank, Flashback) focus mainly on HTTP; other channels such as Kafka or AMQP, and behaviors of a microservice such as sending webhooks, are not supported. To ease the effort, we created our own solution: OpenMock.

OpenMock

OpenMock GitHub page: https://github.com/checkr/openmock
  • OpenMock supports multiple channels. For example, a service can interact with other services via HTTP, Kafka, or AMQP.
  • OpenMock has a simple, unified DSL that is straightforward to learn and flexible to extend.
  • OpenMock has a minimal memory footprint (<5 MB, thanks to Go). We can deploy OpenMock as a dedicated service or include it in docker-compose (see the sketch below this list).
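For instance, including OpenMock in docker-compose could look roughly like the snippet below. The image name, port, and templates-directory variable are assumptions on our part, so check the OpenMock README for the exact options:

services:
  openmock:
    image: checkr/openmock                      # assumed image name
    ports:
      - "9999:9999"                             # assumed default HTTP port
    volumes:
      # Mount the YAML mock definitions (the DSL shown below)
      - ./mocks:/data/templates
    environment:
      OPENMOCK_TEMPLATES_DIR: /data/templates   # assumed config variable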

The following are some examples of the DSL. For more details, please visit https://github.com/checkr/openmock

HTTP /ping example
- key: ping
  expect:
    http:
      method: GET
      path: /ping
  actions:
    - reply_http:
        status_code: 200

HTTP headers token example
- key: header-token
  expect:
    condition: '{{.HTTPHeader.Get "X-Token" | eq "1234"}}'
    http:
      method: GET
      path: /token
  actions:
    - sleep:
        duration: 1s
    - reply_http:
        status_code: 200

Kafka to AMQP (e.g. RabbitMQ) example
- key: star_wars
  expect:
    condition: '{{.KafkaPayload | jsonPath "name" | eq "death star"}}'
    kafka:
      topic: star-wars
  actions:
    - publish_amqp:
        exchange: default
        routing_key: k1
        payload: "attack"

Learnings and future work

The integration tests and the OpenMock project are experiments that worked for us. They have been rolled out to the monolith and other critical microservices. At Checkr, we’re able to end-to-end test a set of Checkr APIs, including candidate creation, record matching, state-machine events of a Report, and more. These integration tests run fast, in parallel with unit tests, and they’ve already caught regressions that unit tests missed.

In the future, OpenMock will support more channels (e.g. gRPC), more actions, and more use cases. I’m glad we were empowered to open source it from day one. OpenMock grew together with our integration tests, and its only limit is your imagination. Please join us, try it out, and feel free to open pull requests. Hopefully you’ll consider it as an option when you build your own integration tests.

Thanks to Ziru Zhu for his tremendous effort in making the integration tests a reality, and to the Platform team for all the support.
