Not all code is created equal.

Tests shouldn’t be either.

Etienne Morin
4 min read · Oct 11, 2018

Among software professionals, there is a lot of debate about which testing strategy is best suited to your software.

There are models that represent the mix of test types meant to best cover your software, balancing speed, confidence, and value. For example, Mike Cohn’s test automation pyramid recommends writing many unit tests, fewer service tests, and only a few UI tests. More recently, Kent C. Dodds’ Testing Trophy heavily favors integration tests, supported by unit tests, a small number of end-to-end tests, and a base of static analysis. Each model proposes some mix that is supposed to suit your team.

Aspiring to write tests in proportions that fit these models misses the point. It amounts to applying what worked on an unrelated project to your own, context-specific project. Instead, the tests you write should be based on the type of software and code that you write.

The testing strategy should depend on the type of code under test, nothing else.

Let’s define some types of code. While there are many more, the ones that I would like to highlight are:

  1. Algorithmic code
  2. Integration code

Algorithmic code

Algorithmic code has a well-defined specification that is unlikely to change.

Examples include:

  • Mathematical functions
  • Scientific functions
  • Image processing
  • Signal processing
  • Text processing
  • Libraries in general!

I say algorithmic code is unlikely to change in the sense that, once the underlying model is understood, it correctly describes the phenomenon (it may be in a state of flux until then). For example, once a scientific library implementing the Law of Universal Gravitation has been written, it is unlikely to change, since our understanding of that physics probably won’t.

Unit tests are best suited for algorithmic code.

I’d specifically point out that while business logic involves decision trees and logical flowcharts, it is very likely to change with the ever-shifting business environment and technology ecosystem, so I would not classify it as algorithmic code.

There is no need to mock anything for unit testing as your algorithmic code should have no dependencies, other than primitive types or simple objects specific to your library. Write blazing fast, in-memory unit tests. Use TDD to your heart’s content. Enjoy instantaneous feedback. Unit tests are great, life is good.
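As a minimal sketch of what this looks like in practice (the function and its tests are hypothetical, not from the article), here is a pure, dependency-free function and the kind of in-memory unit test it invites:

```python
# Hypothetical example of algorithmic code: a pure function with a
# well-defined specification and no dependencies beyond primitives.
def moving_average(values, window):
    """Return the simple moving average over a sliding window."""
    if window <= 0 or window > len(values):
        raise ValueError("window must be between 1 and len(values)")
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

# Unit tests need no mocks: inputs go in, outputs come out, instantly.
assert moving_average([1, 2, 3, 4], 2) == [1.5, 2.5, 3.5]
assert moving_average([5], 1) == [5.0]
```

Because nothing here touches a disk, network, or framework, thousands of such tests run in milliseconds.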

If you’re writing unit tests for code that isn’t algorithmic in nature, I’d bet you’ll hurt yourself and your team in the near future. I would question whether such a unit test provides any real business value, or whether it only incurs costs and slows down your velocity.

Integration Code

Integration code is the glue that ties all your software modules and components together. Simplistically, it’s the controller in model-view-controller. By definition, your integration code has dependencies, and its job is to make those dependencies talk to each other. If you’re writing a RESTful API, your endpoint will likely call some data source and possibly another service to obtain the resource to return.
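A rough sketch of such glue code might look like this (the class and dependency names are illustrative, not from the article):

```python
# Hypothetical integration code: a controller that orchestrates a data
# store and a downstream service. There is no real algorithm here --
# the whole job is wiring dependencies together.
class UserController:
    def __init__(self, store, enricher):
        self.store = store        # e.g. a database-backed repository
        self.enricher = enricher  # e.g. a client for another service

    def get_user(self, user_id):
        record = self.store.fetch(user_id)
        if record is None:
            return {"status": 404}
        details = self.enricher.lookup(record["email"])
        return {"status": 200, "user": {**record, **details}}
```

Notice that every line either calls a dependency or routes data between dependencies; that is exactly the shape of code this article classifies as integration code.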

Integration tests are best suited for integration code.

The only way to know whether your integration code really works is to test it with the dependencies actually doing the real thing.

Robert C. Martin argues that anything that isn’t fast should be discarded, and since integration tests are slow, we should look for quicker alternatives, such as unit tests with stubs, mocks, and fakes to remove the dependencies. I couldn’t disagree more.

The solution for slow integration tests is to make them run faster, not to replace them with an inadequate alternative.

Run the integration tests in parallel, swap your on-disk database for an in-memory database, trim redundant steps within your test suite, remove duplicate test cases, and don’t cover all possible permutations but do cover reasonable use cases. There’s an initial setup cost for integration tests, but this is the only type of testing that will provide value for your integration code.
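As one hedged illustration of the in-memory-database trick (the schema and functions are mine, not the article’s), SQLite’s `:memory:` mode lets a test exercise the real SQL path against a much faster backend:

```python
# Sketch: integration-test data-access code against SQLite's in-memory
# mode instead of an on-disk database. The real queries still run.
import sqlite3

def save_order(conn, order_id, total):
    conn.execute("INSERT INTO orders (id, total) VALUES (?, ?)",
                 (order_id, total))

def load_order(conn, order_id):
    row = conn.execute("SELECT id, total FROM orders WHERE id = ?",
                       (order_id,)).fetchone()
    return {"id": row[0], "total": row[1]} if row else None

# The test: a real database engine, just a faster storage backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
save_order(conn, 42, 99.5)
assert load_order(conn, 42) == {"id": 42, "total": 99.5}
assert load_order(conn, 7) is None
```

Unlike a mock, this still verifies that the SQL itself is correct; only the I/O cost has been removed.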

Why test anyway?

We write software tests to ensure that our code will continue to produce the same results while other parts of the system are being modified.

Software tests are written to prevent change.

Let that sink in a minute.

Given that, it is very important to be judicious in the type of testing that you choose for testing your code. If your team velocity is slowing down as the number of tests is increasing, the pain might be self-inflicted.

A note on stubs, mocks and fakes

I’ve been wary of stubs, mocks, and fakes ever since I learned about them. Sure, I jumped on the bandwagon and gave them a whirl, but ultimately a nagging feeling led to this insight:

The problem with testing using stubs, mocks and fakes is that the code under test has been incorrectly categorized as algorithmic code instead of integration code.

You are reaching for the wrong tool in the toolkit — you grabbed the unit test tool instead of grabbing the integration test tool!

Maybe there is some logic or an algorithm present in that code (it’s likely business logic!), but it’s cluttered with calls to dependencies. If you prioritize quick test execution, you would test this code with unit tests using stubs, mocks, and fakes. My approach would be to use integration tests, since the code is integrating components.

Categorizing code is part of software architecture

Identifying what type of code you’re dealing with not only tells you what type of tests you should write, but it also tells you where it should live within your code base. Naturally, once code has been identified as algorithmic, it should be given a home with other related code, not buried deep inside the integration code.

If the code is properly categorized and segregated, then choosing the right testing strategy for it becomes self-evident.
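To make the segregation concrete, here is a hypothetical refactoring sketch (names and the tax rate are invented for illustration): the pricing calculation is pulled out of the glue code into a pure function, which can then live alongside other algorithmic code and be unit-tested directly.

```python
# Before (illustrative): an algorithmic calculation buried in glue code.
# def checkout(cart_repo, cart_id):
#     items = cart_repo.fetch(cart_id)
#     total = sum(i["price"] * i["qty"] for i in items) * 1.15  # tax baked in
#     ...

# After: the algorithmic part extracted into a pure function with its
# own home in the code base -- no dependencies, trivially unit-tested.
def total_with_tax(items, tax_rate=0.15):
    """Pure pricing calculation over simple dicts."""
    subtotal = sum(i["price"] * i["qty"] for i in items)
    return round(subtotal * (1 + tax_rate), 2)

assert total_with_tax([{"price": 10.0, "qty": 2}]) == 23.0
```

The remaining `checkout` glue stays integration-tested, while the extracted calculation gets fast unit tests; the categorization made the testing strategy self-evident.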
