Lessons I’ve learned while writing and maintaining tests over the years

Practical advice on writing tests that are useful, going beyond correctness.

Alessandro Ferlin
THRON tech blog
4 min read · Sep 10, 2019


During my career as a developer I’ve gone through several phases with tests. I started by writing tests against the implementation, and as a first shot it was enough. At some point I found myself writing more libraries than stand-alone projects, and these libraries usually had several implementations of the same trait; within a couple of weeks the number of tests was growing quickly. That’s where I changed my mind and started writing tests against the trait instead of the implementation. It was a game-changer for me: I was unconsciously switching from behavior verification to state verification [1], in other words switching from mocking to state.

State verification checks that the state after execution is the expected one.

Behavior verification checks that the implementation behaves in a specific way.
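
To make the distinction concrete, here is a minimal sketch, assuming ScalaTest and Mockito (the Greeter/Mailer names are made up for illustration, not code from our library): the first test verifies behavior, checking how the collaborator was invoked; the second verifies state, checking what actually ended up in an outbox.

```scala
import org.mockito.Mockito.{mock, verify}
import org.scalatest.funsuite.AnyFunSuite
import scala.collection.mutable

trait Mailer { def send(to: String, body: String): Unit }

class Greeter(mailer: Mailer) {
  def greet(to: String): Unit = mailer.send(to, s"Hello $to")
}

class GreeterSpec extends AnyFunSuite {

  // Behavior verification: asserts HOW Greeter works (which method of the
  // collaborator was called, and with which arguments). A refactor that
  // keeps the outcome but changes the interaction breaks this test.
  test("behavior: greet calls Mailer.send") {
    val mailer = mock(classOf[Mailer])
    new Greeter(mailer).greet("ada")
    verify(mailer).send("ada", "Hello ada")
  }

  // State verification: asserts WHAT the outcome is, using a trivial
  // in-memory stub. It survives any refactor that preserves the result.
  test("state: the greeting ends up in the outbox") {
    val outbox = mutable.Buffer.empty[(String, String)]
    val mailer = new Mailer {
      def send(to: String, body: String): Unit = outbox += (to -> body)
    }
    new Greeter(mailer).greet("ada")
    assert(outbox.contains("ada" -> "Hello ada"))
  }
}
```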

This mindset change was really important to me: I saw first-hand how much cooler and “friendlier” a test suite is when it can withstand implementation changes and refactoring. After a couple of projects I found other characteristics a test suite should have, so I’ve written them down.

IMHO the following points must all be covered; your test suite must be:

  • blazing fast to run: all tests should complete in a few seconds
  • runnable in simple steps: just one command :)
  • resilient to refactors and evolutions: a pleasure to work with :)
  • helpful in figuring out why something went wrong: it has to become your friend, not your enemy

I’ll share my experience of how we work to achieve these objectives.

Fast

As a first step, remove anything that is not essential to the test itself: those things steal time and do not improve the testing results. For example, we dropped all the steps that remove the data created by the tests themselves; instead, we use ephemeral resources (Docker will help you do this).

Still not enough? Run all tests in parallel! This is possible only if each test is isolated from the others, and improving test isolation is a high-reward task. For example, to test a CRUD on MongoDB we generate a random collection for each test, so every test has its own collection, as shown in the sketch below.
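
Here is a minimal sketch of the idea, assuming ScalaTest and the mongo-scala-driver (the names and connection string are illustrative, not our actual code):

```scala
import java.util.UUID
import org.mongodb.scala.{Document, MongoClient}
import org.scalatest.ParallelTestExecution
import org.scalatest.funsuite.AnyFunSuite
import scala.concurrent.Await
import scala.concurrent.duration._

class UserCrudSpec extends AnyFunSuite with ParallelTestExecution {

  private val client = MongoClient("mongodb://localhost:27017")
  private val db     = client.getDatabase("testsuite")

  // Every test works on its own randomly named collection: no shared
  // state, no cleanup step, and tests can safely run in parallel.
  private def freshCollection() =
    db.getCollection(s"users_${UUID.randomUUID().toString.replace("-", "")}")

  test("insert then find returns the stored document") {
    val users = freshCollection()
    Await.result(users.insertOne(Document("_id" -> "1", "name" -> "Ada")).toFuture(), 5.seconds)

    val found = Await.result(users.find(Document("_id" -> "1")).first().toFuture(), 5.seconds)
    assert(found("name").asString().getValue == "Ada")
  }
}
```

Since no test ever touches another test’s collection, ScalaTest’s ParallelTestExecution can run everything concurrently, and the whole database stays ephemeral.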

Super easy to run

If tests are easy to run, people are more willing to run them and to write them. Ensure your tests are as easy to run as possible and that they are launched automatically, or at least with the least effort possible: one command is viable, two is already too much. This can be achieved in many ways; the most important are:

  • your tests must have few dependencies to run
  • your tests must run automatically (include them in your CI)
  • provide a script that does everything needed to run the test suite (reducing the invocation to a single command)

We are all lazy, and developers are especially allergic (rightfully so!) to performing repetitive tasks or, worse, having to manually launch things that could be automated.

Survive refactors or evolutions

Maintainability is important: to keep it simple, we have to write tests that don’t need to change (or change just a little) when we refactor or swap an implementation.

Implementations change more frequently than interfaces, so writing implementation-agnostic tests helps us meet this requirement; in other words, test the state, not the implementation’s behavior. This way we can use the same tests to verify different implementations, and a new implementation will reuse the same tests that covered the old one: write once and test them all (more or less)!

Let’s see an example. Almost every project has a layer that interacts with a database, so we have written a common library that makes it easy to write a simple CRUD for a specific database. In this library we have a common trait, and we have written the tests against this trait instead of a test for each implementation:
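
The original post embeds the actual code at this point; what follows is a minimal sketch of the idea (the Crud trait and its signatures are illustrative, not our real library):

```scala
import org.scalatest.funsuite.AnyFunSuite

// The common trait every database-specific implementation provides.
trait Crud[A] {
  def create(id: String, value: A): Unit
  def read(id: String): Option[A]
  def delete(id: String): Unit
}

// The tests live here, written once against the trait. They only verify
// state through the Crud API, so any implementation can reuse them.
abstract class BaseCrudSpec extends AnyFunSuite {

  // Each concrete spec supplies a fresh, isolated instance
  // (e.g. a random MongoDB collection or DynamoDB table).
  def newCrud(): Crud[String]

  test("read returns what create stored") {
    val crud = newCrud()
    crud.create("1", "hello")
    assert(crud.read("1").contains("hello"))
  }

  test("delete removes the value") {
    val crud = newCrud()
    crud.create("1", "hello")
    crud.delete("1")
    assert(crud.read("1").isEmpty)
  }
}
```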

Each implementation extends this BaseCrudSpec to get its tests for free; e.g. the DynamoDB implementation doesn’t have tests of its own, it just extends BaseCrudSpec and provides everything needed to run them.
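
Sticking to the sketch above, a concrete spec is just wiring; here a trivial in-memory map stands in for a real DynamoDB- or MongoDB-backed implementation:

```scala
import scala.collection.concurrent.TrieMap

// Stand-in backend; in the real library this would be the DynamoDB-backed
// (or MongoDB-backed) implementation of the Crud trait.
class InMemoryCrud[A] extends Crud[A] {
  private val store = TrieMap.empty[String, A]
  def create(id: String, value: A): Unit = store.put(id, value)
  def read(id: String): Option[A]        = store.get(id)
  def delete(id: String): Unit           = store.remove(id)
}

// No tests of its own: it inherits the whole suite from BaseCrudSpec.
class InMemoryCrudSpec extends BaseCrudSpec {
  override def newCrud(): Crud[String] = new InMemoryCrud[String]
}
```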

Of course, sometimes a test tied to a specific implementation is needed, but such tests will be few, so they should be treated as the exception.

Your friend, not your enemy

If you refactor something or evolve your project, you must be able to rely on your tests (the new ones and the old ones), and when they fail the cause must be super clear. This is really difficult to achieve because it relies on the skills and experience of whoever writes the tests.

A good and simple way to check whether a test is a good test is to ask yourself the following questions:

  • Can your test fail for just one reason?
  • Does your test verify a single feature/behavior?
  • Is your test independent of the others?
  • Can you keep the test as it is if someone changes the implementation?
  • Have you checked that your test fails when you expect it to?

If you can answer “yes” to at least three of them, your test will most likely be a “good boy”.

Conclusions

This summary was born from practical experience at my current company; it might not work for everyone, but it surely belongs to “the things I wish someone had told me when I started writing tests”.

Is there any other idea or practice that you feel I should have put into this brief list? Please let us know in the comments below.

[1] Martin Fowler, “Mocks Aren’t Stubs”: https://martinfowler.com/articles/mocksArentStubs.html
