How to stop hating your test suite

Camila Campos
Jun 26, 2019

This article is an English translation. The original version can be read here:

Do we hate our tests?

“Hate” is a really strong word. With a title like this, there are two big assumptions I’m making:

  1. We write automated tests (and therefore know about some of their advantages);
  2. We hate our tests (every so often, even if only some of them)

This second assumption is QUITE bold, isn’t it? My first reaction when this supposed “hatred” came to mind was something along the lines of “NAHHH that’s not it”. Yet even the most dedicated test enthusiasts I know have, at some point, felt some very strong negative emotions towards at least one test. It may be the case that the test wasn’t even written by said person, and they were simply looking over it to understand its workings — regardless, they’ve come to feel some hate (at some moment) towards a test (or tests).

If you can’t relate to this feeling, imagine the following:

  • You look at a test and understand absolutely nothing that’s going on in it;
  • You notice that a test is completely failing to test what it’s meant to test; or you’re absolutely certain a test is going to fail, but instead it passes;
  • You open a test and your only desire is to delete the whole thing.

These and so many other moments manifest the hatred we have for testing. Believe me, if you haven’t felt this way yet, you will!

Since this hatred does exist, since it’s real, since we all hate our tests at some point, why do we keep writing them? Why don’t we stop, even though we know we’re not going to like that test (or tests) at some point?

First of all, you have to understand why we write tests

We end up disliking our tests, so why do we write them? The answer is simple: we like it! Throughout our careers as developers, we learn that it’s cool to write tests and that writing tests is part of being a good dev (and that great devs write them before they write “the real” code). We’ve seen firsthand how writing tests improves our code’s design and quality, while also reducing the occurrence of bugs in our application. Consequently, we write them. After all, we want to be on the right side of history.

But it’s more than that, isn’t it?! We don’t write tests just because we like to, or because we’re told it’s the right way to do things. We write tests because they ensure two important things in our applications:

  1. Confidence in the software we are delivering. Tests make sure what’s being tested actually works, and that it will continue to work when we add new functionality. They also ensure someone can change our code without breaking the whole thing, as long as the initial behavior doesn’t change (aka refactoring).
  2. An understanding of what we are writing. Testing also helps us ensure that our code is easy to use and modify (regardless of behavioral changes), thus guiding the design of our code.

So why do we hate our tests?

So we understand that there are some pretty valuable advantages to writing tests for our applications. However, if these advantages are really as dope as I’ve claimed, why do we end up disliking some (many) of our tests?

There are, among others, three reasons that all this hostility is directed towards tests:

  1. Our tests are very slow; they take so long to run that we get too lazy to even run them;
  2. Our tests are very complicated to create or maintain, and therefore we would rather ignore them;
  3. Our tests are unnecessary; we might not know exactly what they’re testing, or why X is being tested (since it might have already been covered by another test).

From now on, we will explore each of the reasons listed above in order to better understand why they become a problem and what we can do to avoid them.

Avoiding and preventing super slow tests

When we think about how long it will take to run our tests, we expect that the more tests we have, the longer they will take to run. However, we hope that this is a linear relationship (as shown below).

Our expectation is in green (displaying a linear relationship between the time it takes to run a test); the reality is in red (displaying an exponential relationship between the two).

In fact, what actually happens is exponential growth (the red line). The more tests we run, the longer it takes to run each one.

For example, if we have an application with 10 tests that will each run in 10 seconds, we hope that they will run in 100 seconds; however, what really happens is that they take 1000 seconds to run (ok, I might be exaggerating a little, but I’m just invoking my poetic license).

The test pyramid

Many of these lengthy-test problems arise from a (mis)understanding of the test pyramid.

The pyramid’s basic function is to delineate the different levels of testing and the number of tests there should be at each one of these levels.

You can read more about it in another post I wrote.

At the pyramid’s base, we have tests that are (theoretically) super simple and quick: unit tests. They represent a large portion of the tests we run. In the middle, we have integration tests, which are more complex and time-consuming than unit tests. These generally test an endpoint or a specific feature that is made up of several units, but they don’t test the workings of the software as a whole. At the pyramid’s top, we have end-to-end tests. Since they cover an entire application, these are much more complex and time-consuming; hence, we run these much less.
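As a sketch of what the pyramid’s base looks like in practice, here is a minimal unit test in Ruby using the standard library’s Minitest (the `Loan` class and its interest rule are hypothetical, invented for illustration):

```ruby
require "minitest/autorun"

# A tiny domain object to exercise at the base of the pyramid.
# (Loan is a hypothetical example, not from the article.)
Loan = Struct.new(:principal, :rate) do
  def interest
    principal * rate
  end
end

class LoanTest < Minitest::Test
  # Unit test: exercises a single object in isolation --
  # no database, no collaborators -- so it runs in microseconds.
  def test_interest_is_principal_times_rate
    loan = Loan.new(1000, 0.05)
    assert_in_delta 50.0, loan.interest
  end
end
```

An integration test would wire several such objects together (and perhaps a test database); an end-to-end test would drive the whole application through one of its entry points.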

Dope, so where’s the problem?

A more realistic view of the test pyramid, lacking distinguishing borders between its levels, with tests that occupy multiple layers

The problem is that we often have very little understanding of what each of these levels is or how they behave. As a result, our test pyramid becomes messy, with no clear emphases or differences between the types of tests, or tests that might belong to multiple levels. This results in a bunch of tests that need to be run together every time something is slightly modified, bringing about tests that take much longer than they could.

My advice for addressing this problem is to always think about your unit tests first. They’re smaller and simpler and should compose your base. Then, think about your application’s most critical flows — at most, think of 20 examples (total) — and write end-to-end tests for them. Finally, write integration tests for all the other features of your application (it’s fine if some of them overlap with flows already covered by the end-to-end tests).

Another piece of advice: write tests for each level in separate folders, and run them only as needed. For example, whenever you change a class, run a unit test on it. When you have finished a series of modifications, run all your integration tests (especially the one for the feature you just modified). Once you’ve made sure the code is OK and ready to be production code, run all your end-to-end tests. With this, you optimize your time and run the more time-consuming tests less often.
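One way to wire up this per-folder workflow in a Ruby project is with Rake tasks, one per level. This is just a sketch, assuming an RSpec setup with `spec/unit`, `spec/integration`, and `spec/e2e` folders (all hypothetical names; adjust to your project’s layout):

```ruby
# Rakefile -- hypothetical layout; adjust folder names to your project.
task :unit do
  sh "rspec spec/unit"          # fast feedback: run on every change
end

task :integration do
  sh "rspec spec/integration"   # run after a series of modifications
end

task :e2e do
  sh "rspec spec/e2e"           # slow: run once the code is ready for production
end

# `rake` alone runs only the cheap level by default.
task default: :unit
```

Keeping the expensive levels out of the default task is what makes the optimization stick: nobody has to remember *not* to run the slow tests.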

It’s also very important to define the rules concerning how your application’s tests will handle any kind of dependencies. How should tests access other classes? What about external dependencies, like other services? Should your tests access the database or not?

At Creditas, the rules are defined in the following way:

  • Unit tests access absolutely nothing outside of what is being tested — no databases, no external dependencies, no additional classes/objects. We use dependency injection to do this.
  • Integration tests access a database that is created on the spot for the tests and deleted as soon as the tests end. They also access other collaborating classes, but not any external dependencies.

  • End-to-end tests access everything they’re entitled to — databases, external dependencies, classes/objects used during the test’s flow. Their only restriction is that we have to try to use staging environments or environments approved by these external services (if they don’t exist, we create our own in order to avoid accessing these external dependencies “for real”).
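As a sketch of the unit-test rule above — access nothing outside what’s being tested — here is dependency injection in Ruby: the collaborator is passed in through the constructor, so the test can swap in a stub instead of a real external service (`LoanApprover`, `FakeScoreClient`, and the score threshold are all hypothetical names, invented for illustration):

```ruby
require "minitest/autorun"

# The service receives its collaborator via the constructor (dependency
# injection) instead of building it internally, so tests can replace it.
class LoanApprover
  def initialize(score_client:)
    @score_client = score_client
  end

  def approve?(customer_id)
    @score_client.score_for(customer_id) >= 700
  end
end

# A stub standing in for the real credit-score API client.
# It answers with a canned score and never touches the network.
class FakeScoreClient
  def initialize(score)
    @score = score
  end

  def score_for(_customer_id)
    @score
  end
end

class LoanApproverTest < Minitest::Test
  # Unit tests: no database, no external dependency, only the
  # object under test plus an injected stub.
  def test_approves_high_scores
    approver = LoanApprover.new(score_client: FakeScoreClient.new(720))
    assert approver.approve?(42)
  end

  def test_rejects_low_scores
    approver = LoanApprover.new(score_client: FakeScoreClient.new(550))
    refute approver.approve?(42)
  end
end
```

In an integration test the same `LoanApprover` could be wired to its real collaborators (minus external services); only the end-to-end tests would talk to a staging version of the score API.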

A table which represents the rules and norms at Creditas regarding each level of testing

Concerning tests that are overcomplicated or unnecessary

So that this article doesn’t become giant, I’ve decided to break it up into a series of articles. The first (this one right here) introduces the reasons that tests might sadden us and goes into detail about the ways to speed up your tests if they’re very time-consuming.

To learn more about overcomplicated and unnecessary tests, check out the next chapter of this article ❤ Follow me on Twitter, I’ll post updates there!

This series of posts is a compilation of the information shared at RubyConfBR 2017, TDC São Paulo and Florianópolis 2018, and The Conf 2018.

Want to use technology to bring innovation to the loan market? We’re always looking for people to join our Crew!

Check out our openings here.

