Three practical hints that make your Unit Tests better today
Writing unit tests is not an easy task. It is like writing code, but code with a very specific purpose that you don’t write often. However, with these hints, it will be easier for you to write maintainable and reliable unit tests.
Table of contents
- Intro.
- Hint 1. Create spies and stubs in beforeEach, restore them in afterEach.
- Hint 2. Make tests independent.
- Hint 3. Use various expectations.
- Summary.
Intro
We all know that we need to write tests. Perhaps we even have some experience in writing them. However, writing unit tests is an art: it requires certain tricks, and there is no limit to perfection. So here I will share three hints that will make your tests more reliable and maintainable.
I will be giving examples based on the following tools: Karma, mocha, and chai. Karma is a test runner that executes tests in a browser. Mocha is a test framework that provides the primitives to run tests: describe, it, and hooks like before or beforeEach. Chai is an assertion library. However, all the hints are equally applicable to other test setups like Jest or Web Test Runner.
Hint 1: Create spies and stubs in beforeEach, restore them in afterEach
Let's take, for example, the following feature. It makes an HTTP request in the _request method, which we don’t want in tests, so we stub it. A stub allows us to define the behavior of a function for our test. In our example, the stubbed function will resolve the returned Promise right away without making an HTTP request. Then, we check the methods requestGet and requestPost to validate that they eventually call _request.
So far, so good:
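The original snippet is not reproduced here, but a minimal, dependency-free sketch of such a feature and test can illustrate the idea. The method names (_request, requestGet, requestPost) come from the text; the bodies, the URL, and the hand-rolled stubMethod helper (standing in for sinon.stub) are assumptions.

```javascript
// Hypothetical reconstruction of the feature under test; bodies are assumptions.
class TestingFeature {
  _request(method, url) {
    // Imagine a real HTTP call here; a test must never reach it.
    throw new Error('unexpected network call in a test');
  }
  requestGet(url) { return this._request('get', url); }
  requestPost(url) { return this._request('post', url); }
}

// A minimal hand-rolled stub standing in for sinon.stub():
// it replaces a method, records the calls, and can restore the original.
function stubMethod(obj, name, fake) {
  const original = obj[name];
  const calls = [];
  obj[name] = (...args) => { calls.push(args); return fake(...args); };
  return { calls, restore() { obj[name] = original; } };
}

const feature = new TestingFeature();
const stub = stubMethod(feature, '_request', () => Promise.resolve('stubbed'));

// The "test": requestGet must eventually call _request with 'get'.
feature.requestGet('/users');
const firstCallArgs = stub.calls[0];
stub.restore();
```

The stubbed _request resolves immediately, so no HTTP request is made, and the recorded call lets us check which arguments requestGet passed down.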
However, a year later, we refactor this code and accidentally miss an argument.
And the result of it:
Please note that both tests fail. The problem is the execution flow: when an assertion/expectation fails, the test execution stops. Therefore, stub.restore does not run, which leaves the stub in place. When we try to stub the very same method in the second test, sinon tells us that we are doing something unexpected (line 26 in the result).
So the first hint is to always use beforeEach and afterEach for creating and restoring the spies, stubs, and others.
And here is the result of this:
Hint 2. Make tests independent
In a way, this point is similar to the previous one. In the last hint, the dependency was introduced via the shared state of the stub. However, generally speaking, a dependency between tests can also be direct, via the shared state of the component under test.
Let's look at the following example. The difference from the previous example is that our TestingFeature now has a caching mechanism. The test will check this mechanism as well.
Please note the following steps:
- The caching is controlled by the useCache flag.
- This flag is set only in the first test, because the testingFeature is shared between the tests.
- We test the caching mechanism by calling requestGet or requestPost two times.
And yes, so far, so good:
As always: another year, another refactoring, another bug, this time a missing "post" string in the feature.
The result of this is expected:
However, now the story goes like this:
- To debug this, we focus on this one test by adding .only to it. The error remains.
- After some debugging, we find the missing argument and add it.
- And now things get weird. We have another error:

All of a sudden, perfectly working code fails. Because we focused on a specific test with .only, the pre-condition of the state is not fulfilled, since it is set in the other test.
So the second hint is to fully prepare the state of the component for each test, and to verify it by adding .only to the test while developing it. One of the easiest ways is to follow the previous hint and put all the preparation code in beforeEach. Do not hesitate to create extra describe sections so that each has its own beforeEach and afterEach.
And the result of it:
Hint 3. Use various expectations
All the assertion libraries have plenty of different assert and expect helpers. They help a lot to understand exactly what is expected and how a test is broken. As always, we will go from the opposite direction and first look at a "not recommended" test.
The result of this test is beautiful:
However, let's break the expectations to see what happens:
Please note the AssertionError messages. Most of them say that one boolean is not equal to another boolean. But is this really what we are trying to check? Chai provides a complete list of expectations. Let's check whether we can find something useful there.
The result of it is way clearer:
The explanations of the errors are now in line with what actually happened. This saves a lot of time for future developers: if they change the code and see failing tests, they will know better how to fix them.
Overall, writing tests is a craft of its own. Practice makes perfect: tests can contain bugs themselves, and writing them well can be challenging.
However, with the three hints above, your tests will be better today, and future you and your fellow developers will thank you for that.
- Photo by Ferenc Almasi on Unsplash
- Careful review by Maarten Stolte, Remco Gubbels and Gabi Wesselman