The missing testing abstraction

Bogdan Zaharia
Hootsuite Engineering
7 min read · Aug 26, 2021

I’m sure we can all agree that writing good tests is not always easy. There are many reasons for that, some of them related to the complexity of the system itself, some of them related to the tools and patterns we use. And, in my opinion, there is a very important aspect that could make writing tests a little easier. An aspect that is often overlooked - the level of abstraction of our tests.

In this post, I’ll try to open a new perspective on this aspect. I’ll be using examples from the web frontend stack, but the concepts are transferable.

Some background

First, I want to share a story from my team’s recent experience, related to testing abstractions. Hootsuite Analytics, the app that we’re developing, uses React for the view layer, and it has been doing so since it was first launched 6 years ago.

At that time, Enzyme was the norm for testing React applications, so that’s what we used. At first, we mostly used shallow rendering, testing components individually. This approach had its merits, since tests were easy to write. But it also had serious drawbacks: the tests were brittle and discouraged refactorings, such as changing the component hierarchy or the state management solution.

In the meantime, React Testing Library (RTL) came along, with a radically different approach: render the whole component tree, and write tests as a real user would interact with your application. That was excellently summarized by Kent C. Dodds:

The more your tests resemble the way your software is used, the more confidence they can give you.

This sounded like it would solve the problems we had - we could write tests once and change them only when requirements changed, not when refactoring.

I was very excited about this new library, and I mentioned it to Ovidiu, my manager at that time (in 2019). And it sounded good to him too, since he was aware of the issues we had with our existing test suite.

But he also advised that we should not jump straight into “yet another migration”. We had already been through plenty of those, such as converting from Mocha to Jest, or migrating to TypeScript. He said something along these lines:

We should hide the testing library implementation behind an abstraction.

The parallel abstraction

This resonated with me, so I decided to try building our own “Testing Library”. I made a POC for it fairly quickly, presented it to the team, and then started the adoption. The idea was to still use Enzyme, but behind a facade. And, at some point, we were going to swap the Enzyme implementation with RTL (or whatever library we liked at that point). That was supposed to help us migrate from Enzyme, and have a testing library with the API that best suited our needs.

Since I liked RTL’s approach so much, the new API naturally ended up along the same lines. In fact, it became almost a 1-to-1 mapping:

const scene = renderScene(<ReportMenuCell />);
// render(<ReportMenuCell />);
scene.click(byTestId('my-button'));
// fireEvent.click(getByTestId('my-button'));
scene.assertMany(byRole('listitem'));
// getAllByRole('listitem');

In retrospect, this was the wrong kind of abstraction - a “parallel” abstraction. Too rigid, too coupled to the implementation. And, it turned out to make migrating to RTL harder. That’s because the initial API was designed two years ago, and RTL’s API has evolved since then, in ways that are not fully compatible with our API.

Searching for the better abstraction

Ok, let’s take a step back and figure out what the issue is with the way we write tests in general.

Let’s take a simple example: testing the classic Counter component. An Enzyme test could look like this:

it('can increment the counter', () => {
  const wrapper = mount(<Counter />);
  wrapper.find('.increment').simulate('click');
  expect(wrapper.find('.value').text()).toBe('1');
});

Looking at this code, we might sense some code smells, but not be able to put our finger on them right away.

But if we think from a Clean Code perspective, we might remember what the S from SOLID stands for: The single-responsibility principle. It mainly applies to OOP, but in a general form, it was stated by Uncle Bob like this:

Each software module should have one and only one reason to change

Coming back to our example, how many reasons to change are there for the test above? You can take a moment to think about it.


So, how many did you find? If you found at least two, good job! I would argue that there are actually three reasons to change:

  1. App specifications - the most obvious one. For example, the new requirement could be: “Pressing the button should increment by 10”.
  2. Component details, such as CSS class names (e.g. the button gets an .inc class instead of .increment).
  3. Enzyme API. What if instead of wrapper.simulate('click') the API changes to wrapper.simulateClick()?

Looking at the reasons above, we realise that only the first one is legitimate. We can get rid of the second reason by using RTL or a similar API:

it('can increment the counter', () => {
  const { getByText, getByTestId } = render(<Counter />);
  fireEvent.click(getByText('+'));
  expect(getByTestId('value')).toHaveTextContent('1');
});

But the code above still needs to change if the RTL API changes (e.g. to a hypothetical fireEvent.clickByText('+')).

The implicit abstraction

The main issue, then, is that we’re using an implicit abstraction. What I mean by that is: on one hand we have the app specs, and on the other hand we write low-level code (Enzyme/RTL) to implement those specs, keeping a mental mapping between the two. So we do have an abstraction, but it’s implicit - it exists only in our minds.

The implicit abstraction

We can try to make it explicit by adding comments:

// Increment counter
wrapper.find('.increment').simulate('click');

But why do that, when we can do even better: actually implement the implicit abstraction. And we can do it in terms of the test intent, not the underlying implementation - which is what the parallel abstraction (the “scene” API) was coupled to. That library tried to fill the gap, but only partially; the mental mapping was still there:

The parallel abstraction

The test actor pattern

Since we want our tests to mimic real user interactions, we can write them with this purpose in mind. We can have a sort of test “actor” as a substitute for the real user.

This is not a new concept - it already exists in the form of the Screenplay pattern. However, that pattern is currently applied mainly to high-level acceptance tests, even though it looks general enough to be used in tests at all levels. The inspiration for this post came from a great talk exploring this model and its benefits.

Coming back to our example, we can introduce a counterActor:

it('can increment the counter', () => {
  const actor = initCounterActor();
  actor.increment();
  expect(actor.getCounter()).toBe(1);
});

We’re basically writing our tests in a BDD style, while making the abstraction explicit. The test actually reads like a User Story, and it could even be understood by a non-technical person.

So, we finally reached the goal of having only one reason to change in our test, and we now follow the single-responsibility principle.

This actor can have various implementations - Enzyme, RTL, etc. And we can easily swap them since the API remains the same:

Enzyme Actor
RTL Actor
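
The two implementations above appear as screenshots in the original post. Here is a minimal sketch of what they might look like in TypeScript, assuming the Counter markup from the earlier examples (an increment button labelled '+' with an .increment class, and a value element exposed both via a .value class and a value test id); the CounterActor interface and the import path are assumptions, not the original code.

import React from 'react';
import { mount } from 'enzyme';
import { render, screen, fireEvent } from '@testing-library/react';
import { Counter } from './Counter'; // hypothetical path to the component under test

// The contract every actor implementation has to fulfil (assumed from the test above).
interface CounterActor {
  increment(): void;
  getCounter(): number;
}

// Enzyme-backed actor: drives the component through the wrapper API and CSS selectors.
export function initEnzymeCounterActor(): CounterActor {
  const wrapper = mount(<Counter />);
  return {
    increment: () => { wrapper.find('.increment').simulate('click'); },
    getCounter: () => Number(wrapper.find('.value').text()),
  };
}

// RTL-backed actor: drives the component the way a user would, via visible text and test ids.
export function initRtlCounterActor(): CounterActor {
  render(<Counter />);
  return {
    increment: () => { fireEvent.click(screen.getByText('+')); },
    getCounter: () => Number(screen.getByTestId('value').textContent),
  };
}

The test from the previous section then simply picks one of these factories as its initCounterActor; nothing else in it has to change.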

Bonus

This model unlocks some new possibilities as well. Because it is implementation-agnostic, we can implement actors in any way we want, and at any level of our application. For example, we might be using Redux for state management and Puppeteer for acceptance tests. We can then run the same tests using different actors:

Same test with different actors
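
The figure referenced above is an image in the original post. A rough sketch of the idea, reusing the CounterActor contract and the actor factories from the previous snippet (a Redux- or Puppeteer-backed actor is only hinted at in a comment), could look like this:

// A shared spec that depends only on the actor contract, not on any particular library.
function counterSpecs(name: string, initActor: () => CounterActor) {
  describe(`Counter (${name} actor)`, () => {
    it('can increment the counter', () => {
      const actor = initActor();
      actor.increment();
      expect(actor.getCounter()).toBe(1);
    });
  });
}

// The same behaviour, verified at different levels of the stack.
counterSpecs('Enzyme', initEnzymeCounterActor);
counterSpecs('RTL', initRtlCounterActor);
// A Redux-level or Puppeteer-backed actor would plug in the same way;
// the latter would need an async variant of the CounterActor contract.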

This idea is actually the main selling point of the talk I mentioned earlier, with its somewhat misleading title, “sub-second acceptance tests”: running user acceptance tests at the lowest level, where they are very fast.

Conclusion

While this new approach presents obvious benefits, there are some trade-offs as well, of course. Every abstraction introduces a layer of indirection, which can make the code slightly harder to navigate. A mindset change is also required: tests need to be written and named in a more user-centric way. And developers need to be more disciplined when writing tests, separating what belongs in the test from what goes into the actor.

Having said that, the benefits you get from using this pattern may well justify the challenges of adopting it. We’ve tried it in a few real-life tests, and I must say it felt somewhat liberating. It even made testing kind of fun.
