End-to-end testing on a React-Redux app

I struggled for some time with the end-to-end testing setup of my React-Redux app. I came up with some ideas and a workflow that I would like to share. Here is how it works.

As developers we have only one way to be certain that our code works: we test it. Since it’s very likely that we will make mistakes, testing helps ensure that we do not end up pushing a broken application into the hands of our users.

Thousands of blog posts have been written about unit testing React components, Redux reducers, action creators or sagas. But when I started researching automated end-to-end testing I couldn’t find a solution adopted by most of the React community. Not that one was needed: after all, there are plenty of tools and techniques generic enough to be used across languages and frameworks.

In this post I am going to explain the workflow I adopted for end-to-end tests on my React-Redux app. Likewise, it is not actually specific to React-Redux SPAs, but it works well with them.

Before explaining it, let’s take a step back and talk a little about end-to-end testing.

End-to-end testing is a methodology used to test whether the flow of an application is performing as designed from start to finish. The purpose of carrying out end-to-end tests is to identify system dependencies and to ensure that the right information is passed between various components and systems.

The idea of end-to-end tests is to drive the user interface of an application through an automated system that acts as the user, to verify that it returns the expected results.
In my opinion their value is high, considering that they simulate real user interactions, but unfortunately end-to-end tests in general come with quite a few problems:

  1. They are usually really slow: loading the full UI and navigating as the user is never as fast as unit testing functions in isolation.
  2. They are directly coupled to the user interface itself, which means they are more likely to break when the UI changes. Some techniques reduce this coupling (for example page objects, sketched right after this list), at the cost of making the tests more complicated.
  3. They are usually expensive to write, unless you record them with some tool. My experience tells me that recorded tests are a pain to maintain and perform poorly.
  4. Historically they suffer from non-determinism, due to timeouts, slow responses or other causes, even though tools are now getting better at handling these kinds of issues.
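
To give an idea of what a page object looks like in practice, here is a minimal sketch in Cypress; the loginPage object, the route and the data-test selectors are hypothetical, not taken from my application.

// loginPage.js — a hypothetical page object that hides raw selectors from the tests
export const loginPage = {
  visit() {
    cy.visit('/login');
  },
  logIn(email, password) {
    cy.get('[data-test=email]').type(email);
    cy.get('[data-test=password]').type(password);
    cy.get('[data-test=login-button]').click();
  }
};

Tests then talk to loginPage instead of to raw selectors, so a markup change only touches this one file, at the price of an extra layer of indirection.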

For all these reasons, the world-famous testing pyramid by Mike Cohn is still something I religiously follow. End-to-end tests definitely sit at the top.

What I test and how

Considering the previous points and the testing pyramid, I defined some general rules for testing my React-Redux frontend application.

I cover with end-to-end tests only the happy paths of the application: the critical, successful actions a user is allowed to perform (with some exceptions for particular cases).
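
To make this concrete, here is a minimal sketch of what a happy-path spec can look like in Cypress; the route, the data-test selectors and the cy.seedUser command (defined later in this post) are assumptions made for the sake of the example.

describe('login', () => {
  it('lets a registered user sign in and reach the dashboard', () => {
    // seed a known user through the API (custom command shown later)
    cy.seedUser({ email: 'jane@example.com', password: 'secret' });

    cy.visit('/login');
    cy.get('[data-test=email]').type('jane@example.com');
    cy.get('[data-test=password]').type('secret');
    cy.get('[data-test=login-button]').click();

    // only the successful outcome is asserted: this is the happy path
    cy.url().should('include', '/dashboard');
  });
});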

Also, I decided to run real end-to-end tests, going deep into the application stack and involving the API layer.

Mocking the API is not an option for me in this case. If you consider your frontend as “the application under test” and everything else just a dependency you have to deal with, I understand you might take a different position. But with end-to-end testing I meant to get more business value from the resulting test suite: it should demonstrate that the user has a set of features at their disposal and that these features work fine. Mocking the API does guarantee that the frontend works fine, but the integration between frontend and API might still be broken.
I am aware that using the real API layer means slower tests, which impacts build times. I am also aware that the testing environment is harder to set up, but I believe so strongly in the increased business value that I am willing to accept the resulting challenges.

That doesn’t mean you should blindly adopt the ideas above. I believe that writing tests should not be the practice of complying with a set of dogmatic rules. There is no “right” or “wrong” way; there are better and worse solutions, and which is which depends on the case.
Understanding what value a test gives, relative to the cost of writing it, the cost of maintaining it, the needs of the users, the criticality of the feature under test and the existing test coverage, is far more important.

The test runner

While the number of tools available for end-to-end testing is huge, I had the chance to try Cypress at the beginning of this year and it really made a difference.

It’s an open-source test runner that works great with applications like SPAs. It provides a debuggable user interface that significantly shortens debugging time, showing the list of commands executed by your tests while they run, together with a live preview. Cypress has a time-travel feature, built-in live reload, automatic waiting for commands and assertions (that actually works) and everything else you expect from a modern testing framework. You can take a quick look at their features page to learn more.
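
As a small illustration of the automatic waiting, an assertion like the following is retried until it passes or a timeout expires, with no explicit sleeps; the selector is of course a hypothetical one.

// Cypress retries both the query and the assertion until they pass
// or the command times out, so no manual wait/sleep is needed
cy.get('[data-test=todo-list] li')
  .should('have.length', 3)
  .and('contain', 'Write end-to-end tests');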

In my experience, the setup was straightforward. Tests turn out to be pretty reliable when run in headless mode, a little less reliable from the user interface, but that’s expected, since the UI is recommended as a development tool rather than as a way to run all your tests together. Writing tests is easy (it’s also very nice that ES2015 is baked in, without the classic JavaScript-setup nightmare) and at the same time they end up being maintainable, which was not obvious considering how often in testing “easy to write” translates into “a hell to maintain”.

After finding the test runner for my end-to-end tests, I was still only halfway to making them work as I wanted.
Since my target was to run tests in an environment as close as possible to the one provided to the user, I also needed a solution to manage the state between tests.

The “state” issue

One of the principles of effective testing is to always start from a clean state: no test should depend on the previous one. This is good advice if we want independent tests that, in case of failure, do not make other tests fail.

My application depended on some state (data stored in a database), so I had to find a way to prepare and clean up the database on every test execution.
The data is exposed through a REST API by a Rails application. Rails itself has a great set of gems you can use to run end-to-end tests and to prepare and clean state, but I didn’t want to couple the code of my end-to-end tests to the Rails application itself, since my application was not the only frontend served by the API.

I tried to find an alternative solution that would let me manage the state between tests without coupling the tests to the API application, and I found a little gem called Hangar.
When included in your API codebase (the only change needed on the API side), strictly among the gems available only in test mode, Hangar leverages the factories defined with FactoryGirl to expose a new API endpoint for each of them: POST /factory (for completeness, a GET /factory/new endpoint is also exposed, but it wasn’t useful for me).
With the POST endpoint I can easily generate data to be seeded into the database in the prepare/before step of any test. This data is randomly generated by the FactoryGirl factories, but it can easily be overridden by providing the new values in the body of the POST request.

I wrote some small Cypress commands to perform this task, like the following one. Note that Hangar endpoints require a special header.

Cypress.Commands.add('seedUser', (user, traits) =>
  cy.request({
    method: 'POST',
    url: '/api/users',
    headers: {
      'Content-Type': 'application/json',
      Accept: 'application/json; charset=utf-8',
      Factory: 'hangar'
    },
    body: {
      user,
      traits: traits ? traits : [],
    }
  }).its('body')
);
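
In a spec, the command can then be used in the prepare step to seed exactly the data the test needs, overriding the generated values where it matters; the attribute names and the 'confirmed' trait below are hypothetical.

beforeEach(() => {
  // attributes not overridden here are filled in by the FactoryGirl factory
  cy.seedUser({ email: 'jane@example.com', password: 'secret' }, ['confirmed'])
    .as('currentUser');
});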

At that point I was only missing a way to clean the state after each test. Hangar covers this too, exposing another global endpoint, DELETE /, that performs a database clean.
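
A second small command, plus a hook in the support file, takes care of the cleanup. Again a sketch: the exact URL depends on where the API (and Hangar with it) is mounted.

Cypress.Commands.add('cleanDatabase', () =>
  cy.request({
    method: 'DELETE',
    url: '/api/',
    headers: {
      Accept: 'application/json; charset=utf-8'
    }
  })
);

// run before every test so each spec starts from an empty database
beforeEach(() => {
  cy.cleanDatabase();
});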

The setup was complete, and I could finally start writing my end-to-end tests.

Conclusion

This setup works really well. Cypress makes tests easy to write and easy to debug. Seeding and cleaning the state with Hangar is fast and makes tests more reliable, removing any dependency between them.

Unfortunately, there are cases where you are definitely more limited than I was. If the layer providing the state to your application is not under your control, you can rely on mocking API responses (which in my opinion reduces the value of this kind of testing), or you can just skip the seed and clean stages of the application state. Skip the seed and you need to execute many more setup steps through the UI in your tests; skip the clean and you make tests dependent on each other (or you clean through the UI where possible, adding even more steps).
If the API is under your control but it’s not written in Rails, you cannot leverage Hangar, so you miss a ready-to-use solution; still, the fact that this approach works well for me shows that it is feasible in other languages and frameworks.
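
For completeness, this is roughly what the mocking alternative looks like; cy.intercept is available in recent Cypress versions (older releases used cy.server and cy.route), and the route and fixture names are made up for the example.

// stub the API instead of hitting it: faster and independent from the backend,
// but it no longer proves that frontend and API really work together
cy.intercept('GET', '/api/users/me', { fixture: 'user.json' }).as('getUser');

cy.visit('/dashboard');
cy.wait('@getUser');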

This post comes to an end, but I’m really interested in hearing your thoughts on this topic. Feel free to tell me what you think and, if you have alternatives, I’d be thankful if you shared them :-).

If you like this post, subscribe to this publication on Medium to get all the updates and stay tuned. You can also click “Follow” below, next to my account to receive email updates from Medium.
And don’t forget to say “hello” on Twitter, my account is @darioghilardi!