Do your integration tests have to be slow and painful to maintain?

Wiktor Mociun · Planet Arkency · May 30, 2016

It is important for us to make sure all crucial processes of our business work. The most popular way of doing that is with integration tests.

We launch a test server and a test framework (like Selenium). The framework then drives some kind of web browser. Such tests are easy to create and fairly reliable. If they pass, we can be pretty sure everything is fine.

Where is the problem?

But these tests are usually a real pain in the ass. They run on the CI server for ages and often fail randomly (by the way, we have methods for dealing with that).

I noticed that we often want to be sure that every crucial process runs flawlessly, so Selenium tests come as a natural choice. A few months of development pass and you can be sure that those tests run for over 10 minutes.

Over 10 minutes of waiting just to be sure that your backend and frontend connect in the expected way. That sounds bad.

We can test our backend with unit tests, and the same goes for the frontend. The real problem lies in the middle. An integration test has a single purpose: making sure that the backend and the frontend connect in the right way.

Ideal solution

The best solution would be to remove the need for tools like Selenium altogether.

  • No need to run computationally expensive processes.
  • No need to prepare huge app states.
  • No need to take extra care of random failures.

Satisfying these requirements seems like a lot. Let’s see how our solution to that problem works and whether it addresses those points.

Specifications

First, we need to prepare a test case that will be used by both the backend and the frontend tests.

The whole trick is to prepare a set of data that the frontend can use in its tests and the backend can generate. Here’s an example of our API controller output stored as a JSON file:

{ "order": { "items": [ { "id": 1, "name": "Fearless Refactoring", "currency": "EUR", "price": 10000 } ] }

We call these files specifications. They don’t need to be JSON. The data just needs to be formatted in a way that both the backend and the frontend can read.

I used a JSON API endpoint just for the sake of example. In one of our projects, we pass data between our Rails backend and JavaScript frontend using DOM attributes.

<div class="order-application" data-order='{ "order": { "items": [ { "id": 1, "name": "Fearless Refactoring", "currency": "EUR", "price": 10000 } ] } }'></div>

This is the same data as above, just served in a different way. For this kind of data transfer, we can format the specification file like this:

{
  "class": "order-application",
  "data-order": { "order": { "items": [ { "id": 1, "name": "Fearless Refactoring", "currency": "EUR", "price": 10000 } ] } }
}
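
To show how the frontend side might consume such an attribute, here is a minimal sketch. It assumes the element shown above is present on the page; the parsing snippet is illustrative, not code from the original project:

// Read and parse the order data embedded in the DOM attribute.
const container = document.querySelector('.order-application');
const order = JSON.parse(container.dataset.order).order;

console.log(order.items[0].name); // => "Fearless Refactoring"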

What does testing with specifications look like?

It’s simple.

  1. Backend test — check whether your backend outputs the same data as the specification file. For example, you can use a controller test for that.
  2. Frontend test — use the specification file as the source of data for the test case, and check whether everything works as expected in a mocked browser environment (like jsdom). A sketch of such a test follows the note below.

Note: As an alternative to jsdom, you can try PhantomJS.
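
Here is a minimal sketch of what the frontend part could look like, assuming the specification is saved at specifications/order.json (a hypothetical path) and a recent version of jsdom; the markup-building line and the assertions are illustrative, not code from the original project. The backend side would be a matching controller test that compares the rendered attribute against the same file.

// Frontend test sketch: build the markup the backend would render from the
// specification file, load it into jsdom, and assert on what the frontend reads.
const assert = require('assert');
const { JSDOM } = require('jsdom');
const spec = require('./specifications/order.json'); // hypothetical path

const html = `<div class="${spec['class']}" data-order='${JSON.stringify(spec['data-order'])}'></div>`;
const { document } = new JSDOM(html).window;

// The code under test would read the order from the DOM attribute.
const container = document.querySelector('.order-application');
const order = JSON.parse(container.dataset.order).order;

assert.strictEqual(order.items[0].name, 'Fearless Refactoring');
assert.strictEqual(order.items[0].price, 10000);
console.log('Frontend specification test passed');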

That’s it!

You have just created an equivalent of a standard Selenium test without using heavyweight integration testing frameworks.

Let’s get back to our “Ideal solution” points:

  • No need to run computationally expensive processes.

We use simple unit testing frameworks to test the backend and the frontend separately. This is much easier on your CPU than running a whole integration framework.

  • No need to prepare huge app states.

We usually don’t need to render the whole view to test out a single feature. That means we can skip preparing data that would normally be required when running an integration test.

  • No need to take extra care of random failures.

The most common thing that goes wrong with an integration test is a random failure. Something fires too late and your whole build is ruined. As I mentioned earlier, we have some methods for dealing with it, but it is still a frustrating issue.

With specifications, these problems are mostly gone, which is what we expected. However, there is still one issue we need to keep in mind.

Integration tests are not perfect — we need more

This kind of test does not give us a full guarantee. But the same applies to integration testing with frameworks like Selenium. Both emulate our system in an artificial environment, so we have no guarantee it will work on the real one.

We need to be sure that certain features work in production, for example the ordering process. This kind of code is usually vital for the whole business.

My idea is to perform automated tests on the production environment, completely separated from the CI testing infrastructure.
They can even be performed using Selenium. ;)

Specifications are a great layer of protection at the CI build level. They are fast and give you a similar level of protection to classical integration tests. But to be completely sure that everything is fine, we can run some automated checks just after a deploy.
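
As an illustration, such a post-deploy check can be as small as a script that hits a production URL and verifies the response; the URL and the expected marker below are hypothetical:

// Minimal post-deploy smoke check (sketch): request a production page and
// verify that it responds with 200 and contains the expected application root.
const https = require('https');

https.get('https://example.com/orders/new', (res) => {
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => {
    const ok = res.statusCode === 200 && body.includes('order-application');
    console.log(ok ? 'Smoke check passed' : 'Smoke check FAILED');
    process.exit(ok ? 0 : 1);
  });
}).on('error', (err) => {
  console.error('Smoke check FAILED:', err.message);
  process.exit(1);
});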

More on creating apps like a boss

In case you want to learn more about creating complex infrastructures and testing them, we are preparing a series of short videos about using DDD patterns in practice. There are also a lot of materials on testing. You can check it out here.

The library of videos is still growing and we are really happy with this content format.
