Adding observability to your Webdriver.io / Cucumber JS end-to-end tests on the CI

Bernardo Guerreiro
Jul 3

The project I’ve recently been involved in at DAZN aims to enrich the viewer’s experience by providing them with Key Moments in the game, plotted on the scrub bar so users can navigate to them easily. As a result, our end-to-end tests require us to interact with the video player, which comes with a certain level of potential flakiness (due, for example, to the number of network requests necessary to maintain a video stream).

I’ve recently spent some time trying to reduce that flakiness, so that I can confidently enable all of these tests on the CI. It was quite confusing in the beginning, if I’m being honest, because it was often extremely hard to debug what was happening on the CI system, especially since, more often than not, the problem either did not happen consistently or would not happen at all when run locally. Sound familiar?

I’d like to share some of the things I implemented in order to introduce observability to these tests, because I think it’s a pain that a lot of people go through.

First, our testing stack:

- Webdriver.io (with selenium-standalone) as the test runner
- Cucumber JS as the BDD framework
- Chai for assertions
- Allure for reporting

Note that, while our examples are tied to these tools, the principles here are the same and can be applied elsewhere. Other test frameworks, runners and reporters also support the concept of attachments, for example.


Adding Screenshots on Failure

The first, and most obvious, step is to take a screenshot on failure. However, we were only getting screenshots on certain types of failures, due to the interaction between Webdriver.io and Cucumber.

Some of our teams introduced a custom assert that took a screenshot whenever an assertion failed. While that does work, I preferred to keep the flexibility of using the Chai assertion library. Instead, I implemented a Cucumber hook that runs after each scenario execution. If, at any point, the scenario fails (for whatever reason, timeouts included), it takes a screenshot.
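Here’s a minimal sketch of that hook, assuming Cucumber’s After hook and WebdriverIO’s synchronous global browser object:

```js
const { After, Status } = require('cucumber');

// Runs after every scenario, whatever the outcome
After(function (scenario) {
  // Catches assertion failures, timeouts and any other thrown error
  if (scenario.result.status === Status.FAILED) {
    browser.takeScreenshot();
  }
});
```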

This is all you need, as the screenshot will automatically be attached to your Allure report.

Note that sometimes it might be interesting to save screenshots even on success for a while, until you are confident that your tests are not returning false positives, and that they are consistent. It’s important to know what “normal” looks like for your tests too, and screenshots help with that.

Write debuggable code: Try-catch on sensitive parts of the tests

Sometimes, you may begin to notice that tests fail on specific steps, which suggests that some of the logic in those steps is flaky. The problem is that the stack trace provided by either Cucumber or wdio is not always useful or easy to follow.

In general, it’s a good idea to wrap parts like this in a try-catch statement. This allows you to throw a more meaningful error that might be easier to debug.

Let’s suppose, for example, that you are making an API call to your backend in order to sign in your user. If you don’t wrap that call in a try-catch, your test may well continue and fail on a timeout somewhere later, and you won’t have any visibility of what actually happened.
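Here’s an illustrative sketch; the signInUser helper, the endpoint and the use of axios are all hypothetical:

```js
const axios = require('axios');

// Hypothetical helper: the endpoint, payload and token shape are illustrative
const signInUser = async (email, password) => {
  try {
    const response = await axios.post('https://api.example.com/v1/signin', {
      email,
      password,
    });
    return response.data.authToken;
  } catch (error) {
    // Re-throw with context, so the report points at the sign-in call
    // rather than at whichever later step times out waiting for a user
    throw new Error(`Failed to sign in user ${email}: ${error.message}`);
  }
};
```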

Throwing that error allows us to trace back easily to where the problem actually happened in our code. Similarly, for any methods you are using to do things outside of the Webdriver.io commands (which handle their own errors), it might be a good idea to wrap them in a try-catch!

Implement a dedicated logger

As we began to run these tests in parallel, it became obvious that the console.log calls we had in the tests (for debugging purposes) were quickly losing their usefulness, since it was too confusing to follow the output of multiple tests logging in parallel, especially on the CI.

To solve this, I implemented a dedicated logger that I can call instead of console.log. This was quite simple to do, as we don’t need anything very complex for our purposes.
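Here’s a minimal sketch; the function names are illustrative:

```js
// logger.js
const formatLogEntry = (message) =>
  `[${new Date().toISOString()}] ${message}`;

// Returns a logging function bound to a given array of log entries
const makeLogger = (logs) => (message) => {
  const entry = formatLogEntry(message);
  logs.push(entry);

  // Echo to the console when DEBUG=true is passed to the test run
  if (process.env.DEBUG === 'true') {
    console.log(entry);
  }
};

module.exports = { makeLogger };
```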

Additionally, the logger will also console.log the message (with a timestamp in front of it) if we pass a DEBUG=true variable to the test run, in case we want to follow the output from the console (locally, for example).

In another hook, I save this function, as well as the logs themselves, on the browser object, since it has session awareness (it is unique for each browser session, i.e. for each scenario).
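A sketch of that hook, reusing the makeLogger helper from above:

```js
const { Before } = require('cucumber');
const { makeLogger } = require('./logger');

Before(function () {
  // browser is unique per scenario, so parallel runs keep separate logs
  browser.testLogs = [];
  browser.saveLog = makeLogger(browser.testLogs);
});
```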

Now, you simply call browser.saveLog instead of console.log, and it will save a timestamped version of the log in the testLogs object.

Then, I just make sure to attach these logs to the reporter.
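With the Allure reporter, this can be done through its addAttachment API, for example in an After hook (a sketch):

```js
const { After } = require('cucumber');
const allureReporter = require('@wdio/allure-reporter').default;

After(function () {
  // Attach the scenario's timestamped logs to the allure report
  allureReporter.addAttachment(
    'Test logs',
    browser.testLogs.join('\n'),
    'text/plain'
  );
});
```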

Adding Browser Console Logs (Chrome only)

Our end goal with many of these tests is that they are maintainable not just by test engineers, but by developers in general. That’s a big part of why we care so much about debuggability, especially since developers may not be as familiar with Selenium or Webdriver.io.

I think one of the main steps toward that goal is to add the browser’s console logs to the tests. This gives everyone visibility of any errors that might have happened at the application level, which will often be either legitimate bugs or network/library issues that may be impacting test execution. These are problems that are often outside the scope of what Webdriver.io can catch, and they sometimes create hard-to-debug errors that point you in the wrong direction if you only have the Webdriver.io/Cucumber logs. The browser logs will often give you the extra information you need to pinpoint the exact root cause!

As far as I know, this is only possible for Chrome, but having it for Chrome alone is already quite valuable. Attaching these logs is quite simple.
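For example, using WebdriverIO’s getLogs command (which relies on Chrome’s logging endpoint) together with the Allure reporter (a sketch):

```js
const { After } = require('cucumber');
const allureReporter = require('@wdio/allure-reporter').default;

After(function () {
  // 'browser' here is the log type, i.e. the browser console
  const consoleLogs = browser.getLogs('browser');

  allureReporter.addAttachment(
    'Browser console logs',
    JSON.stringify(consoleLogs, null, 2),
    'application/json'
  );
});
```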

Adding Network Request Logs (Chrome only)

Similarly, you may need to go more in-depth at the network level. In our use case, we rely on WebSockets to communicate with our backend services (we mock them in the tests using a mock WebSocket server in order to test the frontend), so inspecting network logs involves not just HTTP requests but also WebSocket frames.

First, we need to install the wdio-devtools-service, and then register it under services in the wdio config.
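In the wdio config, that looks like this:

```js
// wdio.conf.js (excerpt)
exports.config = {
  // ...
  services: ['selenium-standalone', 'devtools'],
  // ...
};
```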

In order to achieve what we want, we need to set up event listeners using the Chrome DevTools protocol to record the things we are interested in. We do that in another Before hook.
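Here’s a sketch of that hook, assuming the event API exposed by the devtools service; the shape of the recorded entries is illustrative:

```js
const { Before } = require('cucumber');

Before(function () {
  browser.networkLogs = [];

  // Enable the Network domain of the Chrome DevTools protocol
  browser.cdp('Network', 'enable');

  // Fired for every HTTP response: URL, status, headers (but not the body)
  browser.on('Network.responseReceived', (params) => {
    browser.networkLogs.push({
      type: 'http',
      url: params.response.url,
      status: params.response.status,
    });
  });

  // Fired for every incoming WebSocket frame, including its payload
  browser.on('Network.webSocketFrameReceived', (params) => {
    browser.networkLogs.push({
      type: 'websocket',
      payload: params.response.payloadData,
    });
  });
});
```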

We first enable the Network domain by using browser.cdp, and then set up event listeners for responseReceived and webSocketFrameReceived. Note that responseReceived doesn’t capture the response body (probably for performance reasons), so you would need to use another protocol method (Network.getResponseBody) to get that. It does, however, already provide a lot of useful information.

Then, we must once again attach this to our reporter.
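Same pattern as before (a sketch):

```js
const { After } = require('cucumber');
const allureReporter = require('@wdio/allure-reporter').default;

After(function () {
  allureReporter.addAttachment(
    'Network logs',
    JSON.stringify(browser.networkLogs, null, 2),
    'application/json'
  );
});
```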


The final result: an Allure report where each scenario carries its screenshot, test logs, browser console logs and network logs as attachments.

In closing, I’d like to emphasise that after implementing all of these, I was able to quickly debug the failures I was getting and work towards fixing them. I think they will be invaluable going forward, when people other than myself begin to write and debug these tests.

If you are writing end-to-end tests for your project, I highly encourage you to start considering the observability of those tests (in addition to the system itself), as this will be a key factor in how useful they actually are. This is a good step toward building confidence in your automation systems.

After all, no one likes flaky tests, but what people truly hate are flaky tests that are too difficult to debug.
