Time those functional tests with Timings API — Part 1

Dwane Debono · Published in TestAutonation · Apr 1, 2022

Introduction

In my earlier post on who owns quality within cross-functional agile teams, I mentioned several factors that we should consider when testing. Performance is a non-functional aspect which testers rarely have time to cover. Satisfying a substantial amount of functional testing with both manual and automated efforts can already be quite challenging. What if I told you that you can add a layer of performance checks with minimal effort? I present to you: Timings.

Last October, I had the opportunity to attend the SeleniumConf testing conference in Berlin. One of the sessions was about including performance assertions in standard functional testing. In this session, Marcel Verkerk introduced Timings, built around the concept of adding performance monitoring to existing functional test suites. These tests can be functional end-to-end browser automation tests as well as API tests. In this post, I will explain how to locally set up the necessary tools to use the Timings library and visualise the performance results of your tests using ELK.

What is Timings?

Timings is an open-source npm module that provides an API for communicating with the W3C Performance API available in all major browsers. The browser's Performance API collects timestamps for the events happening within the browser, allowing you to calculate metrics such as page load time and Time to First Byte (TTFB). Never heard of it? Go to your browser's console and type performance.timing. Through this communication between the Timings API and the W3C Performance API, you can measure and assert the performance of your tests.
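To get a feel for the raw data, here is a small snippet you can paste straight into the browser console. It is only an illustration of the metrics mentioned above, not part of the Timings library itself:

```javascript
// Raw timestamps collected by the browser (milliseconds since the Unix epoch)
const t = performance.timing;

// Time to First Byte: first byte of the response minus the start of navigation
const ttfb = t.responseStart - t.navigationStart;

// Full page load time: end of the 'load' event minus the start of navigation
const pageLoadTime = t.loadEventEnd - t.navigationStart;

console.log(`TTFB: ${ttfb} ms, page load: ${pageLoadTime} ms`);
```

Run it after the page has fully loaded; otherwise loadEventEnd will still be 0.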

ELK who?

The Timings API is capable of transferring results to the ELK stack. ELK stands for Elasticsearch, Logstash, and Kibana. These three open-source tools allow you to transform, store and visualise logs without spending a dime on proprietary tools. This is essential for viewing and comparing values from multiple test runs, especially across different system builds.

In simple terms, this application log management stack can gather logs from multiple sources, transform them for storage in the same database, and then visualise them. There is usually no consistency between logs from different sources; issues such as differing time formats make it hard to amalgamate separate logs. Logstash handles this gathering and transformation, while Elasticsearch stores and indexes the result. Another important point is that logs can contain all sorts of information, not just server uptime or response times. We can extract information such as how many requests were made per day, or how many purchases were completed this month. However, logs are not usually available to the people for whom they would be most valuable. The ELK stack helps by transforming this information and visualising it efficiently through Kibana dashboards.
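To make the storage step a bit more concrete, here is a minimal sketch of normalising a log entry and pushing it into a local Elasticsearch instance over its HTTP API. The index name test-timings is a made-up example, and the default port 9200 is assumed:

```javascript
// Minimal sketch: normalise a timestamp and index the entry into Elasticsearch.
// Assumes Elasticsearch on its default port (9200) and Node 18+ for built-in fetch;
// 'test-timings' is a hypothetical index name.
const entry = {
  '@timestamp': new Date('2018-05-01T14:32:07').toISOString(), // normalised to ISO 8601
  page: 'homepage',
  pageLoadTime: 2350, // milliseconds
};

fetch('http://localhost:9200/test-timings/_doc', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(entry),
})
  .then((res) => res.json())
  .then((body) => console.log('Indexed document:', body._id));
```

In practice, Logstash or the Timings API does this work for you; the point is simply that every entry ends up in one database with one consistent time format.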

Let’s get it running!

Marcel Verkerk has made it very easy for us to set up a local instance of the ELK stack. The primary requirement is an installation of Docker, for which you can also find installation requirements and a description in Marcel's repo readme (https://github.com/Verkurkie/timings-docker). Once you have cloned the repo and created a custom config based on your needs, you are ready to go! Run docker-compose up within the repo, and this will start loading up Elasticsearch, Kibana and the Timings API. Once up, you can check that each container is running by visiting its endpoint in the browser (the readme lists the URL for each container).
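As a quick sanity check, a small Node script like the one below can ping each service. The ports shown are assumptions based on common defaults, so adjust them to whatever your config uses:

```javascript
// Quick health check for the local stack (Node 18+ for built-in fetch).
// The ports are assumptions based on common defaults; check your
// timings-docker config for the actual values.
const services = {
  Elasticsearch: 'http://localhost:9200',
  Kibana: 'http://localhost:5601',
  'Timings API': 'http://localhost:80',
};

for (const [name, url] of Object.entries(services)) {
  fetch(url)
    .then((res) => console.log(`${name}: ${res.ok ? 'up' : `responded with status ${res.status}`}`))
    .catch(() => console.log(`${name}: not reachable`));
}
```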

So, how to use this ‘Timings’ anyway?

The idea is to add a layer of performance measurements on top of an existing functional test suite. This allows testers to safeguard the performance of a system by blocking builds that would degrade the user experience.

As you might expect, getting the performance data alone does not achieve this goal. We also have to set up monitoring dashboards and automated notifications so that this additional information does not get ignored. Therefore, when planning to gather performance results, make sure to also plan who will be responsible for acting on them.

To gather this data, we first need to add some calls to our existing browser tests. With every page load or user action within the test, we need to inject some code that lets us retrieve the performance metrics. Then, depending on whether the measured interaction results in a full page load or a soft page load, a different call (the navtiming or usertiming API call, respectively) needs to be used. If the tests are set up correctly, you end up gathering useful data which, out of the box, shows up on the default Kibana dashboard.
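To give a feel for the flow, below is a rough sketch of the full-page-load (navtiming) case using plain fetch and a WebdriverIO-style browser object. The endpoint paths, payload fields and port are based on my reading of the Timings docs, so treat them as assumptions and verify against the readme; part 2 will cover a proper WebdriverIO integration:

```javascript
// Rough sketch of the navtiming flow for a full page load.
// Assumptions: Timings API on localhost:80, endpoint paths and payload
// fields as described in the Timings docs, and a WebdriverIO-style
// `browser` object available inside the test.
const TIMINGS_API = 'http://localhost:80';

async function checkPageLoad(browser) {
  // 1. Ask the Timings API for the JS snippet that reads the browser's performance data
  let res = await fetch(`${TIMINGS_API}/v2/api/cicd/injectjs`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ injectType: 'navtiming' }),
  });
  const { inject_code } = await res.json();

  // 2. Execute the (URL-encoded) snippet in the browser to collect the metrics
  const perfData = await browser.execute(decodeURIComponent(inject_code));

  // 3. Post the collected data back, together with an SLA to assert against
  res = await fetch(`${TIMINGS_API}/v2/api/cicd/navtiming`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      injectJS: perfData,
      sla: { pageLoadTime: 3000 }, // flag the run if the page takes over 3 s
    }),
  });
  const result = await res.json();

  // 4. Fail the functional test when the performance assertion fails
  if (result.assert === false) {
    throw new Error('Performance SLA was breached!');
  }
}
```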

…to be continued

That’s it for part 1 of my ‘Timings’ post, in which I went through a high-level explanation of how one can use this library together with the browser’s Performance API and an ELK stack. In part 2, I will go into detail on how to implement the Timings calls within the WebdriverIO tests created in my previous post.

Originally published at https://testautonation.com in May 2018.
