Time those functional tests with Timings API — Part 2

Dwane Debono · Published in TestAutonation · 6 min read · Apr 1, 2022

Timings Main Dashboard

Onto Part 2

In my last post, in which I introduced the Timings API and briefly described the ELK stack, we stopped just before applying Timings to our previously created functional tests. In this part, we will modify those tests to start asserting them against some necessary performance checks!

First off, we need to understand the different Timings functions that are available for us to use. One of these is ‘/apitiming’, which we can use to assert the performance of API calls. We will not get into this endpoint since the focus of our post is further up the stack, on the functional browser tests. So let’s move on to explaining the two primary endpoints which we will use in our tests: ‘/navtiming’ and ‘/usertiming’.

Applying it to our tests

We will use the same basic test framework and test scenarios that were used in our previous WebdriverIO posts, extending those scenarios as necessary. We will be using this repo for the code examples in this post. The first action we have to take is to switch to the timings-docker repo and run the docker-compose setup that we discussed in Part 1. To make sure everything is up, open three browser tabs and navigate to each respective service that should be up, i.e. the Timings API, Elasticsearch, and Kibana.

Obviously, these URLs depend on where you are hosting your setup, which you can easily configure through a config file. You can find more info about creating your config file from the timings-docker Github repo.
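To make the config file mentioned above more concrete, here is an illustrative sketch of what a ‘timings.conf.js’ can contain. Note that the field names and values below are assumptions for illustration only, loosely modelled on the client's sample config; check the timings-docker and timings-client-js repos for the authoritative layout.

```javascript
// timings.conf.js - ILLUSTRATIVE SKETCH ONLY. Field names and values are
// placeholders/assumptions; consult the timings-client-js sample config
// in the repo for the authoritative structure.
module.exports.config = {
  PERF_API_URL: 'http://your-timings-host/v2/api/cicd/', // placeholder host
  api_params: {
    sla: { pageLoadTime: 4000 },                   // default SLA, overridable per test
    baseline: { days: 7, perc: 75, padding: 1.2 }, // how the baseline is calculated
    flags: { assertBaseline: true, esCreate: true, debug: false },
    log: { team: 'SAMPLE TEAM', browser: 'chrome' } // placeholder metadata
  }
};
```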

The next step is to install the timings-client npm package and add it to our package.json as a dev dependency. To do this, just run this command within the root of our WebdriverIO tests repo:

npm install --save-dev timings-client-js

We will modify and start monitoring our tests within /test/WebdriverIO_site/navigationalLinksTest.js. To do this, we need to add these two constants to be able to use the API:

const timings = require('timings-client-js');
const perf = new timings.PUtils('timings.conf.js');

Now that we have all the necessary setup, we can proceed to modify each test scenario. In each ‘it’ scenario, we can actually overwrite API parameters such as the expected page load time for that particular test. To do this, add the following line at the beginning of each test:

const perfParams = perf.getApiParams({});

To overwrite the default settings, pass the overrides as an object:

const perfParams = perf.getApiParams( { "sla": {"visualCompleteTime": 2000} } )
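Conceptually, the overrides you pass to ‘getApiParams’ are merged over the defaults loaded from ‘timings.conf.js’. This is not the library's actual implementation, just a minimal sketch of the merge behaviour you can expect:

```javascript
// Conceptual sketch (NOT timings-client-js internals): per-test overrides
// are merged over the defaults that came from timings.conf.js.
const defaults = { sla: { pageLoadTime: 4000, visualCompleteTime: 3000 } };

function mergeParams(base, overrides) {
  const out = JSON.parse(JSON.stringify(base)); // deep copy of the defaults
  for (const key of Object.keys(overrides)) {
    if (typeof overrides[key] === 'object' && overrides[key] !== null) {
      out[key] = Object.assign({}, out[key], overrides[key]); // merge nested object
    } else {
      out[key] = overrides[key]; // primitive: replace outright
    }
  }
  return out;
}

const perfParams = mergeParams(defaults, { sla: { visualCompleteTime: 2000 } });
console.log(perfParams.sla.visualCompleteTime); // 2000 (overridden for this test)
console.log(perfParams.sla.pageLoadTime);       // 4000 (kept from the defaults)
```

Only the SLA value you name is replaced; every other default stays in effect for that test.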

Async/Await

The first significant change we need to make to our existing tests is to convert them to use NodeJS async/await functionality to enforce the test’s synchronous flow. Even though WebdriverIO runs synchronously through the wdio-sync package, enforced with the ‘sync: true’ parameter in wdio.conf.js, some code execution can still have issues. To prevent that, change the declaration of the function as follows:

it('should go to Developer Guide page when choosing Developer Guide link', async function () { ...

This will now enable us to use ‘await’ in front of the methods that we need to execute synchronously.
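To see why this matters outside of WebdriverIO, here is a minimal plain-Node sketch (not from the article's repo) showing that each awaited step only starts after the previous one has finished, even when a later step would resolve faster on its own:

```javascript
// Minimal sketch: `await` forces sequential execution regardless of how
// long each asynchronous step takes.
function step(log, name, delayMs) {
  return new Promise(resolve => setTimeout(() => {
    log.push(name);
    resolve(name);
  }, delayMs));
}

async function run() {
  const log = [];
  await step(log, 'open page', 30);  // finishes first despite the longest delay
  await step(log, 'click link', 10);
  await step(log, 'assert url', 20);
  return log;
}

run().then(log => console.log(log.join(' -> ')));
// open page -> click link -> assert url
```

Without the ‘await’ keywords, all three timers would start at once and the log order would follow the delays instead of the test's intended flow.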

Navtiming vs UserTiming

To proceed with modifying the tests, we need to understand how each scenario is interacting with the site. This will help us know which Timings API call to execute.

The first scenario is “it(‘should go to the Developer Guide page when choosing Developer Guide link’)”. This scenario checks that when a user lands on the Webdriver.io homepage and clicks on the Developer Guide menu item, they arrive on the Developer Guide page. If we open Chrome DevTools and go to the Network tab, we can check whether the link causes a full page load or merely loads data into the already-rendered page. In our case, all links from the top navigation bar cause a full page load. As instructed in the Timings documentation, ‘/navtiming’ should be used for full page loads. So to use this call, we will add the following code:

const injectJs = await perf.getInjectJS('navtiming', 'visual_complete', true);
const injectCode = injectJs.data.inject_code;

To get the performance data from the browser we need to store the response to our browser actions. Therefore, we need to add a constant and chain the actions performed on the browser object such as:

const injectCodeResponse = await browser
  .url('http://webdriver.io/')
  .waitForVisible('[alt="WebdriverIO"]')
  .click('=Developer Guide')
  .isVisible('[id="Developer-Guide"]')
  .execute('window.performance.mark("visual_complete");')
  .execute(decodeURIComponent(injectCode));

We can now continue with the actual functional assertion which we previously defined in our test:

assert.equal(await browser.getUrl(), 'http://webdriver.io/guide.html');

Perfect! Now we can grab the browser response as this has the performance data:

const injectCodeResponseValue = injectCodeResponse.value;

To send the performance data to the API, we need to call the endpoint with the following command:

const navtimingResponse = await perf.navtiming(injectCodeResponseValue, perfParams);

Now that we also have the return object from the ‘perf.navtiming’ call, we can check the assertion against our SLA configuration:

if (navtimingResponse.data) {
  const apiResponse = navtimingResponse.data;
  expect(apiResponse.assert, 'Performance failed! assert field is False \nNavtiming: ' + JSON.stringify(apiResponse.export.perf, null, 2)).to.be.true;
} else {
  console.error('API error: ' + JSON.stringify(navtimingResponse, null, 2));
}

That’s it! This way we can make sure that our test not only asserts its functional assertion but also the performance threshold.

So where should we use /UserTiming?

To show an example of ‘/usertiming’, I will create a new test against the site https://reactjs.org/. I will now structure the tests folder so that there is a folder called WebdriverIO_site and another called ReactJS_site, so that we can distinguish between each target site for our tests. For WebdriverIO to be able to execute tests from a different path, we need to go to the wdio.conf.js file and change the pattern defined under specs to ‘./test/*/*.js’.
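In a standard wdio.conf.js this is just a one-line change to the specs array; the excerpt below shows only the relevant part:

```javascript
// wdio.conf.js (excerpt): widen the spec pattern so WebdriverIO picks up
// tests from both test/WebdriverIO_site/ and test/ReactJS_site/.
exports.config = {
  specs: [
    './test/*/*.js' // matches any spec file one folder deep under test/
  ]
  // ...the rest of the existing configuration stays unchanged
};
```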

Great! Now we can create the new test. This test will consist of another simple navigational check: the user should land on the Getting Started page when they click on the ‘Docs’ link at the top. The few differences in the implementation are easy to spot. First off, the ‘perf.getInjectJS’ call will now have ‘usertiming’ as its first parameter value, while the second parameter can now be set to an empty string. Since the page won’t reload with the action performed by the user, we will set a ‘start’ and ‘stop’ mark to measure the duration of the test. You can have a look below at the full implementation of the test:

const assert = require('assert');
const expect = require('chai').expect;
const timings = require('timings-client-js');
const perf = new timings.PUtils('timings.conf.js');

describe('ReactJS Navigation Links', function () {
  it('should go to the Getting Started page when choosing Docs link', async function () {

    const perfParams = perf.getApiParams({sla: {pageLoadTime: 5000}}); // you can overwrite values from timings.conf.js
    const injectJs = await perf.getInjectJS('usertiming', '', true); // Request inject code from API - `true` = strip querystring
    const injectCode = injectJs.data.inject_code;
    const injectCodeResponse = await browser
      .url('https://reactjs.org/')
      .execute('performance.mark("demo_start");') // Set User Timing "start" mark
      .click('=Docs')
      .isVisible('Getting Started')
      .execute('performance.mark("demo_stop");') // Set User Timing "stop" mark
      .execute(decodeURIComponent(injectCode)); // Inject JS code into browser object

    assert.equal(await browser.getUrl(), 'https://reactjs.org/docs/getting-started.html');

    const injectCodeResponseValue = injectCodeResponse.value; // Grab the browser's response - has the perf data!
    const usertimingResponse = await perf.usertiming(injectCodeResponseValue, perfParams); // Send perf data to API

    if (usertimingResponse.data) {
      const apiResponse = usertimingResponse.data; // Grab the API's response - has the assert field!
      expect(apiResponse.assert, 'Performance failed! assert field is False \nUsertiming: ' + JSON.stringify(apiResponse.export.perf, null, 2)).to.be.true; // Assert the result!
    } else {
      console.error('API error: ' + JSON.stringify(usertimingResponse, null, 2));
    }

  });
});

The Result

Now we can run both specs by running npm run test in the terminal. If the configuration and tests are set up correctly, you will see five passing tests from two specs. For further information on the performance results, the Timings Dashboard found at your Kibana URL will display details on the outcome of all the test runs. Under the Visualize menu in Kibana, you will find loads of helpful visualizations, such as SLA vs Actual and NavTiming metrics line charts, which show trends in your tests over time. These metrics are highly valuable when we need to detect slowness in upcoming versions of the application. Shifting performance testing left like this prevents the deployment of a newer version of the app with performance issues.

Originally published at https://testautonation.com on 08/2018.
