Effectively Reducing the Run Time of Automated UI Tests

A framework-neutral guide to reducing test execution time.

Christian Nissen
7 min read · Mar 14, 2023

Regardless of which test automation framework you use, be it Cypress, Playwright, Selenium or WebDriver.IO, test execution time is an increasingly important factor in software development pipelines.

The “shift left” initiative has put more pressure on tests to deliver fast feedback. While the classical “test after” approach at least allowed for a longer testing period, integrating tests as fast feedback loops within development leaves them with short time windows.

At the same time, high regression test coverage obviously requires a large number of test cases. Companies typically respond with organisational “solutions”: long run times are mitigated by scheduling tests to run at night or on weekends, i.e. outside of office hours. The obvious drawbacks are that results are no longer directly linked to the changes made during office hours and that the feedback loop is slow, which entails time-consuming debugging. So how can the run time actually be reduced, instead of merely shifted at the cost of the disadvantages mentioned above?

There are in fact several ways to reduce the run time of automated regression tests. However, each also has potential constraints that need to be understood.

  1. Parallelisation
  2. Applying dynamic waits
  3. Using positive assertions
  4. Combining test cases
  5. Avoiding browser tear down
  6. Choice of browser and configuration
  7. Usage of back doors
  8. Re-using stored sessions
  9. Configuring a failure threshold

Considering the typical architecture of an automated test case (setup, execution, assertion, tear down), the potential improvements above can each be allocated to a respective phase.

Note: Parallelisation does not reduce the execution time of one single test case, but affects the total time when running multiple test cases.

Parallelising test case execution

Running all tests in parallel reduces the total test execution time to the run time of the longest test in the suite. However, this significant reduction comes at a high cost: each test run in parallel requires its own browser session, so running 200 tests in parallel means running 200 browser sessions in parallel. The necessary resources should not be underestimated. As they are not continuously needed, consider allocating them temporarily to avoid paying for environments that are idle most of the time. If you cannot provide the necessary infrastructure on-premise, there are plenty of service providers to choose from.
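
As a minimal sketch, this is how the degree of parallelism is configured in Playwright (the worker count of 8 is an arbitrary assumption; scale it to your infrastructure):

```ts
// playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // Run up to 8 test files in parallel; each worker launches its own browser.
  workers: 8,
  // Also distribute tests within a single file across the workers.
  fullyParallel: true,
});
```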

Using dynamic instead of static waits

Static waits are bad practice and should always be avoided. But if you go through your test code now, I am sure you will find some anyway. By replacing static waits with dynamic waits, the run time of tests that depend on slow components can be reduced. If, for example, you have introduced a static wait of 3 seconds to give an external asynchronous component time to appear before continuing a test, replace it with a dynamic wait that continuously checks for the component to be present in the DOM. If you have implemented such a static wait in a page object, the resulting reduction in execution time can be quite significant. If you ask yourself why the static wait was implemented at all, the explanation is simple: implementing a static wait is much easier and “faster” than implementing a dynamic wait. But the result is worth the effort.
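
In Playwright, for example, the replacement can look like this (the selector is a made-up example):

```ts
// Before: a static wait always pauses for the full 3 seconds.
await page.waitForTimeout(3000);

// After: a dynamic wait returns as soon as the element is visible,
// and only ever waits the full 3 seconds in the failure case.
await page.locator('#async-widget').waitFor({ state: 'visible', timeout: 3000 });
```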

Using positive assertions

Using positive assertions instead of negative assertions will also reduce test execution time significantly. Imagine you are automating a test case which verifies that the last item can be removed from the cart in an e-commerce store. You will want to assert that no products are displayed in the cart. If you do exactly that and assert that a certain element is not present, the underlying test automation framework will most likely run into a timeout while waiting for the element to appear. Instead, try asserting the presence of an element that is shown in the empty cart. This way the timeout is avoided and the test case executes faster.
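
Selenium's implicit waits illustrate the effect well; a sketch using selenium-webdriver for Node, with assumed selectors:

```ts
import { Builder, By, until } from 'selenium-webdriver';

const driver = await new Builder().forBrowser('chrome').build();
// With an implicit wait, every element lookup polls until the timeout expires.
await driver.manage().setTimeouts({ implicit: 3000 });

// Negative assertion: '.cart-item' never appears, so findElements polls
// for the full 3 seconds before returning an empty list, on every green run.
const items = await driver.findElements(By.css('.cart-item'));
console.assert(items.length === 0, 'cart should be empty');

// Positive assertion: the empty-cart message is present, so the lookup
// returns as soon as the element is found, typically within milliseconds.
await driver.wait(until.elementLocated(By.css('.empty-cart-message')), 3000);
```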

Combining test cases

Combining test cases in test scripts has a positive effect on the test run time for several reasons. First of all, the time needed to initialise the browser is quite significant; by combining two test cases in one test script, this overhead is halved per test case. The same holds for all setup-related tasks and for the subsequent tear down actions. Furthermore, any preconditions shared by several test cases only need to be fulfilled once.

All this comes at the cost of granularity. A test script that fails to add a product to the cart cannot continue and remove that product from the cart in the subsequent step. Thus, a failure early in a combined script leaves the remaining test cases unexecuted and without results.
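
A sketch in Playwright, with hypothetical URLs and selectors, shows both the saving and the trade-off:

```ts
import { test, expect } from '@playwright/test';

// One browser session, one setup and one tear down cover both scenarios.
test('add a product to the cart, then remove it', async ({ page }) => {
  await page.goto('https://shop.example.com/product/42');

  // Scenario 1: add to cart.
  await page.getByRole('button', { name: 'Add to cart' }).click();
  await expect(page.locator('.cart-count')).toHaveText('1');

  // Scenario 2: remove from cart. Note the lost granularity: if scenario 1
  // fails, this part never runs and produces no result of its own.
  await page.goto('https://shop.example.com/cart');
  await page.getByRole('button', { name: 'Remove' }).click();
  await expect(page.locator('.empty-cart-message')).toBeVisible();
});
```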

Keep in mind that combining test cases also reduces the potential gain from parallelisation: the total time of a parallel run is bounded by the longest test script, and combined scripts run longer.

Avoiding browser tear down

As mentioned in the previous section, the time needed to initialise a browser cannot be ignored. By keeping the browser open and executing multiple test cases in sequence, the overall execution time can be reduced significantly. To fulfill the best practice of having independent test cases, the browser’s session should be reset in between test cases. The advantage over combining test cases is the maintained granularity: each test case remains independently executable and delivers its own result, without relying on the successful execution of a preceding test.
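
A minimal sketch using the Playwright library with Jest-style hooks (Playwright's own test runner applies this pattern per worker out of the box):

```ts
import { chromium, Browser, Page } from 'playwright';

let browser: Browser;
let page: Page;

// Launch the browser process once for the whole suite.
beforeAll(async () => { browser = await chromium.launch(); });
afterAll(async () => { await browser.close(); });

// Give every test a fresh context (cookies, storage, cache), which is
// far cheaper to create than a new browser process, then discard it.
beforeEach(async () => {
  const context = await browser.newContext();
  page = await context.newPage();
});
afterEach(async () => { await page.context().close(); });
```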

Choice of browser and configuration

Browser initialisation time, as well as general speed, currently differs quite a bit between the available browsers, such as Chrome, Firefox, Safari and Edge. Thus the choice of browser used for testing also has a notable effect on the test execution time. In addition, options such as headless mode further reduce initialisation time. But keep in mind that some options also affect the browser’s behaviour, which might have an impact on the test result.
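
In Playwright, for instance, headless mode is a single launch option:

```ts
import { chromium } from 'playwright';

// Headless mode skips rendering the UI and typically starts faster.
// Behaviour can differ from headed mode (viewport, GPU features, etc.),
// so verify critical tests in headed mode as well.
const browser = await chromium.launch({ headless: true });
```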

[Browser speed comparison chart. Source: https://www.cloudwards.net/fastest-browser/]

This comparison is based on the current versions of the named browsers, which are continuously developed and improved. To gain appropriate insight, browser initialisation time should be compared periodically.

Note: The choice of browser will obviously also depend on your cross-browser testing strategy.

Usage of back doors to fulfill preconditions

The usage of back doors to fulfill preconditions circumvents the browser altogether. Back doors, i.e. directly accessing APIs provided by the backend, allow executing the actions necessary to set up a test case. If, for example, you want to test removing a product from a cart in an e-commerce application, you need a pre-filled cart as a precondition. By calling the respective backend endpoint, the cart can be filled without having to browse to a product page and click the ‘add to cart’ button. The API will in most cases respond within a few hundred milliseconds, whereas browsing to a page and interacting with it will most likely take a couple of seconds.

In some cases it is not even necessary to execute an additional step. Instead, the cart page can be opened with a set of parameters defining the products to be added. The result is the displayed cart, filled with the given products, ready to be tested. In this case an existing functionality of the application is used to fulfill the required preconditions, making it a shortcut rather than a back door.
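
A sketch in Playwright; the endpoint, payload and selectors are assumptions about a hypothetical shop:

```ts
import { test, expect } from '@playwright/test';

test('remove a product from a pre-filled cart', async ({ page }) => {
  // Back door: fill the cart via the backend API instead of clicking
  // through product pages. page.request shares the page's cookies.
  await page.request.post('https://shop.example.com/api/cart/items', {
    data: { productId: 42, quantity: 1 },
  });

  // The UI test starts with the precondition already fulfilled.
  await page.goto('https://shop.example.com/cart');
  await page.getByRole('button', { name: 'Remove' }).click();
  await expect(page.locator('.empty-cart-message')).toBeVisible();
});
```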

Re-using stored sessions

If several test cases require the same set of preconditions, the result of fulfilling them once can be stored and reused. For example, many tests typically require an initial login. Instead of executing the login within the “arrange” phase of every test, a previously stored session containing a successfully executed login can be reused. Because the session is stored within the browser, this strategy is independent of the test framework being used, as long as the browser’s session storage can be accessed. Following this strategy, merely the first test case requiring a login actually performs it; each subsequent test case re-uses the stored session, thereby reducing the execution time.
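
Playwright, for example, supports this directly via storage state; a sketch with hypothetical URLs and credentials:

```ts
import { test as setup, expect } from '@playwright/test';

// Runs once before the suite: log in and persist cookies + local storage.
setup('authenticate', async ({ page }) => {
  await page.goto('https://shop.example.com/login');
  await page.getByLabel('Email').fill('user@example.com');
  await page.getByLabel('Password').fill('secret');
  await page.getByRole('button', { name: 'Log in' }).click();
  await expect(page.getByText('Welcome')).toBeVisible();
  await page.context().storageState({ path: 'auth.json' });
});

// In playwright.config.ts, every test then starts already logged in:
//   use: { storageState: 'auth.json' }
```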

Configuring a failure threshold

Thanks to my colleague Tobias Erben for pointing out that setting a failure threshold also improves feedback speed immensely in case of a major failure. By configuring a relatively low threshold, the test run is aborted once a certain number of test cases have failed and executing further tests would not provide any additional value. Moreover, this has a positive impact on resource consumption by avoiding superfluous use of processing power.
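
In Playwright this is a one-line config option (the threshold of 10 is an arbitrary assumption):

```ts
// playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // Abort the entire run after 10 test failures; remaining tests are skipped.
  maxFailures: 10,
});
```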

Side note: Although the presented approaches to accelerating test run time are framework neutral, the framework itself also has a significant impact. However, the choice of framework is influenced by many more factors. Existing knowledge, available support, required infrastructure, etc. are equally important, making execution speed only one of many points to take into account when choosing the “right” framework.

Wrap up

There are many possibilities to significantly reduce UI regression test execution time. However, each comes at a cost which should be weighed before blindly implementing it. Finding the right combination and applying it with thought can in fact reduce the run time of an extensive test suite from a few hours to a few minutes, thus allowing a full regression test to be integrated as a fast feedback loop into a continuous testing strategy. That said, keep in mind that actually reducing the number of tests being executed reduces the overall run time most drastically of all. So, even if it sounds hard, question the value of each test in your extensive test suites.


Christian Nissen

Born and raised in beautiful Düsseldorf, grew up with a C64 and studied Computer Science and Communication Engineering, where he discovered a passion for QA.