Executing UI tests can be an expensive exercise: not only do they take more time to create and maintain, they also take longer to run. When I discovered the possibility of executing tests in parallel, I wanted to know all about it. Running tests in parallel, combined with the ability to run tests remotely, delivers a noticeable improvement in the duration of an automated test run. In this post, I’ll share what I’ve learned and the results we’ve seen.
In the Content Platform team at ASOS, UI and visual tests used to run sequentially in every project. Despite being able to trigger multiple TeamCity builds in parallel during a release, the time it took to run our automated tests to validate a change was still high. Looking at our Sauce Labs account, we noticed that we weren’t making good use of the maximum number of concurrent browser sessions in our subscription. Aiming to minimise the time it took a release to complete and to maximise the use of our Sauce Labs subscription, we started to look at the different options available to us.
NUnit 3 introduced the capability to execute tests in parallel. That, combined with SpecFlow’s dependency injection, is the approach we ended up implementing in our test projects. We were already using both NUnit and SpecFlow, so implementing parallelism was relatively straightforward.
Before you can start running tests in parallel using SpecFlow and NUnit, you need a .NET project containing UI tests written in BDD format. Your project will need to reference the following libraries:
· NUnit 3
· SpecFlow
· Selenium WebDriver
The structure of the solution that I’ll be using in this post is shown below.
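The original screenshot of the solution is no longer available; based on the files referenced throughout this post, the layout is roughly the following (folder names are illustrative):

```
UiTests.sln
└── UiTests/
    ├── Features/                 # .feature files (Gherkin scenarios)
    ├── Hooks/
    │   ├── DriverSetup.cs        # registers the browser instance
    │   └── Hooks.cs              # scenario setup and teardown
    ├── PageObjects/              # Page Object classes
    ├── StepDefinitions/
    │   └── TestStepDefinition.cs
    └── Properties/
        └── AssemblyInfo.cs       # parallelism attributes
```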
Registering the browser instance
The first step we need to take is to register our browser instance, using SpecFlow’s default IObjectContainer. To obtain an instance of the IObjectContainer, we add a constructor to our DriverSetup.cs hook, passing the IObjectContainer type as a parameter. In the BeforeScenario hook we then register our browser in the instance of the IObjectContainer we received through our constructor.
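The original code image is missing, but a minimal sketch of the hook might look like this, assuming a local ChromeDriver for simplicity (the original tests ran against Sauce Labs):

```csharp
using BoDi;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using TechTalk.SpecFlow;

[Binding]
public class DriverSetup
{
    private readonly IObjectContainer _objectContainer;

    // SpecFlow passes its scenario-scoped container to the hook's constructor.
    public DriverSetup(IObjectContainer objectContainer)
    {
        _objectContainer = objectContainer;
    }

    [BeforeScenario]
    public void RegisterDriver()
    {
        // Each scenario gets its own browser instance, so parallel
        // scenarios never share browser state.
        IWebDriver driver = new ChromeDriver();
        _objectContainer.RegisterInstanceAs(driver);
    }
}
```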
Now that our browser instance is registered, we need to make use of it. A good place to start is our Hooks.cs class. To get our registered browser instance, we create a constructor with a parameter of the IWebDriver type. The browser instance injected into our hook class is then used to finalise our test in the AfterScenario hook.
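With the original screenshot missing, a sketch of such a hook class might look like this:

```csharp
using OpenQA.Selenium;
using TechTalk.SpecFlow;

[Binding]
public class Hooks
{
    private readonly IWebDriver _driver;

    // SpecFlow resolves IWebDriver from the instance we registered
    // in the BeforeScenario hook.
    public Hooks(IWebDriver driver)
    {
        _driver = driver;
    }

    [AfterScenario]
    public void CleanUp()
    {
        // Close the browser belonging to this scenario.
        _driver.Quit();
    }
}
```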
Up to this point we have a solution that initialises and finalises a browser but does not yet execute the tests in our solution.
We now need to turn our attention to our step definitions. In a similar fashion to the previous steps, we add a constructor to our TestStepDefinition.cs class, passing the IWebDriver type as a parameter.
SpecFlow injects the browser instance into our step definition class. From here we can pass the browser instance to our Page Object classes when initialising them to interact with a site.
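The original step definition code is not reproduced here; a minimal sketch might look like the following, where HomePage is an illustrative Page Object class rather than one from the original project:

```csharp
using OpenQA.Selenium;
using TechTalk.SpecFlow;

[Binding]
public class TestStepDefinition
{
    private readonly IWebDriver _driver;

    // SpecFlow injects the browser registered for the current scenario.
    public TestStepDefinition(IWebDriver driver)
    {
        _driver = driver;
    }

    [Given(@"I am on the home page")]
    public void GivenIAmOnTheHomePage()
    {
        // Pass the injected browser instance into a Page Object.
        var homePage = new HomePage(_driver);
        homePage.Open();
    }
}
```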
ScenarioContext / FeatureContext injection
There are times when we need to share data between step definitions, which is where ScenarioContext or FeatureContext comes in. During a parallel test execution we must avoid the static ScenarioContext.Current accessor; instead, we inject the current scenario context into our step definition.
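A sketch of constructor-injecting the scenario context (the step and key names are illustrative):

```csharp
using TechTalk.SpecFlow;

[Binding]
public class SharedDataSteps
{
    private readonly ScenarioContext _scenarioContext;

    // Inject the ScenarioContext for the current scenario instead of
    // using the static ScenarioContext.Current accessor, which is not
    // safe during parallel execution.
    public SharedDataSteps(ScenarioContext scenarioContext)
    {
        _scenarioContext = scenarioContext;
    }

    [When(@"I save the order number")]
    public void WhenISaveTheOrderNumber()
    {
        // Store a value for later steps in the same scenario.
        _scenarioContext["OrderNumber"] = "12345"; // illustrative value
    }
}
```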
The work we have done so far ensures there’s no interference between tests once we enable parallelism. At this stage our tests should still execute successfully in sequence.
Enabling parallel test execution
NUnit is the tool that gives us the ability to run tests in parallel. To enable parallelism in our project, we need to add the following line to the AssemblyInfo.cs file in our project.
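The original code line is missing from this post; NUnit’s assembly-level attribute for this looks like:

```csharp
using NUnit.Framework;

// Allow test fixtures in this assembly to run in parallel with one another.
[assembly: Parallelizable(ParallelScope.Fixtures)]
```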
The ParallelScope enumeration specifies how tests run in parallel. In this example I specified that I want fixtures (feature files) to run in parallel. If you want to find out more about the different ParallelScope values, see the NUnit documentation.
By default, NUnit uses the machine’s processor count as the number of worker threads that run in parallel. To run a different number of threads, we add a LevelOfParallelism attribute to our AssemblyInfo.cs file; in our case we specified that we want five threads to run at the same time.
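The attribute, with the thread count the post describes, looks like this:

```csharp
using NUnit.Framework;

// Run up to five worker threads at the same time.
[assembly: LevelOfParallelism(5)]
```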
Having implemented this approach we’ve seen the duration of our test run reduce by two-thirds. Before this change was implemented, a project containing 45 tests would take 12 minutes. After the change, the project would take just under four minutes to complete. We expect an even faster execution time once we start making use of the new data centre that Sauce Labs recently launched in the EU.
At ASOS the benefit of this change translates into a smaller release window that will reduce the downtime for our content editors.
By implementing this change, you might notice an increase in the execution time of each individual test, but the execution of the entire test suite will be shorter. Make sure you experiment with the number of threads to find a level of parallelism that works for your project and your team.
The results you get will vary depending on what your tests do and how they are organised. I recommend creating atomic tests and not overloading feature files with scenarios.
You can find the solution that I used for this article on GitHub.
Johan Escobar is a QA Engineer at ASOS working in the Content Platform team (CPT). His interests include pasta-making, hiking and most recently baking.