7 tips to speed up your WebDriver tests
When you follow Continuous Integration, tests provide feedback on whether the set of changes you are integrating breaks something. A fast test pipeline helps identify bugs early, and in turn those bugs can be fixed sooner.
You generally have a test pipeline, often split into multiple stages like unit, integration and UI/functional tests. In a typical web application test pipeline, most of the time is spent running the UI/functional tests. These tests start the entire application and interact with it in the browser. They are often written using WebDriver with wrappers specific to the language they are written in. These tests give more confidence about the state of the application, and because of their usefulness they grow in number. But they are generally slower to run, which makes them the biggest contributor to the overall time taken.
I have worked on multiple web applications with a huge number of UI/functional tests and have spent a lot of time optimising test runtime. This post is an attempt to list some steps I’ve found useful in my experience. The last application I worked on used Rails, Capybara, Poltergeist and RSpec, so the examples are in that context. But the concepts apply to any other language or platform that uses WebDriver.
1. Profile your tests
I have often seen people profiling their production code to find bottlenecks. So why not profile the test code?! This is the first step I would recommend doing.
This gives an idea of where most of the time is spent. You can then focus on the areas that give big benefits for low effort.
The results might surprise you when time is spent in areas you least expect. For example, in one of the recent projects I worked on, most of the time was spent loading the first page and signing the user in. Or profiling can validate your hypothesis about where time goes. For example, we guessed that most of the time was spent setting up test data, and the profiler data supported that hypothesis.
Following is an example of profiling an RSpec test suite. It uses the ruby-prof gem to do the profiling (a minimal sketch; the output directory and report format here are illustrative):
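```ruby
# spec/support/profiling.rb
require 'ruby-prof'
require 'fileutils'

RSpec.configure do |config|
  config.around(:each) do |example|
    # Profile the body of each test.
    result = RubyProf.profile { example.run }

    # Write one flat report per test so the slowest tests can be compared.
    FileUtils.mkdir_p('tmp/profiles')
    name = example.full_description.gsub(/[^\w]+/, '_')
    File.open("tmp/profiles/#{name}.txt", 'w') do |file|
      RubyProf::FlatPrinter.new(result).print(file)
    end
  end
end
```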
It starts the profiler before each test and stores the result after the test finishes.
2. Run in a headless browser like PhantomJS
Headless browsers are browser simulation programs that don’t have a GUI. Running tests in a headless browser is faster than running them in a real browser. Also, when you run the tests on CI tools, you don’t need to install a real browser there. Keep in mind that running tests in a headless browser can occasionally produce false negatives or false positives, but this is rare in practice and the benefit of faster test runs usually outweighs that concern.
A few such drivers I have used with Capybara are capybara-webkit and poltergeist.
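As a rough sketch, registering poltergeist as Capybara’s JavaScript driver looks like this (PhantomJS needs to be installed separately):

```ruby
# Gemfile: gem 'poltergeist'
require 'capybara/poltergeist'

Capybara.register_driver :poltergeist do |app|
  Capybara::Poltergeist::Driver.new(app)
end

# Run JavaScript-enabled tests in the headless browser.
Capybara.javascript_driver = :poltergeist
```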
3. Wait, don’t sleep
This is something I have seen most often across test suites. Following is an example of this anti-pattern (the selectors and timings below are illustrative):
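```ruby
before(:each) do
  visit '/'
  click_link 'Sign in'
  sleep 5   # hope the login popup has appeared by now
  fill_in 'Email', with: 'user@example.com'
  fill_in 'Password', with: 'secret'
  click_button 'Log in'
  sleep 10  # hope the login has completed by now
end
```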
The first sleep statement waits for the login popup to appear, and the second waits for the login to complete. The problem with such sleep statements is that, irrespective of how fast your code is, the login will always take a minimum of 15 seconds.
The problem becomes even bigger when such sleep statements live in before or after blocks, like in the example above, where every test requires logging in to the application.
A better approach is to wait on an assertion after the action that causes some change in the page content. Such a wait polls until the given assertion is satisfied, and gives up after a timeout by which the assertion should have been satisfied.
In this case, there can be a wait for the login popup to appear, and a wait for the popup to disappear and the username to appear in the header. Luckily, Capybara’s matchers automatically deal with such waiting, as mentioned in their documentation.
The above example can be improved as follows:
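```ruby
before(:each) do
  visit '/'
  click_link 'Sign in'
  # have_css polls until the popup appears, up to Capybara.default_max_wait_time.
  expect(page).to have_css('#login-popup')
  fill_in 'Email', with: 'user@example.com'
  fill_in 'Password', with: 'secret'
  click_button 'Log in'
  # Wait for the popup to disappear and the username to show up in the header.
  expect(page).to have_no_css('#login-popup')
  expect(page).to have_css('header .username', text: 'user@example.com')
end
```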
One of the most common reasons for adding sleep statements is a flaky test: a sleep makes the test pass consistently, and you don’t have to spend a lot of time finding the reason for the failure. But the problem is that you don’t know how much sleep is enough, so you tend to set it higher than required. You also never fix the root cause of the flakiness.
4. Monitor network requests
When profiling the test suite in my last project, I noticed a lot of time was spent loading the home page, which made me curious about which network requests were taking a long time.
Then I found that the poltergeist driver has a network_traffic method which reports the network requests made and the time each took to load. This is a good post explaining how to use this feature.
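For illustration, a minimal sketch of dumping the traffic after a page load (the exact attributes available on each request vary by poltergeist version):

```ruby
visit '/'

# Each entry is a request the page made, along with its response parts.
page.driver.network_traffic.each do |request|
  puts "#{request.url} (#{request.response_parts.size} response parts)"
end
```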
One major issue I found with the help of this technique was that duplicate Ajax requests were being made to the server, which slowed down the page load. This was due to the way the HTTP requests were made from AngularJS, and fixing the duplicate requests improved the page load time and hence the test runtime.
This technique was further useful to identify which requests were not required in the test environment. For example, we did not need to load Google Analytics or Airbrake in tests. Or, we didn’t need images to be loaded from the CDN.
5. Don’t load images
This will help reduce page load time if your application has large images. Since the tests run in a headless browser and WebDriver tests typically assert only on the image URL, there is no need to actually load the images.
Following is an example of disabling images in Capybara using poltergeist, done here by passing a PhantomJS command-line flag through the driver:
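```ruby
Capybara.register_driver :poltergeist do |app|
  # Pass PhantomJS's --load-images flag through poltergeist.
  Capybara::Poltergeist::Driver.new(app, phantomjs_options: ['--load-images=no'])
end
```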
6. Blacklist domains
You can blacklist domains in poltergeist so that any request made to those hosts returns a 200 with an empty response. This is useful when you don’t want certain functionality in your tests, like Google Analytics or Airbrake integration.
This is a good post explaining the feature.
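A sketch of how the option can be passed to the driver (the hosts listed are just examples):

```ruby
Capybara.register_driver :poltergeist do |app|
  # Requests to blacklisted URLs are intercepted by the driver.
  Capybara::Poltergeist::Driver.new(app,
    url_blacklist: ['http://www.google-analytics.com', 'http://api.airbrake.io'])
end
```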
Note: I didn’t compare the test speed after blacklisting, but I noticed that the number of network requests per page went down.
7. Focus on user journeys, not pages
The TestPyramid concept advises keeping UI/functional tests to a minimum and having more tests at the unit and service level. This is achievable if you treat your UI/functional tests as a collection of user journeys: don’t test each scenario exhaustively, but test the main user flows instead.
For example, if you are testing a user registration form, you might be tempted to test all validation checks, like whether the first/last name is provided, whether the phone number is in the correct format, etc. It is enough to have one UI-level test which checks that all validation errors are shown when an empty form is submitted, accompanied by multiple unit tests which check each individual validation, as sketched below.
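A hypothetical model spec covering one such validation at the unit level:

```ruby
RSpec.describe User, type: :model do
  it 'requires a first name' do
    user = User.new(first_name: nil)

    expect(user).not_to be_valid
    expect(user.errors[:first_name]).to be_present
  end
end
```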
Delete all the duplicated high-level tests which are already covered at lower levels.
Another useful check is to make sure no JavaScript errors are raised while you exercise the user journeys.
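With poltergeist, this comes for free when the js_errors option is enabled (it is on by default):

```ruby
Capybara.register_driver :poltergeist do |app|
  # Any JavaScript error on the page raises Capybara::Poltergeist::JavascriptError,
  # failing the test that triggered it.
  Capybara::Poltergeist::Driver.new(app, js_errors: true)
end
```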