Our approach to ‘Automated’ regression testing

Comic Relief Engineering
Comic Relief Technology
Aug 21, 2017

Recently we reviewed our approach to automated regression testing for one of our products here at Comic Relief. In this post I will discuss our new approach to automated regression testing for the Giving pages project.

What is Regression testing?

Regression testing is the process of verifying that software still behaves correctly after changes have been introduced. It ensures that new functionality, modifications or bug fixes have not adversely affected behaviour that was previously tested.

Regression testing can be carried out manually or with automation tools. Manual regression testing is always challenging in Agile projects, where the team's focus is on delivering working software frequently.

Automated Regression testing

Automated regression testing is essential in Agile software projects. Automated regression tests can be part of Continuous Integration (CI) and Continuous Deployment (CD) pipelines and can be run at any time, bundled with the build, or run against any test stack. It is also important to understand what to automate and what not to automate in a project; a good approach is to choose the most repeatable tests and the happy paths.

Regression testing for Giving pages

For the Giving pages project we use CI and CD pipelines, so it is essential to have stable automated regression testing in place. Our automated regression tests are built using Behat and Mink (PHP) and are integrated into the CI pipeline, where they are triggered automatically for every commit. We also have scheduled jobs that run the tests overnight.
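To make that concrete, here is a rough sketch of what a Concourse pipeline with a per-commit job and a nightly job can look like. The resource names, repository URI and task file below are placeholders for illustration, not our actual configuration.

```yaml
# Hypothetical Concourse pipeline: one job triggered by commits, one by a nightly timer
resources:
  - name: giving-pages-repo
    type: git
    source:
      uri: https://github.com/example/giving-pages.git   # placeholder URI
      branch: master

  - name: nightly
    type: time
    source:
      start: "1:00 AM"
      stop: "2:00 AM"

jobs:
  - name: commit-regression
    plan:
      - get: giving-pages-repo
        trigger: true                                     # runs for every commit
      - task: run-behat
        file: giving-pages-repo/ci/run-behat.yml          # hypothetical task running the Behat suite

  - name: nightly-regression
    plan:
      - get: nightly
        trigger: true                                     # runs on the overnight timer
      - get: giving-pages-repo
      - task: run-behat
        file: giving-pages-repo/ci/run-behat.yml
```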

Our current automated regression tests for Giving pages contain happy path scenarios for the different registration journeys, giving pages and sponsorship journeys, plus field validation and copy checks. This causes a number of issues, such as false test failures, tests failing because of third party systems (for example payment sandbox problems) and unnecessary regression tests running for changes in unrelated codebases, all of which delay product releases.

So we decided to change our approach and refactor the current regression suites to improve the release process.

The New Approach

As a first step we came up with the following questions:

1. What is the aim of our regression tests?

  • Existing customer journeys work as expected, i.e. happy path testing.
  • Acceptance criteria are met for all features.
  • Core features work as expected in different browsers.

2. What is the minimum we can test to achieve this?

We started by listing all the possible things that we could test:

  • Feature functionality (acceptance tests)
  • Happy path testing
  • Negative tests for journeys
  • Editing any default data
  • Emails being triggered
  • All copy being correct on all pages
  • Correct validation messages
  • Multi-browser tests
  • Correct copy in emails
  • Visual regression
  • Page performance
  • Data being saved to the database
  • Third party tools
  • Message queues

We use a CI pipeline for this project, and the automated tests integrated into it run for every commit. We have decided to create two different suites: one that runs for every commit and a second that is scheduled to run nightly.

The aim of these two suites should be slightly different. The nightly tests can be much more comprehensive, but the suite that runs every commit does not need to cover everything. Otherwise we risk false test failures regularly halting deployment. As a result, we moved the tests that most often cause false test failures to the nightly regression suite only, whilst the build suite will only test the important functionality.

Therefore, the build regression suite consists of happy path tests such as completing registration and sponsorship journeys with both default and random form data, verifying giving pages, and checking that the corresponding emails are received.

The nightly regression suite consists of the build regression suite plus tests that verify the copy (which differs between journeys, so changing it in one journey might affect another), tests for field validation error messages, and runs on multiple browsers.
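One way to express this split, sketched below, is with Behat suites and tag filters in behat.yml. The suite names, tags, paths and staging URL here are illustrative assumptions rather than our exact configuration.

```yaml
# behat.yml — hypothetical sketch of a build suite and a nightly suite
default:
  extensions:
    Behat\MinkExtension:
      base_url: https://staging.example.org            # placeholder staging URL
  suites:
    build:
      paths: [ '%paths.base%/features' ]
      filters:
        tags: '@happy-path'                            # registration, sponsorship, giving pages, emails
    nightly:
      paths: [ '%paths.base%/features' ]
      filters:
        tags: '@happy-path,@copy,@validation'          # build scenarios plus copy and validation checks
```

The CI pipeline can then call something like vendor/bin/behat --suite=build on every commit and --suite=nightly on the scheduled job, with the multi-browser runs attached only to the nightly job.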

3. How do we embed this into our development process?

It is essential to embed automation in the development process when working on Agile projects. The first step is to write acceptance criteria for every story that can be used as happy path tests for the automation, and to create a subtask on stories wherever possible for writing the automated tests.

We also keep the staging environment as close as possible to production, using feature toggles in staging so that it matches production. If staging does not match production then we are not testing what we actually want to put live.

We have different codebases for different parts of the product, such as the registration journeys in React and the giving pages in legacy code, so we thought it would be a good idea to split the tests by codebase and run only the corresponding regression suite for changes in that codebase. This means we aren't running unnecessary tests every time we try to release something.
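Below is a sketch of how this could look in Concourse, with separate git resources that only watch the paths belonging to each codebase, so that a commit to the React registration journeys triggers only the matching suite. The repository URI, paths and task files are assumptions for illustration; if the codebases live in separate repositories, each resource would simply point at its own repository instead.

```yaml
# Hypothetical resources and jobs split by codebase
resources:
  - name: registration-code
    type: git
    source:
      uri: https://github.com/example/giving-pages.git   # placeholder URI
      branch: master
      paths: [ 'registration/**' ]                        # React registration journeys

  - name: giving-pages-code
    type: git
    source:
      uri: https://github.com/example/giving-pages.git
      branch: master
      paths: [ 'giving-pages/**' ]                        # legacy giving pages code

jobs:
  - name: registration-regression
    plan:
      - get: registration-code
        trigger: true
      - task: run-registration-suite
        file: registration-code/ci/run-behat.yml          # hypothetical task running only that suite

  - name: giving-pages-regression
    plan:
      - get: giving-pages-code
        trigger: true
      - task: run-giving-pages-suite
        file: giving-pages-code/ci/run-behat.yml
```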

4. How do we track our progress?

Considering our past experience with automated testing on the Giving pages project, where there were a considerable number of test failures due to third party systems (such as payment sandbox issues) and false failures, we thought it would be useful to extract some information from the test reports to help us analyse and improve our regression suites in future.

We use Concourse CI to run our regression tests, and it only holds reports for the last 30 runs. It would be useful to create a report (something like a simple xls) that tells us which pack failed, which service changes triggered the run, the commit number, the start and end date and time of the regression suite, which tests failed and the tags of the failed tests.
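We have not chosen a tool for this yet, but a minimal sketch of the idea is below: a Concourse task that runs Behat with JUnit output and writes a one-line summary per run to a CSV, which could later be collected into a spreadsheet. The image, file layout and suite parameter are assumptions for illustration.

```yaml
# Hypothetical Concourse task writing a summary row for each regression run
platform: linux
image_resource:
  type: registry-image
  source: { repository: php, tag: '8.2-cli' }   # placeholder image; assumes git and Composer deps are available
inputs:
  - name: giving-pages-repo
outputs:
  - name: report
params:
  BEHAT_SUITE: build
run:
  path: sh
  args:
    - -ec
    - |
      cd giving-pages-repo
      COMMIT=$(git rev-parse HEAD)
      START=$(date -u +%FT%TZ)
      mkdir -p ../report/junit
      # run the suite, but still write the summary row if it fails
      vendor/bin/behat --suite="$BEHAT_SUITE" --format junit --out ../report/junit \
        && RESULT=passed || RESULT=failed
      END=$(date -u +%FT%TZ)
      echo "$BEHAT_SUITE,$COMMIT,$START,$END,$RESULT" >> ../report/summary.csv
```

The JUnit XML in the report output would still carry the individual failed scenarios and their tags, which is the per-test detail we want to analyse.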

Next steps

We are currently working on:

  • Refactoring our existing regression tests to split them into different test suites based on codebase,
  • Creating Concourse CI jobs to run the corresponding test suites based on commits to the different codebases, and
  • Investigating tools to extract the report from the CI pipeline.
