How to save a tester’s nerves, or how to speed up regression testing from 8 hours to 2

Yuliya Kolesnikova
Published in Life at Vivid · 9 min read · Aug 3, 2021

Hi! My name is Yulia and I am a Mobile QA engineer at Vivid Money.

I’ve been in testing for a long time and I’ve seen plenty of interesting things. 😂 But as practice shows, everyone runs into the same problems and concerns. The only difference is in the analysis, the approaches, and how the solutions are implemented.

In this article I will tell you HOW TO MAKE A TESTER’S LIFE EASIER DURING REGRESSION TESTING!

I’ll tell you about:
1. Our processes
2. The main problem
3. Analysis
4. The solutions, with the results we obtained

A little about our processes

We release the apps once a week.
One day is set aside for regression testing and another for smoke testing. The rest of the time goes to developing new features, fixing defects, writing and updating documentation, and improving processes.

Regression testing is a set of tests aimed at detecting defects in areas of an application that have already been tested but are affected by changes.

Almost all positive test scenarios are covered by test cases, which are managed in Allure TestOps.

Each platform (iOS, Android) has its own documentation and autotests, but everything is stored in one place. Any QA on the team can view and edit them. If new cases are added, they will definitely go through a review.
An Android tester reviews iOS cases and vice versa. This applies to manual tests.

About the test plan for regression:

To conduct regression testing, a test plan is drawn up with manual test cases and autotests, separately for Android and iOS. The tester creates a launch (a test plan run), specifying the release build version and the platform. After the launch is created, the autotests are started with the selected cases, and the person responsible for manual testing assigns the manual test cases to themselves. Each executed case is marked with a status: Passed, Failed, or Skipped. The results are visible immediately as the run progresses.
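To illustrate the flow, here is a sketch of driving such a launch through a REST API. Allure TestOps has its own HTTP API, but the endpoints, fields, and auth shown here are placeholders for illustration, not its real contract.

```python
import requests

# Placeholder endpoints and fields, NOT the real Allure TestOps API:
# they only illustrate the launch workflow described above.
BASE = "https://testops.example.com/api"
HEADERS = {"Authorization": "Api-Token <token>"}

# 1. Create a launch bound to the release build and platform.
launch = requests.post(
    f"{BASE}/launches",
    headers=HEADERS,
    json={"name": "Regress iOS light", "build": "4.12.0", "platform": "iOS"},
).json()

# 2. Report a status for each executed case as the run progresses.
for case_id, status in [(101, "passed"), (102, "failed"), (103, "skipped")]:
    requests.post(
        f"{BASE}/launches/{launch['id']}/results",
        headers=HEADERS,
        json={"testCaseId": case_id, "status": status},
    )

# 3. Close the launch; the release decision is made from its report.
requests.post(f"{BASE}/launches/{launch['id']}/close", headers=HEADERS)
```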

At the end of the run, the launch is closed, and based on the results a decision is made about readiness for release. It all sounds neat and logical, but of course there are problems that make testers sad 🙂

Let’s define the problem:

The volume of functionality covered by regression testing keeps growing, and we keep going over the time frame.
In other words: more and more test cases, and we only have 8 hours at most!
Previously, all cases were included in the test plan. As new functionality was added, the plan grew to 300 tests, and running it began to take more time than planned; we stopped fitting into a working day. So we decided to revise our approach to testing, keeping the time frame in mind while maintaining quality.

Analysis and solution

Manual testing was overloaded because every new feature adds test cases, which can be simple or complex (involving transitions between screens). We also had to test interaction with the backend. Such checks took a lot of time, especially when bugs appeared and we had to figure out which side the problem was on.

Having identified the weak spots, we decided to refine our approach to automation, and we also used impact analysis to work out the solutions.

Impact analysis is a study that identifies which parts of a project are affected when new functionality is developed or old functionality is changed.

What we decided to change in order to offload manual testing and shorten regression testing:

  1. Increase the number of autotests and develop a unified scenario for handing test cases over to automation
  2. Separate the tested functionality into front end and back end
  3. Change the approach to forming the test plan for regression and smoke testing
  4. Hook up automatic analysis of the changes included in the release build

Below I will cover each point in more detail, along with the results we obtained after introducing it.

Increasing the number of autotests

Often, when we want to reduce regression time, we start with automation. But in our team, all of these stages happened in parallel. And naturally, some of the tests were moved to automation. How the automation process is built in our company will be described in more detail in another article.

To make the process the same for both platforms, we wrote an instruction. It outlines the hand-over criteria, the steps, and the tools.

I will briefly describe how the transfer of test cases to automation takes place:

  1. We determine which types of checks can be automated. A manual tester does this on their own, or by discussing it with the team at a meeting.
  2. The test cases are finalized in Allure TestOps: for example, more detailed descriptions or JSON payloads are added.
  3. The corresponding test cases are moved to the “need to automate” status (also in Allure TestOps).
  4. A task is created in YouTrack. It describes what needs to be automated, links to the test cases in Allure TestOps are attached, and a responsible AQA is assigned.
  5. The tasks from YouTrack are then picked up based on priority. After the changes are merged into the necessary branches and reviewed, the tasks are closed and the test cases in Allure are moved to Automated with the Active status. The autotest code is reviewed by the developers.

This often happens a few days before the next release, so by the day of the regression some of the test cases are already automated.
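For illustration, here is what a handed-over case might look like, written with allure-pytest so it stays linked to its TestOps case. The id, tag, title, and the `app` fixture are assumptions for this sketch, not our actual code.

```python
import allure

# A minimal sketch of a handed-over case, assuming allure-pytest.
# The id, tag, and the `app` driver fixture are hypothetical.
@allure.id("1042")                      # ties the autotest to its TestOps case
@allure.tag("Regress_Profile")          # the regression block it belongs to
@allure.title("Profile screen shows the user's name")
def test_profile_shows_username(app):   # `app`: a hypothetical UI driver fixture
    with allure.step("Open the profile screen"):
        profile = app.open_profile()
    with allure.step("Check that the username is displayed"):
        assert profile.username_label.text == "Yuliya"
```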

Results:

  • The burden on manual testing is reduced.
  • There is a clear and simple mechanism for handing cases over to automation. Everyone is busy, no downtime.
  • More functionality is covered by autotests, which are run every day, so bugs are found earlier.

Backend and frontend separately

Test automation is separate for the backend and the frontend, but there are E2E tests that check how they work together.

E2E (end-to-end) testing is when the entire system is tested from start to finish, including checking that all integrated parts of the application function and work together as expected.

Many end-to-end autotests were run on the mobile side, which required writing complex test cases. They often failed because of problems with services or on the backend.

After working in this format for a while, we concluded that fixing such autotests eats up a lot of time, and the failed E2E scenarios then have to be run manually anyway.

We decided to clearly divide the functionality into modules, separating the logic between frontend and backend: keep a minimal number of E2E tests for manual testing, and simplify and automate the rest of the scenarios. As a result, on the backend we check the business logic, and on the client we check the correct display of backend data and UI elements.

This allowed us to identify the areas with the greatest criticality, reduce the time for manual testing, and make the autotests more stable.

For clarity, here is a summary:

  • Backend: business logic
  • Client (iOS/Android): correct display of backend data and UI elements
  • E2E: a minimal set of scenarios, kept for manual testing
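To make the split concrete, here is a minimal sketch of the two kinds of tests. The endpoint, payload fields, fee value, and the `app` fixture are all assumptions for illustration.

```python
import requests

# Backend test: the business rule is checked directly against the API.
# The endpoint and payload are hypothetical.
def test_transfer_fee_is_applied():
    r = requests.post(
        "https://api.example.com/v1/transfers",
        json={"amount": 100.0, "currency": "EUR"},
    )
    assert r.status_code == 201
    assert r.json()["fee"] == 0.5          # business logic lives on the backend

# Client test: only the rendering of a fixed payload is checked, so the
# test cannot fail because a service or the backend is down.
# `app` is a hypothetical UI driver fixture with response mocking.
def test_transfer_fee_is_displayed(app):
    app.mock_response("/v1/transfers", {"amount": 100.0, "fee": 0.5})
    screen = app.open_transfer_confirmation()
    assert screen.fee_label.text == "€0.50"
```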

Results:

After splitting:

  • It became easier to localize a problem
  • Problems are identified earlier and, accordingly, resolved faster
  • There is a clear delineation of areas of responsibility, and no unnecessary checks on the client
  • The autotests have become much more stable, because they are no longer tied to services or mocks that can fall over at any moment (and that moment is usually the most inconvenient one)
  • The time to implement autotests has been reduced: there is no need to additionally add JSON to the test cases when writing them

Filtered test cases in the regression test plan

We now form the regression test plan based on the blocks in which changes were made, plus a set of core scenarios that are always included.

In order to make it easier to form a plan, we began to use tags.

Example: Regress_Deeplink, Regress_Profile, Regress_CommonMobile

Now all test cases are divided into blocks, each marked with its own tag! There are also mandatory cases that are included in every regression plan, and separate test cases for smoke testing in production.

This lets us quickly filter the cases and form a specific plan matching the changes made, instead of wasting time checking what was not affected.
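As an illustration, the tagging could look like this with allure-pytest, mirroring the Allure tags with pytest markers so a block can be run on its own. The marker names and test bodies are assumptions.

```python
import allure
import pytest

# A sketch of tag-based blocks, assuming pytest markers mirror the Allure
# tags (markers would be registered in pytest.ini; names are hypothetical).
@allure.tag("Regress_Profile")
@pytest.mark.regress_profile
def test_profile_opens():
    ...

@allure.tag("Regress_CommonMobile")
@pytest.mark.regress_common        # mandatory block, part of every plan
def test_app_launches():
    ...

# If a release only touched the profile module, run that block plus the
# mandatory cases:
#   pytest -m "regress_profile or regress_common"
```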

Results:

Introducing this additional analysis when forming test plans helped reduce the total time for a regression run from the original 8 hours to just 2.

We have several test plans: full and light. Usually we run the light one, which consists of 98 cases (autotests + manual). For comparison, the full regression plan consists of 297 test cases!

The average time to run Regress iOS light is about 2 hours, and when the changes touched only a couple of modules, regression can be done in an hour. This is a big plus, because it leaves a margin for bug fixes (if something needs to be fixed urgently). It is also always possible to go back to the reports later and see what was checked in which build.

We developed a script that analyzes changes and sends notifications via Slack

Product quality depends on every member of the team. So, to understand exactly which modules were affected, we asked the developers to tell us what changes went into each released version.

At first we had to remind and clarify, and the affected blocks were indicated in the tasks. On the one hand, we managed to lighten regression by selecting only the cases we needed. On the other hand, a lot of time went into communication and constant clarification; clarity was lost, and there was no real certainty that everything necessary was being checked.

The next solution arose logically: make this process automatic!

We created a script that collects information about commits, generates a report on which modules were affected, and sends the necessary information to a dedicated Slack channel.

The script works simply:

  • After each build, it gets the changes between the previous version of the application and the commits the build was assembled from
  • Gets the list of files whose changes affect a particular screen
  • Groups these changes by features and teams to make life easier for testers
  • Sends a message with all the change information to a dedicated Slack channel
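Here is a minimal sketch of such a script, assuming a git repository, a Slack incoming webhook, and a hand-maintained mapping from source directories to feature modules; the paths, refs, and webhook URL are all hypothetical.

```python
#!/usr/bin/env python3
"""Sketch of the change-analysis script described above.

The module mapping, webhook URL, and build refs are assumptions;
the real script's layout and grouping rules will differ.
"""
import json
import subprocess
import urllib.request

SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # hypothetical

# Hypothetical mapping from source directories to feature modules.
MODULES = {
    "features/profile/": "Profile",
    "features/deeplink/": "Deeplink",
    "core/": "CommonMobile",
}

def changed_files(prev_ref: str, build_ref: str) -> list[str]:
    """Files changed between the previous release and the current build."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{prev_ref}..{build_ref}"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()

def affected_modules(files: list[str]) -> set[str]:
    """Group changed files by the feature module they belong to."""
    return {m for f in files for p, m in MODULES.items() if f.startswith(p)}

def notify(modules: set[str], build_ref: str) -> None:
    """Post the affected modules to the Slack channel via a webhook."""
    text = f"Build {build_ref} affects: {', '.join(sorted(modules)) or 'nothing mapped'}"
    req = urllib.request.Request(
        SLACK_WEBHOOK,
        data=json.dumps({"text": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

if __name__ == "__main__":
    files = changed_files("release/4.11.0", "release/4.12.0")  # hypothetical refs
    notify(affected_modules(files), "release/4.12.0")
```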

Results:

What advantages did we get by connecting build analytics:

  • Reduced developers’ time for manual analysis of the changes made
  • Reduced the likelihood of overlooking changes and leaving required functionality unchecked
  • Simplified communication on this issue

Naturally, it took time to write the script and integrate it with Slack. But afterwards it became easier for everyone to track this process.

Briefly, the main points

  1. Using tags in test cases and when forming test plans reduced the size of the test plan, and accordingly the testing time.
  2. Developing and using a change-notification script made it possible to understand clearly which modules were affected by release tasks or bug fixes. Testers also stopped distracting developers with such questions.
  3. Automation covered about 46% of test cases, which greatly eased manual testing. It also leaves time for updating cases and writing new ones.
  4. Splitting testing into backend and frontend helped localize problems early and fix them in time.
