Top 5 Manager’s Questions About Automated Testing

Olya Kabanova
hh.ru
May 17, 2022

Hey everyone! I’m Olya, a mobile app tester at hh.ru. We’ve already released an article answering manual testers’ questions about autotests. Let’s continue the series: in this article, we’ll answer the five most popular questions managers ask about autotests. We’ll talk about how much time and what resources automation takes, and how soon it starts paying off.

You can watch the video version here.

Question #1. Do autotests slow down development?

First of all, autotests greatly reduce the time spent on testing and regression runs, which means they help get features into releases faster.

But, indeed, they sometimes increase development time. For example, when a developer changes functionality covered by autotests, they need to fix the failing tests before merging the new code.

To avoid extra work and unpredictable problems, set aside time for possible autotest fixes when estimating and decomposing a task. Ideally, QA engineers should be present at decomposition meetings and able to tell developers which autotests the new functionality may affect. Then feature development time stays predictable and doesn’t balloon unexpectedly.

But for this process to work, you need a stable autotest infrastructure and a clear understanding of why autotests fail on your project. That way developers won’t waste their valuable time figuring out whether a failing autotest points at their code or at a problem on the test server.
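One way to take the guesswork out of “was it the code or the test stand?” is to make the failure report answer the question automatically. Below is a minimal sketch of the idea (this is not our actual tooling, and the health-check URL is hypothetical): a JUnit 4 rule that pings the test server whenever a test fails, so the report immediately says which side to suspect.

```kotlin
import org.junit.rules.TestWatcher
import org.junit.runner.Description
import java.net.HttpURLConnection
import java.net.URL

// Attach to a test class with: @get:Rule val infra = InfraCheckRule("https://test-stand.example/health")
class InfraCheckRule(private val healthUrl: String) : TestWatcher() {

    override fun failed(e: Throwable, description: Description) {
        // Ping the test server's health endpoint before blaming the code.
        val serverIsUp = runCatching {
            (URL(healthUrl).openConnection() as HttpURLConnection).run {
                connectTimeout = 2_000
                readTimeout = 2_000
                responseCode in 200..299
            }
        }.getOrDefault(false)

        println(
            if (serverIsUp) "${description.methodName} failed, test server is up — suspect the code"
            else "${description.methodName} failed, test server unreachable — suspect the infrastructure"
        )
    }
}
```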

Question #2. Can I write a single autotest for both platforms at once?

Of course, you can. There are cross-platform frameworks, and many people use them. But we chose native ones right away, as they are more stable, reliable, and easier to maintain.

What are the main problems with cross-platform frameworks?

  1. The cross-platform framework’s codebase lives separately from your application. If you need to change or customize the framework code, you have to go to the framework’s developers, open pull requests, and wait for them to merge and release the change on their side, which is not always convenient or productive.
  2. There are frequent problems with new OS releases and bugs. You have to wait for support from the framework’s developers, and how long you’ll wait is unpredictable. You don’t know when everything will be fixed and when it will finally work for you, so you depend on other people.
  3. Your chosen cross-platform framework may be written in a programming language your developers are not familiar with. It may be harder to get help from them when you need it, and the developers won’t be able to write and edit autotests themselves.

It might seem faster and easier to write one cross-platform autotest than two native ones. But in fact, this is not the case.

Firstly, all the “saved” time may later be spent supporting such an autotest on two platforms, because each operating system updates on its own schedule and has its own quirks. Also, the same feature or button often behaves differently on iOS and Android.

Secondly, if the main argument for a cross-platform framework over a native one is that “learning one programming language is easier than two,” don’t be in a hurry to decide. The syntax of Kotlin and Swift is very similar (at least when it comes to writing autotests). If you can write an autotest in one of the languages, you can easily write the same one for the other platform, as the sketch below shows.
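To make the comparison concrete, here is roughly what a native Android autotest looks like in Kotlin with Espresso (a sketch: the activity and view IDs are made up). The Swift/XCUITest counterpart is structurally almost identical: `fun` becomes `func`, `val` becomes `let`, and the body stays the same “find element, act, assert” sequence.

```kotlin
import androidx.test.espresso.Espresso.onView
import androidx.test.espresso.action.ViewActions.click
import androidx.test.espresso.action.ViewActions.closeSoftKeyboard
import androidx.test.espresso.action.ViewActions.typeText
import androidx.test.espresso.assertion.ViewAssertions.matches
import androidx.test.espresso.matcher.ViewMatchers.isDisplayed
import androidx.test.espresso.matcher.ViewMatchers.withId
import androidx.test.ext.junit.rules.ActivityScenarioRule
import org.junit.Rule
import org.junit.Test

class SearchAutotest {

    // SearchActivity and the R.id values below are hypothetical.
    @get:Rule
    val activityRule = ActivityScenarioRule(SearchActivity::class.java)

    @Test
    fun searchShowsResults() {
        onView(withId(R.id.search_field)).perform(typeText("kotlin"), closeSoftKeyboard())
        onView(withId(R.id.search_button)).perform(click())
        onView(withId(R.id.results_list)).check(matches(isDisplayed()))
    }
}
```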

Question #3. When will autotests start to bring more benefits than problems?

Exactly when:

  • a stable infrastructure is set up for your project,
  • autotests are built into your CI/CD pipeline (see the sketch below) rather than run locally by whoever happens to remember they exist,
  • autotests fail an acceptably small number of times, i.e. they are not flaky.

Even if there are very few tests, as long as they run at least before developer branches are merged, that’s already a benefit: a few manual checks fewer and a guarantee that your application at least launches.
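What “built into the pipeline” can look like in practice: in a Gradle project, one hedged option is to make the standard verification task depend on the instrumented test task, so autotests run on every merge request instead of on someone’s laptop. The task names below are the stock Android Gradle Plugin ones; adapt them to your variants.

```kotlin
// build.gradle.kts — a minimal sketch, not a drop-in config.
// Once UI tests are part of `check`, a red autotest blocks the merge
// instead of being discovered by whoever remembers to run the suite.
tasks.named("check") {
    dependsOn("connectedDebugAndroidTest") // instrumented UI tests on an emulator/device
}
```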

Question #4. In how many months is it possible to reach 100% coverage?

The question of 100% coverage is a very slippery one. I don’t want to get into pointless arguments about whether it even exists and whether it should be pursued. But since it comes up so often, let’s get to the bottom of it.

Firstly, it is worth considering the coverage of each feature separately.

Secondly, the answer to “how much of this feature have you covered” will ultimately be based on the QA engineer’s experience and knowledge of the product.

When assessing a feature’s coverage, my reasoning is as follows:

  • all positive checks give you 60–70% of the overall coverage;
  • adding negative checks (all sorts of interruptions, minimizing the app, empty screens, etc.) brings you to 90%;
  • the remaining 10% are hard-to-reproduce scenarios, scenarios dependent on other features, and the like, which are either not worth covering at all (consider whether your CI can handle the load and whether the time spent supporting such autotests pays off) or best left for the very end.

I would also add that not all features need to be automated in principle. Sometimes the effort of automating some incredibly complex check far outweighs the cost of quickly checking it by hand once before a release.

In order not to leave the manager’s question unanswered, we identify the most critical functionality of the application and work out how many positive and negative checks we need to automate first to be sure it works. Then we look at how many testers can spend how much time on automation and calculate how long it will take, roughly as in the sketch below.
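To give the manager a number rather than a shrug, the estimate can be as simple as this. Every figure below is invented for illustration; plug in your own feature list and velocity.

```kotlin
// A back-of-the-envelope automation estimate; all numbers are illustrative.
data class Feature(val name: String, val positiveChecks: Int, val negativeChecks: Int)

fun main() {
    val critical = listOf(
        Feature("login", positiveChecks = 6, negativeChecks = 4),
        Feature("search", positiveChecks = 10, negativeChecks = 5),
        Feature("response to vacancy", positiveChecks = 8, negativeChecks = 6),
    )
    val hoursPerAutotest = 3.0 // writing + stabilizing one test
    val hoursPerWeek = 8.0     // e.g. one automation day per tester per week
    val testers = 2

    val totalTests = critical.sumOf { it.positiveChecks + it.negativeChecks }
    val weeks = totalTests * hoursPerAutotest / (hoursPerWeek * testers)
    println("$totalTests autotests ≈ %.1f weeks of automation work".format(weeks))
}
```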

Question #5. How many QA engineers does it take to automate the whole project?

I’ll tell you a story about mobile app automation at hh.ru.

At different times we had between two and four QA engineers for the whole mobile area. In about a year we reached almost full regression coverage on both platforms. And that’s taking into account that we continued to test new functionality manually, kept doing manual regressions, and, on Android, moved to a new framework twice. (In the end, by the way, we settled on Kaspresso, with which the tests became much more stable and easier to write.) Each move meant a large amount of refactoring, because by then we had already written a significant number of autotests. But nevertheless, in one year the regressions were automated.
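For a sense of what those Kaspresso tests look like, here is a minimal sketch (the activity, screens, and view IDs are invented, not taken from our real suite). Tests read as named steps, and `flakySafely` retries an assertion for a while instead of failing on the first slow frame — one of the reasons the tests became more stable.

```kotlin
import androidx.test.ext.junit.rules.ActivityScenarioRule
import com.kaspersky.kaspresso.screens.KScreen
import com.kaspersky.kaspresso.testcases.api.testcase.TestCase
import io.github.kakaocup.kakao.edit.KEditText
import io.github.kakaocup.kakao.text.KButton
import org.junit.Rule
import org.junit.Test

// Hypothetical page objects: replace the view IDs with your app's real ones.
object MainScreen : KScreen<MainScreen>() {
    override val layoutId: Int? = null
    override val viewClass: Class<*>? = null
    val loginButton = KButton { withId(R.id.login_button) }
}

object LoginScreen : KScreen<LoginScreen>() {
    override val layoutId: Int? = null
    override val viewClass: Class<*>? = null
    val emailField = KEditText { withId(R.id.email_input) }
}

class LoginTest : TestCase() {

    // MainActivity is hypothetical as well.
    @get:Rule
    val activityRule = ActivityScenarioRule(MainActivity::class.java)

    @Test
    fun loginFormOpens() = run {
        step("Tap the login button on the main screen") {
            MainScreen.loginButton.click()
        }
        step("Check that the login form is shown") {
            flakySafely {
                LoginScreen.emailField.isDisplayed()
            }
        }
    }
}
```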

In other words, any number of testers can automate a project if enough time is given to it.

That raises a question, though: how does a tester find time for both manual testing and writing autotests?

Sometimes it is really difficult, especially when there’s a big flow of new functionality that urgently needs testing. Everyone wants to make the release, so those tasks get higher priority. It’s a familiar situation. What do we do?

  1. We sit down.
  2. We remember that autotests will bring us maximum benefit in the near future, so, for example, we allocate each tester one day a week for automation.

We do this whenever testers get swamped with manual testing, and it works. A good choice for automation is the day after a release, when nothing is on fire anymore and a tester can write autotests with a clear conscience, without being distracted by other tasks.

Instead of a conclusion:

This article is the second in a series. Up next:

  • Top 5 CTO’s questions about autotests
  • Top 5 Junior Automation Engineer’s questions about autotests

Sound fun? Subscribe to our Telegram news channel and the “HHella cool stories” channel so you won’t miss new videos, articles, and other news. And you can ask our engineers questions on any topic in the comments or in the hh developers’ Telegram chat.

Stay tuned!
