Top 5 Junior Automation Tester’s Questions About Automated Testing

Olya Kabanova · Published in hh.ru · 8 min read · Jul 19, 2022

Hey everyone! I’m Olya, a mobile app tester at hh.ru. Today we’re wrapping up our series of answers to the most popular questions about test automation. We’ve already answered questions from a manual tester, a manager, and a chief technology officer. Now it’s time to answer the five most interesting questions that beginner test automation engineers ask — about flaky tests and production bugs, our fight for stability, and how to live up to expectations around autotests.

The video version of my Q&A is available here.

Question #1. We write and run autotests, but production bugs are still haunting us

Let’s start with the fact that production bugs will keep occurring no matter what, and that’s absolutely normal.

Users may perform unbelievable operations in your app and go through fascinating, unpredictable scenarios: open a tab, switch to another app, lock the phone, receive push notifications, scroll the feed, and then come back to your app. That’s where the bug is waiting. You can partially automate such cases using interruptions, overloads and similar operations. However, you shouldn’t try to cover every possible unique scenario with autotests, if only because no CI will survive such a load, and the time spent running them isn’t worth it.
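
To make this concrete, here’s a rough sketch of what one automated interruption scenario might look like in an Android instrumentation test. SearchScreen and its methods are hypothetical page objects, not real hh.ru code; the UiDevice calls are standard UiAutomator.

```kotlin
import android.content.Intent
import androidx.test.platform.app.InstrumentationRegistry
import androidx.test.uiautomator.UiDevice
import org.junit.Test

class InterruptionTest {

    private val device = UiDevice.getInstance(InstrumentationRegistry.getInstrumentation())

    @Test
    fun screenStateSurvivesBackgrounding() {
        // Hypothetical page object: get the app into a meaningful state first.
        SearchScreen.open()
        SearchScreen.typeQuery("kotlin developer")

        // Interrupt: send the app to the background.
        device.pressHome()

        // Bring it back via its launch intent, which resumes the existing task.
        val context = InstrumentationRegistry.getInstrumentation().targetContext
        val launch = context.packageManager.getLaunchIntentForPackage(context.packageName)
        launch?.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)
        context.startActivity(launch)

        // The state entered before the interruption should still be there.
        SearchScreen.assertQueryShown("kotlin developer")
    }
}
```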

Apart from that, beware of device-specific bugs, which you can’t catch with autotests unless you run them on that very troubled device. Of course, you can try to configure CI to spin up various emulators, or run tests on a range of real devices, but that significantly complicates supporting such autotests and will most likely reduce their stability. And again, it won’t be worth the time spent on support and runs, especially since the bugs you catch this way tend to be minor.

Question #2. How do we make autotests less painful for everybody?

To keep autotests from making everybody suffer all the time, it’s enough to follow a few rules:

Rule 1. Write clear, coherent code and set up reporting.

Your tests should be comprehensible not only to you, but to other testers and developers. If a test fails, the person in charge shouldn’t sit with it for half a day or pester the author begging “help me figure out what broke”. They should be able to see where and why it failed. It’s even better if you set up transparent reports that let you trace the steps and the exact spot where the test failed.

Nobody wants to waste time figuring out how a test works and what it actually checks — all of that should be clear at a glance.
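
As a rough illustration, here’s what a readable, step-based test can look like with Kaspresso (more on it below): each step gets a human-readable name that ends up in the report. LoginScreen and MainScreen are hypothetical page objects, not real hh.ru code.

```kotlin
import com.kaspersky.kaspresso.testcases.api.testcase.TestCase
import org.junit.Test

class LoginTest : TestCase() {

    @Test
    fun userCanLogIn() = run {
        // Named steps show up in the report, so whoever investigates
        // a failure sees exactly which step broke and why.
        step("Open the login screen") {
            LoginScreen.open()
        }
        step("Enter valid credentials") {
            LoginScreen.email.typeText("user@example.com")
            LoginScreen.password.typeText("secret")
        }
        step("Submit and check that the main screen is shown") {
            LoginScreen.submitButton.click()
            MainScreen.title.isDisplayed()
        }
    }
}
```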

Rule 2. A review is required.

To be sure that your test is actually useful and correct, somebody else needs to confirm it. Only then can you sleep well at night.

Here at hh.ru a review is mandatory for every task, and every task with autotests must additionally be reviewed by QA. This is extremely important: the reviewer helps catch not only a wrong indent but also mistakes in the logic. They can also do some additional testing or spot an error you might have missed.

Rule 3. Implement autotests in your CI/CD right away.

It’s essential to benefit from autotests here and now, so don’t put them on the back burner. The sooner people get used to the fact that their code is now covered by tests, the easier your life will be. At the same time, remember that the autotests you release should be as stable as possible. Before merging your tests, it’s better to run them on CI 100 times (I’m not joking). Only after you’re 100% sure that your test doesn’t flake, doesn’t fail at random spots, and works properly should you move forward!
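
One simple way to run a candidate test many times before merging is a repeat rule. Here is a minimal sketch with a custom JUnit 4 rule; the rule and the count are my own illustration, not an hh.ru tool:

```kotlin
import org.junit.Rule
import org.junit.Test
import org.junit.rules.TestRule
import org.junit.runner.Description
import org.junit.runners.model.Statement

// Repeats every test in the class N times; a single red run out of 100
// means the test flakes and isn't ready to be merged.
class RepeatRule(private val times: Int) : TestRule {
    override fun apply(base: Statement, description: Description) = object : Statement() {
        override fun evaluate() {
            repeat(times) { base.evaluate() }
        }
    }
}

class StabilityCheck {
    @get:Rule
    val repeatRule = RepeatRule(times = 100)

    @Test
    fun candidateTestForMerge() {
        // Body of the autotest under evaluation goes here.
    }
}
```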

Question #3. Why do tests fail all the time? How can I trust them?

Let’s start with the fact that autotests will fail from time to time, and that’s absolutely normal. But if they fail all the time, it’s worth asking whether they actually check anything.

Our duty is to make them fail less often. To do that, we need to learn to detect and fix the problems that cause failures, and to strengthen the weak spots.

While working with autotests, I’ve collected my personal top 4 problems:

1. Somebody released something and everything broke

Here everything is simple: functional changes should go hand in hand with autotest fixes. Such fixes take time, which we account for during task estimation and decomposition. When developers change something in functionality that is already covered by autotests, they need to update those autotests before merging the branch, so the tests keep checking that functionality. It’s essential to make this process routine and never ignore the tests. Always keep them relevant.

2. Tests started failing, although we hadn’t changed anything

This one is even more exciting. The developer hadn’t changed anything, but for some reason the tests started to fail. Most likely, something changed on the backend.

Why do our UI autotests depend on the backend?

Our autotests run against test stands — copies of the production backend (without the production database). We keep them current: every day the autotests run against the latest backend builds. So if somebody releases something new, our autotests hit that change and may fail. For us that’s an emergency, so we immediately check whether it’s a bug or our new reality that the tests should reflect, and then either contact the team or fix the tests.

It’s important to mention that this doesn’t happen with every backend change; most of the time our teams stay in sync. But reality dictates its own rules: sometimes tiny details slip through (e.g. somebody didn’t realize their change would affect the mobile apps). Our tests are very sensitive to that.

Every time it happens, we think about mocks. Mocks are good if you don’t overuse them. But I think this sensitivity of autotests to changes is fine, because it keeps us in the loop about other teams’ new features that affect our app. Besides, mocks take time to support and update, so it’s up to you which of the two you’d rather maintain.
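
If you do decide to mock part of the backend, one common approach on Android is OkHttp’s MockWebServer. A minimal sketch, assuming the app under test can be pointed at an arbitrary base URL; the JSON body and the calls in the comments are made up for illustration:

```kotlin
import okhttp3.mockwebserver.MockResponse
import okhttp3.mockwebserver.MockWebServer
import org.junit.After
import org.junit.Before
import org.junit.Test

class VacancySearchMockTest {

    private val server = MockWebServer()

    @Before
    fun setUp() {
        // Queue a canned response instead of hitting a real test stand.
        server.enqueue(
            MockResponse()
                .setResponseCode(200)
                .setBody("""{"vacancies": [{"id": 1, "title": "QA Engineer"}]}""")
        )
        server.start()
        // Point the app under test at the mock server, e.g. via a debug hook:
        // App.overrideBaseUrl(server.url("/").toString())  // hypothetical
    }

    @Test
    fun searchShowsMockedVacancy() {
        // The UI should render the mocked vacancy regardless of backend state.
        // SearchScreen.assertVacancyShown("QA Engineer")  // hypothetical page object
    }

    @After
    fun tearDown() = server.shutdown()
}
```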

3. Bugs

My favourite reason for autotest failures. No comments needed here. If your autotest found a bug, it’s a very good autotest, and you’ve done a great job.

4. Flaking

My least favourite reason. That’s what we’re going to discuss next.

Question #4. Why do my tests flake?

Flaking is like mood swings — your tests are green, then red; everybody is sad, then happy; and nobody knows how to live with it. To keep the relationship between your app and your tests healthy, follow a few rules for avoiding flakes and writing more stable autotests:

1. Unique test data

Each autotest should come with fresh, unique test data that only this test works with. You can’t just hope that something suitable will pop up on the screen. Predictability is key.
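
A minimal sketch of the idea: each test creates its own data before running, instead of reusing shared fixtures. TestDataApi and the page-object calls in the comments are hypothetical helpers, shown only to illustrate the shape:

```kotlin
import java.util.UUID
import org.junit.Before
import org.junit.Test

class ResumeListTest {

    private lateinit var userEmail: String

    @Before
    fun createUniqueData() {
        // A fresh user per run: no collisions with other tests, no stale state.
        userEmail = "autotest-${UUID.randomUUID()}@example.com"
        // TestDataApi.createUser(email = userEmail)                        // hypothetical
        // TestDataApi.createResume(owner = userEmail, title = "Kotlin Developer")
    }

    @Test
    fun resumeIsShownInTheList() {
        // Log in as the freshly created user and assert only on data this test owns.
        // LoginScreen.logIn(userEmail)                                     // hypothetical
        // ResumeListScreen.assertResumeShown("Kotlin Developer")
    }
}
```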

2. Wait till all elements appear on the screen

You wrote a logical, step-by-step test, but it still fails at random spots. Most likely it doesn’t wait for the necessary element and taps in the wrong place. Animations, loaders and alerts only make things worse.

To prevent that, before any action (a tap, a swipe) or assert (a check), you need to wait until the element is shown on the screen and becomes available.

Kaspresso for Android has a built-in mechanism that waits for the necessary element (retrying on isVisible under the hood) and only then performs the action. Thanks to that, your autotests won’t tap too early or in the wrong place. You can also configure the timeout of this waiter, because waiting 10 seconds for an element may be a bug in itself.

XCUITest on iOS has waiters too, but you have to set them up yourself before each action, or write a mechanism similar to Kaspresso’s.

The main thing is not to leave hard-coded waits for a fixed number of seconds in the code, because they inflate test time. And make it a rule: before interacting with an element, make sure it’s present on the screen.
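
If your framework has no built-in waiter, the core idea is small enough to sketch yourself: poll a condition with a deadline instead of sleeping a fixed number of seconds. A simplified illustration; submitButton and isVisible() stand in for whatever your framework provides:

```kotlin
// Polls a condition until it holds or the timeout expires.
// Unlike a fixed sleep, it returns as soon as the element is ready.
fun waitFor(
    timeoutMs: Long = 5_000,
    pollIntervalMs: Long = 100,
    condition: () -> Boolean
) {
    val deadline = System.currentTimeMillis() + timeoutMs
    while (System.currentTimeMillis() < deadline) {
        // Swallow transient failures (e.g. the element isn't attached yet) and retry.
        if (runCatching { condition() }.getOrDefault(false)) return
        Thread.sleep(pollIntervalMs)
    }
    throw AssertionError("Condition was not met within $timeoutMs ms")
}

// Usage inside a test: wait for the button, then act on it.
// waitFor { submitButton.isVisible() }
// submitButton.click()
```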

3. Configure stable infrastructure

If the test infrastructure keeps letting you down, you need to solve that problem ASAP. Freezing, slow emulators and crashing test stands don’t create a healthy atmosphere for your tests. Plus, everyone will soon get fed up with investigating the reasons for yet another failure. That’s why it’s worth dedicating the team’s time and resources to stabilizing the infrastructure. If everything is optimized and there are still problems, maybe you need to upgrade your hardware. Time to empty the budget.

Question #5. What should I do if there are too many tests now?

At first you wrote small autotests — open a screen in the app and check that it appears. Then you started automating more complicated scenarios. The number of tests grows, and so does the run time. When something breaks, it’s not one test that fails but ten, all with the same error, and the failing build takes even longer because failed tests get retried.

In this situation it’s important to let go of your old, even stable, tests and delete them if the same checks now appear in newer ones. Hardware is not infinite, and spending double the time on app launches for duplicated checks is unproductive and expensive.

To avoid a huge refactoring down the road, follow a simple rule: update the old before writing the new. If you apply this principle from the start, you won’t run into this problem at all.

Instead of a conclusion

This is the final article in our Q&A series on automation, but we’ll keep posting about testing and automation at hh.ru, sharing how our releases go and plenty of other helpful and interesting things.

Follow our Telegram news channel and our YouTube channel with “HHella cool stories” to keep up with new videos, articles and other news. You can also ask our engineers questions in the comments, or contact the developers in our Telegram chat or privately.
