Clean UI Testing for Android

Aftab Ahmad
Published in Wise Engineering
Aug 28, 2019

Writing UI tests for Android is often troublesome: they can take a while to run, the CI environment is difficult to set up, and the tests are a struggle to maintain through product and design updates. This can discourage developers from maintaining the current suite and from writing any new tests.

There are four main goals to achieve for clean UI testing:

  1. Automate the QA role
  2. Create flexible and maintainable tests
  3. Create tests with a faster run time
  4. Create tests which are reliable and reproducible

Automating the QA Role

Our main tool for automating the QA role is Espresso, a UI testing library that allows performing user actions such as clicking and typing, and performing matching and assertions on views. With Espresso, the following can be written:
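A minimal sketch of a login test in plain Espresso, assuming a hypothetical LoginActivity with hypothetical username, password, login, and success_message view IDs:

import androidx.test.espresso.Espresso.onView
import androidx.test.espresso.action.ViewActions.click
import androidx.test.espresso.action.ViewActions.closeSoftKeyboard
import androidx.test.espresso.action.ViewActions.typeText
import androidx.test.espresso.assertion.ViewAssertions.matches
import androidx.test.espresso.matcher.ViewMatchers.isDisplayed
import androidx.test.espresso.matcher.ViewMatchers.withId
import androidx.test.ext.junit.rules.ActivityScenarioRule
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

@RunWith(AndroidJUnit4::class)
class LoginTest {

    // LoginActivity is hypothetical, as are the R.id.* references below.
    @get:Rule
    val activityRule = ActivityScenarioRule(LoginActivity::class.java)

    @Test
    fun givenCredentials_whenLoggingIn_thenSuccessScreenIsShown() {
        // Type the credentials into the input fields.
        onView(withId(R.id.username)).perform(typeText("user@example.com"))
        onView(withId(R.id.password)).perform(typeText("hunter2"), closeSoftKeyboard())

        // Tap the login button.
        onView(withId(R.id.login)).perform(click())

        // Assert that the success screen is visible.
        onView(withId(R.id.success_message)).check(matches(isDisplayed()))
    }
}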

Another tool is Kakao, a DSL for Espresso written in Kotlin. It makes view actions and assertions easier to write, and its built-in IDE auto-completion is more precise than Espresso's. The same test can be written in Kakao as:
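The same flow sketched with Kakao (imports are from Kakao 2.x and may differ in other versions; the view IDs remain hypothetical):

import com.agoda.kakao.edit.KEditText
import com.agoda.kakao.screen.Screen
import com.agoda.kakao.screen.Screen.Companion.onScreen
import com.agoda.kakao.text.KButton
import com.agoda.kakao.text.KTextView
import org.junit.Test

// Views are declared once on a screen object and reused across tests.
class LoginScreen : Screen<LoginScreen>() {
    val username = KEditText { withId(R.id.username) }
    val password = KEditText { withId(R.id.password) }
    val login = KButton { withId(R.id.login) }
    val successMessage = KTextView { withId(R.id.success_message) }
}

class LoginTest {

    // Launching the activity (e.g. with ActivityScenarioRule) is omitted
    // here; see the Espresso example above.
    @Test
    fun givenCredentials_whenLoggingIn_thenSuccessScreenIsShown() {
        onScreen<LoginScreen> {
            username.typeText("user@example.com")
            password.typeText("hunter2")
            login.click()
            successMessage.isDisplayed()
        }
    }
}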

Both tools are very similar; however, Kakao shows its power with RecyclerView actions and assertions. In Espresso, custom matchers would have to be tailor-made for RecyclerViews to achieve the same result. Here are a couple of examples of RecyclerView actions and assertions in Kakao:
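A sketch of a screen backed by a RecyclerView, with a hypothetical list of accounts and hypothetical IDs:

import android.view.View
import com.agoda.kakao.recycler.KRecyclerItem
import com.agoda.kakao.recycler.KRecyclerView
import com.agoda.kakao.screen.Screen
import com.agoda.kakao.screen.Screen.Companion.onScreen
import com.agoda.kakao.text.KTextView
import org.hamcrest.Matcher
import org.junit.Test

class AccountsScreen : Screen<AccountsScreen>() {
    val accounts = KRecyclerView(
        { withId(R.id.accounts_list) },
        itemTypeBuilder = { itemType(::AccountItem) }
    )

    class AccountItem(parent: Matcher<View>) : KRecyclerItem<AccountItem>(parent) {
        val name = KTextView(parent) { withId(R.id.account_name) }
    }
}

class AccountsTest {

    @Test
    fun accountsAreListed() {
        onScreen<AccountsScreen> {
            accounts {
                // Assert on the adapter's contents.
                hasSize(3)
                firstChild<AccountsScreen.AccountItem> {
                    name.hasText("Checking")
                }
                // Act on the item at a given position.
                childAt<AccountsScreen.AccountItem>(1) {
                    click()
                }
            }
        }
    }
}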

Both Espresso and Kakao are good starting points for automating the QA role; however, one major component is missing: decoupling the UI from the test.

Take the example of the login screen again. If the UI changes in the future, the QA person will be able to reinterpret the new UI and perform the same test again (given some credentials, I should log in and see a success screen). The QA person can separate what is being tested (the flow) from how it is being tested (physically opening the app, seeing the screen, and typing into and clicking views). If we look at our test, though, we've combined the what and the how: when the UI changes, we have to rewrite the how parts of our test.

This brings us to the next point: how can we write tests in a flexible and maintainable way, so that if designs change we don't need a major rewrite of our tests?

Achieving Flexibility and Maintainability

There is a great presentation by Jake Wharton on this topic and his solution, Robots. Just as architecture is added to apps to create clean separations between layers, allowing each layer to be tested irrespective of the others, the same can be done for UI testing.

The goal is to separate the what from the how. The what is the flow; for example, given these login credentials, the user should see the success screen. The how is how the test is performed: are things being clicked, typed, spoken? These parts should be abstracted away from the tests, as they're tied to the UI, which can change.

The how of the test is written inside a robot class, and the test itself (the what) expresses the flow by applying the robot. Here's an example of how to rewrite the login test case using Robots:
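A sketch of a robot for the hypothetical login screen from the earlier examples:

import androidx.test.espresso.Espresso.onView
import androidx.test.espresso.action.ViewActions.click
import androidx.test.espresso.action.ViewActions.closeSoftKeyboard
import androidx.test.espresso.action.ViewActions.typeText
import androidx.test.espresso.assertion.ViewAssertions.matches
import androidx.test.espresso.matcher.ViewMatchers.isDisplayed
import androidx.test.espresso.matcher.ViewMatchers.withId

// The robot knows *how* the login screen is driven;
// tests only describe the flow. View IDs are hypothetical.
class LoginRobot {

    fun provideCredentials(username: String, password: String): LoginRobot {
        onView(withId(R.id.username)).perform(typeText(username))
        onView(withId(R.id.password)).perform(typeText(password), closeSoftKeyboard())
        return this
    }

    fun login(): LoginRobot {
        onView(withId(R.id.login)).perform(click())
        return this
    }

    fun verifySuccessScreenShown(): LoginRobot {
        onView(withId(R.id.success_message)).check(matches(isDisplayed()))
        return this
    }
}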

In this class, each method is filled out with how the test is performed.
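And a sketch of the test that applies the hypothetical LoginRobot above:

import androidx.test.ext.junit.rules.ActivityScenarioRule
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

@RunWith(AndroidJUnit4::class)
class LoginFlowTest {

    // LoginActivity is hypothetical, as before.
    @get:Rule
    val activityRule = ActivityScenarioRule(LoginActivity::class.java)

    @Test
    fun givenCredentials_whenLoggingIn_thenSuccessScreenIsShown() {
        LoginRobot()
            .provideCredentials("user@example.com", "hunter2")
            .login()
            .verifySuccessScreenShown()
    }
}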

In this test class, the robot is applied and provides the flow of the test case: given some login credentials, the user should see the success screen.

From the test case above, there is no indication of how the test is being performed (whether things are clicked, typed, or spoken) or what the views look like. This is true to some extent: we know that a username and password are required for this screen, but we don't know how that information is modelled in the UI, i.e., they could be input fields, a list of accounts, etc.

The UI tests are now in the same position as the QA person: if the UI changes, the robot class can be modified and the tests will apply the same flow. An added benefit is that the robot class makes it easy to write additional tests quickly. With just Espresso, code gets copied between tests and IDs, matchers, and assertions get changed; the robot class provides more reusability.

Achieving Speed

Getting UI tests to run quickly can be tough: more code has to compile, the emulator has to boot up, and only then do the tests run. Here are a few tips that can help speed them up.

Headless emulators are a good option when running UI tests on CI. Turning off the user interface gives access to the Android Emulator's headless mode, which runs tests in the background and uses about 100 MB less memory, mainly because the Qt libraries used for the user interface are not loaded. This mode is available since Android Emulator 28.1.
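As a sketch, assuming an AVD named test already exists, an emulator can be run without its window on CI (newer emulator releases offer the dedicated emulator-headless binary for the same purpose):

emulator -avd test -no-window -no-audio -no-boot-anim &
adb wait-for-device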

Android test size annotations are another way to help with speed. Adding the annotations does nothing on its own; however, by applying @SmallTest, @MediumTest, and @LargeTest consistently, you gain the flexibility to execute only small and medium tests on PR builds and run large tests nightly.
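As a sketch, reusing the hypothetical LoginRobot and view IDs from above, a quick check might be marked small while the full flow is marked large:

import androidx.test.espresso.Espresso.onView
import androidx.test.espresso.assertion.ViewAssertions.matches
import androidx.test.espresso.matcher.ViewMatchers.isDisplayed
import androidx.test.espresso.matcher.ViewMatchers.withId
import androidx.test.ext.junit.rules.ActivityScenarioRule
import androidx.test.ext.junit.runners.AndroidJUnit4
import androidx.test.filters.LargeTest
import androidx.test.filters.SmallTest
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

@RunWith(AndroidJUnit4::class)
class LoginSizedTests {

    @get:Rule
    val activityRule = ActivityScenarioRule(LoginActivity::class.java)

    // Quick check: suitable for every PR build.
    @SmallTest
    @Test
    fun loginButtonIsVisible() {
        onView(withId(R.id.login)).check(matches(isDisplayed()))
    }

    // Full flow: could run nightly only.
    @LargeTest
    @Test
    fun fullLoginFlow() {
        LoginRobot()
            .provideCredentials("user@example.com", "hunter2")
            .login()
            .verifySuccessScreenShown()
    }
}

To run only tests of a given size: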

adb shell am instrument -w -e size [small|medium|large] com.android.foo/androidx.test.runner.AndroidJUnitRunner

The final tip for improving speed is sharding. Instead of running 50 UI tests on one emulator, 25 can run on each of two emulators. To use sharding:

adb shell am instrument -w -e numShards 2 -e shardIndex 1 com.android.foo/androidx.test.runner.AndroidJUnitRunner
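shardIndex is zero-based, so the companion emulator would run the other half of the suite with:

adb shell am instrument -w -e numShards 2 -e shardIndex 0 com.android.foo/androidx.test.runner.AndroidJUnitRunner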

Reliability and Reproducibility

There is nothing more frustrating than having UI tests fail randomly and then pass on a rebuild or rerun. Without reliable and reproducible UI tests, it's quite discouraging to keep writing and maintaining them.

One of the most helpful tools here is Android Test Orchestrator, which provides two main benefits:

  1. Minimal shared state. Each test runs in its own Instrumentation instance, so if tests share app state, most of that shared state is removed from the device's CPU and memory after each test. This can be taken one step further: the clearPackageData flag removes all shared state from the device's CPU and memory after each test.
  2. Crashes are isolated. Even if one test crashes, it will take down only its own instance of Instrumentation, so the other tests in the suite will still run.

To start using the Test Orchestrator, include the following in the build.gradle file:
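A minimal sketch of the setup (artifact versions were current around the time of writing; check for newer releases):

android {
    defaultConfig {
        testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
        // Remove all shared state between tests (see above).
        testInstrumentationRunnerArguments clearPackageData: 'true'
    }
    testOptions {
        execution 'ANDROIDX_TEST_ORCHESTRATOR'
    }
}

dependencies {
    androidTestImplementation 'androidx.test:runner:1.2.0'
    androidTestUtil 'androidx.test:orchestrator:1.2.0'
}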

Run it using:

./gradlew connectedCheck

Another pain point in getting reproducible tests is animations. Tests that stall or wait for animations and transitions to finish can hit timeouts or fail assertions. Animations can be disabled using the following:

adb shell settings put global window_animation_scale 0
adb shell settings put global transition_animation_scale 0
adb shell settings put global animator_duration_scale 0

Hopefully these points help provide some structure for achieving the initial four goals. Please comment if there are any other tips, suggestions, or improvements.

P.S. Interested in joining us? We're hiring. Check out our open Engineering roles.
