
Automating Android Jetpack Compose using Appium

Raj Varma
Nov 16 · 6 min read

The Android ecosystem has seen many new additions over the years. Usually, these changes affect app developers more than functional test automation does. However, the amazing new Jetpack Compose framework is going to significantly affect your end-to-end (E2E) tests, especially if you are using a high-level tool like Appium.

In this blog, I describe various ways of fixing your tests when you use Appium to automate a Compose-based app.

For those of you in a hurry, you can skip forward to the section Using Appium to automate Compose.

But first, what is Jetpack Compose?

Jetpack Compose is the modern toolkit for building native Android UI. Traditionally, Android UI has been created using XML layouts, but with Jetpack Compose, XML layouts are a thing of the past: UI is defined as composable functions written in Kotlin. It is concise, declarative, and much more developer-friendly than XML. Given that it has so many advantages, it may well become the first-class framework for writing Android UI.
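To make this concrete, here is a minimal sketch of a composable function (the `Greeting` name is my own illustration, not from any particular app):

```kotlin
import androidx.compose.material.Text
import androidx.compose.runtime.Composable

// A composable is just a Kotlin function annotated with @Composable.
// It declares what the UI looks like for its inputs; no XML involved.
@Composable
fun Greeting(name: String) {
    Text(text = "Hello, $name!")
}
```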

Here at Bumble — the parent company of Bumble and Badoo, two of the world’s highest-grossing dating apps with millions of users worldwide — we have already started adopting Jetpack Compose on some of our screens. However, early on we quickly saw that our E2E tests written in Appium were not happy.

Why so?

Under the hood, Appium uses native testing technologies such as UiAutomator2 or Espresso to drive the app. We use Appium with the espresso-driver, which relies heavily on native view (android.view.View) classes and their attributes to locate elements on the screen. Screens developed using Jetpack Compose consist of semantics nodes instead of Android View objects. Espresso doesn't understand these semantics nodes, so it can't find them. It follows that Appium can't find them either.

Is there any alternative in Appium?

Yes, there is: UiAutomator2. The UiAutomator2 testing framework relies on accessibility services, and the semantics tree corresponding to Jetpack Compose's composition is understandable to accessibility services. Therefore, UiAutomator2 can see the semantics tree and find elements, e.g. by using accessibility identifiers (content descriptions). Appium has a driver built on top of UiAutomator2, the appium-uiautomator2-driver. There is also some basic support for UiAutomator APIs in the Appium espresso-driver, and we started using it as a stop-gap.

But, why a stop-gap? Why not use the UiAutomator driver?

Since Compose nodes lack any id property to uniquely identify an element, the second-best option for locating elements with UiAutomator is the content description. However, if content descriptions are used heavily for testability, we risk compromising accessibility. To improve testability, Jetpack Compose provides a semantics property called testTag. This property can be assigned to a node and later used to uniquely identify it during testing. Unfortunately, this property is not recognised by UiAutomator2. Therefore, while we can use UiAutomator2 to drive Jetpack Compose UI, the automation support it provides isn't exactly first-class.
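As a sketch of what testTag looks like in app code (the `SubmitButton` composable and the `submit_button` tag are illustrative names of my own):

```kotlin
import androidx.compose.material.Button
import androidx.compose.material.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.platform.testTag

// testTag marks a node in the semantics tree for tests only,
// leaving the content description free for accessibility services.
@Composable
fun SubmitButton(onClick: () -> Unit) {
    Button(
        onClick = onClick,
        modifier = Modifier.testTag("submit_button") // visible to tests, not to users
    ) {
        Text("Submit")
    }
}
```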

So, given the limitations of UiAutomator and Espresso, being able to provide satisfactory automation support for Jetpack Compose screens in our E2E tests became a problem for us.

What is the right way to automate Jetpack Compose?

Fortunately, the Compose framework provides its own set of testing APIs to find elements, verify their attributes, and perform user actions. The official testing docs provide good insights on how to unit test composables. This support is provided by the following libraries:

The AndroidComposeTestRule, which is included in the above libraries, can be used to access the semantics tree and nodes of the current Compose screen. While this may be good for unit testing, what about our E2E tests using Appium?
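For context, a typical instrumented unit test with this rule looks roughly like the sketch below (`MainActivity` and the `submit_button` tag are assumptions for illustration):

```kotlin
import androidx.compose.ui.test.assertIsDisplayed
import androidx.compose.ui.test.junit4.createAndroidComposeRule
import androidx.compose.ui.test.onNodeWithTag
import org.junit.Rule
import org.junit.Test

class GreetingTest {
    // createAndroidComposeRule launches the activity and exposes the
    // semantics tree of its Compose content to the test.
    @get:Rule
    val composeRule = createAndroidComposeRule<MainActivity>()

    @Test
    fun submitButtonIsShown() {
        // Find a node by its testTag in the semantics tree and assert on it.
        composeRule.onNodeWithTag("submit_button").assertIsDisplayed()
    }
}
```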

I am an Appium contributor as well as one of the maintainers of Appium's espresso-driver. To keep E2E tests happy, I decided to add Compose support inside Appium. Since the above libraries are based on Android instrumentation, we needed to instrument our app to use AndroidComposeTestRule. Appium's espresso-driver already handles application instrumentation, so it was best to reuse it; we therefore decided to provide Compose automation support inside Appium's espresso-driver. Our amazing Android developers at Bumble, along with myself, conducted a few experiments and came up with a proof of concept. After raising a series of pull requests in Appium, at last we had first-class automation support for driving Compose views directly inside Appium!

Using Appium to automate Compose

The prerequisite for all of this is that you are using Appium's espresso-driver. To do this, set the automationName capability to espresso.
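A minimal session-setup sketch in Java might look like this (the server URL and app path are placeholders; this assumes a recent Appium java-client):

```java
import io.appium.java_client.android.AndroidDriver;
import org.openqa.selenium.remote.DesiredCapabilities;

import java.net.URL;

public final class ComposeSession {
    public static AndroidDriver startSession() throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "Android");
        // Select the espresso driver instead of the default uiautomator2.
        caps.setCapability("appium:automationName", "Espresso");
        // Hypothetical path to the app under test.
        caps.setCapability("appium:app", "/path/to/app-debug.apk");
        return new AndroidDriver(new URL("http://127.0.0.1:4723"), caps);
    }
}
```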

Now, you can automate your normal views the usual way. As soon as you land on a screen built using Compose, you have to toggle to compose automation mode. To do this, use Appium’s newly implemented Settings API. In Java, you can do it like this:
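A sketch of the call, assuming `driver` is your AndroidDriver instance and a java-client version with the string-based setSetting overload:

```java
// "driver" is the espresso-driver setting name; setting it to "compose"
// routes subsequent commands through AndroidComposeTestRule instead of Espresso.
driver.setSetting("driver", "compose");
```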

This will change the context of the driver so that instead of using Espresso to find elements, the AndroidComposeTestRule will be used. All the Appium commands will work as they normally do.

One thing to note here is that once you land on a screen that has Compose views, you might see that views fail to render. This is because Compose tests need to be synchronised with an internal virtual Compose clock. By default, this synchronisation is disabled. Changing the driver setting to compose using the above-mentioned command will enable this and Compose views should then render as usual.

To switch back to normal Android views (Non-Compose), just invoke:
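Again assuming `driver` is your AndroidDriver instance:

```java
// Restore the default so commands use Espresso for classic View-based screens.
driver.setSetting("driver", "espresso");
```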

If your context is compose, then the page source from Appium gives you the unmerged semantics tree of Compose nodes. All find_element queries by XPath are made against this tree.
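For example, a lookup might be sketched as below. The attribute name and value are illustrative, not guaranteed: inspect `driver.getPageSource()` on your own screen to see the real attributes exposed for your nodes.

```java
import io.appium.java_client.AppiumBy;
import org.openqa.selenium.WebElement;

// With the "driver" setting switched to "compose", XPath queries run
// against the unmerged semantics tree rather than the View hierarchy.
WebElement node = driver.findElement(AppiumBy.xpath("//*[@text='Submit']"));
node.click();
```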

As of now, support for all major routes has been implemented: we can find elements, get their attributes, and perform actions on them. However, a few things remain to be implemented. One feature yet to be added is gestures via the Actions API; this is in the pipeline, and contributions from the community are most welcome. Also, as this is all quite new, there might be some issues in setting it up. One issue we encountered, for example, was a dependency conflict between the app under test and the espresso-driver. I fixed this by simply providing the espressoBuildConfig capability as below:
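The capability takes a JSON string; a sketch of pinning a conflicting dependency might look like this (the dependency coordinates are purely illustrative — use whichever artifact actually conflicts in your build):

```java
// espressoBuildConfig lets you adjust the espresso-driver server build,
// e.g. pinning a dependency to the version your app under test uses.
caps.setCapability("appium:espressoBuildConfig",
        "{\"additionalAndroidTestDependencies\": "
        + "[\"androidx.lifecycle:lifecycle-extensions:<version>\"]}");
```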

Make sure you replace the <version> with the exact version in your application under test. You can read more about this capability here.

The Complete Example

This example is in Ruby, but the approach can be applied in any other programming language of your choice.
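A condensed sketch of the flow using the appium_ruby_lib_core gem follows; the app path, locators, and screen names are hypothetical stand-ins for the compose_playground app's real ones:

```ruby
require 'appium_lib_core'

caps = {
  platformName: 'Android',
  'appium:automationName' => 'Espresso',
  'appium:app' => '/path/to/compose_playground.apk' # hypothetical path
}

core = Appium::Core.for(capabilities: caps)
driver = core.start_driver(server_url: 'http://127.0.0.1:4723')

# Classic View-based screen: the default espresso context works as usual.
driver.find_element(:accessibility_id, 'open_compose_screen').click

# The next screen is built with Compose, so switch automation modes.
driver.update_settings('driver' => 'compose')

# Locators now run against the Compose semantics tree.
driver.find_element(:xpath, "//*[@text='Submit']").click

# Back on a View-based screen, switch the setting back.
driver.update_settings('driver' => 'espresso')

driver.quit
```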

You can clone the complete project from this GitHub repo. Also, the source code for the compose_playground APK can be found here.

The best part of the above solution is that our test code never changes: irrespective of the kind of screen shown, it uses the same interface. Just switch the driver context based on the type of screen presented and it's all sorted! Do try this on your Compose apps, and if you encounter any issues, please post them in the comments section and I'll be happy to help.
