Using Android Screenshot Tests to Verify View Correctness

Emily Fujimoto
Thumbtack Engineering
5 min read · Dec 19, 2020

It’s no secret that automated tests are an important way to maintain high-quality code and prevent later changes from breaking functionality. At Thumbtack we’re always looking for ways to encourage test writing by making the process easier and more robust. One big pain point for us as Android engineers was that, for a while, our team didn’t have a good way to test that the UIs we built looked correct. We would verify this manually when we built a feature, but there was no guarantee that someone else, perhaps someone less familiar with the code, wouldn’t make a change later and miss testing an edge case. Our team first tried to solve this problem with Robolectric tests, which let us make assertions about the UI and perform some simple interactions with it. While this was better than no tests at all, when it came to verifying that the views looked correct, it had two main drawbacks.

The first was that it required a lot of boilerplate to write. Beyond the setup involved to get the screen to render (typically just defining an appropriate view model), the developer then needed to add assertions about what they expected: that a given view was visible, that it showed a specific string, and so on. That’s at least one assertion per UI element you want to verify, sometimes more, which adds up quickly for more complex views.

An example of testing that certain views are visible with the correct text
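
A minimal sketch of that kind of test might look like the following. The ProfileView, ProfileViewModel, and the exposed nameTextView and titleTextView properties are hypothetical stand-ins, not names from our actual codebase:

```kotlin
import android.view.View
import androidx.test.core.app.ApplicationProvider
import org.junit.Assert.assertEquals
import org.junit.Test
import org.junit.runner.RunWith
import org.robolectric.RobolectricTestRunner

@RunWith(RobolectricTestRunner::class)
class ProfileViewTest {

    @Test
    fun `shows name and title`() {
        // Setup: build the view and bind a view model (the same step screenshot tests reuse).
        // ProfileView and ProfileViewModel are hypothetical examples.
        val view = ProfileView(ApplicationProvider.getApplicationContext())
        view.bind(ProfileViewModel(name = "Jane Doe", title = "Plumber"))

        // At least one assertion per UI element we care about.
        assertEquals(View.VISIBLE, view.nameTextView.visibility)
        assertEquals("Jane Doe", view.nameTextView.text.toString())
        assertEquals(View.VISIBLE, view.titleTextView.visibility)
        assertEquals("Plumber", view.titleTextView.text.toString())
    }
}
```

Even this small view needs four assertions, and none of them say anything about how the view actually renders.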

The other problem with Robolectric tests was that the assertions didn’t let you verify that a view actually looked correct. You could assert that two TextViews were visible with the correct text, but you had no way to check how longer text wrapped or whether the two views overlapped. So when it came to testing how the UI looked, Robolectric tests required a lot of assertions yet were limited in what you could actually verify.

To fill this gap in our testing landscape, we looked to screenshot tests. This is something our iOS coworkers had been using for a while, and after reviewing our options, we chose a combination of Facebook’s Screenshot Tests for Android library and a custom Python script to present the image diff results. To write a screenshot test, the developer first does the setup required to get the screen to render, a step they were already doing for Robolectric tests. After that, they run a script that renders the view on a device and generates one image per test written. Once the developer confirms that the generated screenshots look as expected, the images are checked into the repository as references. Later, when our suite of screenshot tests runs, a new image is generated for each test and compared against the original reference image. If there’s a difference between the two, the test fails and outputs a visual diff that makes it easy to see what the discrepancy is. This process avoids the need to write a laundry list of assertions while also testing far more than those assertions ever did.
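For comparison, a screenshot test built on Facebook’s library looks roughly like the sketch below. ProfileView and its view model are again hypothetical, and details like the fixed width are assumptions rather than our exact setup:

```kotlin
import androidx.test.platform.app.InstrumentationRegistry
import com.facebook.testing.screenshot.Screenshot
import com.facebook.testing.screenshot.ViewHelpers
import org.junit.Test

class ProfileViewScreenshotTest {

    @Test
    fun longNameWraps() {
        // Same setup as the Robolectric test: build the view and bind a view model.
        // ProfileView and ProfileViewModel are hypothetical examples.
        val context = InstrumentationRegistry.getInstrumentation().targetContext
        val view = ProfileView(context)
        view.bind(
            ProfileViewModel(
                name = "A very long name that should wrap onto a second line",
                title = "Plumber"
            )
        )

        // Measure and lay out the view at a fixed width so renders are deterministic.
        ViewHelpers.setupView(view).setExactWidthDp(360).layout()

        // Render the view and record the image for comparison against the reference.
        Screenshot.snap(view).setName("profile_view_long_name").record()
    }
}
```

Laying the view out at a fixed width is what makes cases like long text wrapping, the kind of thing plain assertions couldn’t check, show up in the recorded image.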

Example image diff for a failed test

Through our use of screenshot tests, we were able to catch layout problems when we upgraded our ConstraintLayout library, changed how we do tinting, and removed our use of Calligraphy in one of our core libraries. These are issues we could not have caught with our Robolectric tests. There are also times when our screenshot tests fail on changes that were intentional. While this doesn’t represent an actual bug, it does remind us that the layout changed, drawing more attention to those changes. In these cases, it’s simply a matter of regenerating the test image and checking the new image into the repository.

There have also been a few other benefits to screenshot tests beyond solving our original problem. The first is that adding screenshot tests and the associated reference images in the same commit as a newly created view gives the code reviewer a literally clear picture of what is being built. These images convey the different visual states more quickly and easily than looking at the XML preview and trying to cross-reference view IDs with the code. A reviewer shouldn’t have to piece together what you’re trying to accomplish from the very code under review; it makes more sense to let the desired end state inform their feedback. Plus, the screenshots can then be shared with designers for additional comments.

A code review with screenshots added

Another benefit is that if it’s hard to trigger a given view (say, if it’s hard to get the backend to return the appropriate data, or there are a lot of steps in the flow before reaching the view you care about), generating a screenshot can be an easier way to double-check your progress. The tests can even be run right from Android Studio, and the resulting images viewed with the Device File Explorer, which lets you pull them up from where they were saved on the emulator. It is often faster to hardcode the data needed and run the app than to use our script. However, going that route still lends itself easily to writing the screenshot test for its other benefits, since you can just copy the hardcoded data into the test.

Viewing a screenshot generated in Android Studio

Screenshot tests have been a great way to verify that our UIs are correct. They have prevented regressions and filled a testing gap not covered by Robolectric. They’re also easy to integrate into our existing workflows. Stay tuned for a future blog post diving into the inner workings of our screenshot tests as well as future improvements we still hope to make. And if you’re passionate about Android or iOS development, we’d love for you to join us.
