Automated Visual Testing With Snapshots: Part 1

Atakan Karslı · Trendyol Tech · Nov 25, 2021

Traditional UI testing frameworks work by finding an app’s UI elements with queries, creating events and sending them to those elements through their accessibility interfaces, and providing APIs that let us examine a UI element’s properties and state and compare them against the expected state.

Even though we call them ‘User Interface’ tests, they don’t actually need a Graphical User Interface. They can run without any GUI, in so-called headless mode, since they interact with the app through proxy APIs.

By default, they are not designed to detect visual changes such as:

  1. Misaligned or overlapping images, texts, or buttons.
  2. Partially visible elements.
  3. Color differences.

Image comparison that can’t be detected by UI tests

Since it is not possible to update mobile applications instantly, visual errors are much more critical and can live longer. Previously, manual testing was the only way to get insight into visual UI changes, but it is time-consuming and error-prone. At this point, visual testing seems to be the answer to such problems.

As the Trendyol iOS team, we always try to find and integrate tools that support manual processes. The snapshot testing library was one of them: it is an effective option because it is fast and the tests are easy to create and maintain.

How does the Snapshot Testing library work?

The first time an assertion runs, a reference snapshot is automatically recorded. Subsequent test runs compare the runtime value against that reference. If they don’t match, the test fails and shows the difference.

The example implementation of SnapshotTesting

It provides assertSnapshot(matching:as:), which compares a value against its reference snapshot using the given strategy. Besides images, it can work with other formats as well.
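A minimal unit-style sketch of how the library is used might look like this (the label and values are illustrative, not our actual test code):

import SnapshotTesting
import UIKit
import XCTest

final class PriceLabelSnapshotTests: XCTestCase {
    func testPriceLabel() {
        // A simple view built from static data, so the output is deterministic.
        let label = UILabel(frame: CGRect(x: 0, y: 0, width: 200, height: 44))
        label.text = "499,99 TL"
        label.backgroundColor = .white
        label.textAlignment = .center

        // The first run records a reference image; later runs compare
        // against it and fail on any difference.
        assertSnapshot(matching: label, as: .image)
    }
}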

Before going into detail about how we use it, I should mention that the biggest prerequisite is mock data, which avoids issues with dynamic content. This is the reason we postponed snapshot tests for a while: without mock data they were impossible, since the snapshots taken would differ on every run.

How do we use snapshots with UI Tests?

Generally, snapshot tests are used with XCTest in a structure similar to unit tests. Although that approach has many benefits, the views of a complex application such as Trendyol cannot be exercised as easily as in the example above. We plan to extend our unit-style snapshot tests with numerous examples as well, and we may write a separate article about them in the future.

In this article series, we will only describe the structure we use in UI tests. In the second part, Beyza Ince will explain how we use these tests in a pipeline that informs teams on a daily basis and lets us update tests with a single click of a button from Slack.

We created a helper class for operations such as waiting for asynchronous operations to finish, cropping the image, and painting over parts of it. I’ll explain each of them separately.

croppedScreen:
Since we use XCUITest, we need to take a screenshot of the full screen to assert, but this causes conflicts in the status bar (battery, time, etc.).

We solved this problem in the simplest way: cropping out the few pixels used by the status bar before asserting the screenshot. The status bar height is not the same on every iPhone, so you have to crop according to the device you are running on.
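Here is a rough sketch of what a croppedScreen helper can look like (the implementation and the 47-point status bar height are assumptions for illustration):

import UIKit
import XCTest

extension XCTestCase {
    /// Takes a full-screen screenshot and crops out the status bar area.
    func croppedScreen(statusBarHeight: CGFloat = 47) -> UIImage {
        let full = XCUIScreen.main.screenshot().image
        guard let cgImage = full.cgImage else { return full }

        // Work in pixel coordinates: the CGImage is in pixels, size is in points.
        let pixelsPerPoint = CGFloat(cgImage.width) / full.size.width
        let cropRect = CGRect(
            x: 0,
            y: statusBarHeight * pixelsPerPoint,
            width: CGFloat(cgImage.width),
            height: CGFloat(cgImage.height) - statusBarHeight * pixelsPerPoint
        )
        guard let cropped = cgImage.cropping(to: cropRect) else { return full }
        return UIImage(cgImage: cropped, scale: full.scale, orientation: full.imageOrientation)
    }
}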

fill:
If you have dynamic elements such as sliders, you may want to paint over their area after taking the screenshot. Based on our experience, we decided to manipulate the mock data so that they do not slide instead of using this approach, since that is more stable and comprehensive.
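For completeness, a fill-style helper could be sketched like this (an assumption, not our actual implementation): it paints a solid color over a given area of the screenshot.

import UIKit

/// Paints a solid color over a dynamic area of the screenshot.
func fill(_ rect: CGRect, in image: UIImage, with color: UIColor = .black) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { context in
        image.draw(at: .zero)
        color.setFill()
        context.fill(rect)
    }
}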

prepareScreen:
It simply waits until the screen stabilizes and then calls the croppedScreen method to get a proper screenshot. We prefer to use the sleep method even though we know how to create smart waits, because it was the most stable option, and stability matters much more to us than the extra time it takes since we planned to run these tests once a day.
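Building on the croppedScreen sketch above, prepareScreen could look roughly like this (the wait duration is an assumption):

import UIKit
import XCTest

extension XCTestCase {
    /// Waits for the screen to stabilize, then returns a cropped screenshot.
    func prepareScreen(waitSeconds: UInt32 = 3) -> UIImage {
        // A plain sleep: less clever than an expectation-based wait,
        // but stable enough for a once-a-day run.
        sleep(waitSeconds)
        return croppedScreen()
    }
}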

Snapshot Test Example with XCUITest

To ensure that they do not fail because of the same problems as our UI tests, we keep the connection between these tests and the UI tests as minimal as possible. To achieve this, we decided to cover the pages that have deep links, which is about 100 cases.

The code block above actually contains two of these tests (a rough sketch is given after this list):

  1. Mock data should be enabled.
  2. When the login parameter is set to true, the app acts as if the user is logged in.
  3. The openDeeplink method simply launches Safari and executes the given deep link to navigate the user to the target screen.
  4. And assertSnapshot does its magic.
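Here is a rough sketch of one such test (launch arguments, identifiers, and the deep link are illustrative assumptions, not our actual code; the Safari queries may need adjusting per iOS version):

import SnapshotTesting
import XCTest

final class DeeplinkSnapshotTests: XCTestCase {

    func testProductDetailDeeplink() {
        // 1 & 2: enable mock data and start as a logged-in user.
        let app = XCUIApplication()
        app.launchArguments += ["-useMockData", "-isLoggedIn"]
        app.launch()

        // 3: navigate to the screen under test through its deep link.
        openDeeplink("trendyol://product?contentId=1234", app: app)

        // 4: wait, crop the status bar, and let assertSnapshot do its magic.
        let screenshot = prepareScreen()
        assertSnapshot(matching: screenshot, as: .image(precision: 0.98))
    }

    // Hypothetical helper: opens Safari and follows the deep link back into the app.
    private func openDeeplink(_ link: String, app: XCUIApplication) {
        let safari = XCUIApplication(bundleIdentifier: "com.apple.mobilesafari")
        safari.launch()

        // The URL bar query may differ between iOS versions.
        safari.textFields.firstMatch.tap()
        safari.typeText(link + "\n")

        // Confirm the "Open in app?" prompt if Safari shows one.
        let openButton = safari.buttons["Open"]
        if openButton.waitForExistence(timeout: 5) {
            openButton.tap()
        }
        _ = app.wait(for: .runningForeground, timeout: 10)
    }
}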

If it detects a visual change, the test report looks like this:

What did we learn?

Like any other tool, these tests require some care and iteration. The success of this structure depends directly on keeping the mock data regularly updated.
Despite its challenges, making visual comparisons means having visuals in the reports, which makes it really easy to understand where the problem is.

Precision:

The precision value is 1 by default, and it was causing false positives due to anti-aliasing and other factors. To avoid them, we decreased it to 0.98.
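With the SnapshotTesting library, this tolerance can be passed to the image strategy, for example (the screenshot variable is assumed from the test sketch above):

// Allow up to 2% of the pixels to differ, absorbing anti-aliasing noise.
assertSnapshot(matching: screenshot, as: .image(precision: 0.98))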

SwizzleImage:

Even though we run the tests with mock data, the image URLs in the JSONs were initially copied from the production environment. These images can take a long time to load, and they may fail to load due to timeouts.

This was causing some random test failures. To solve the problem, we swizzled the image URLs of the failing tests and served an image bundled in the project instead.
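The exact mechanism isn’t shown here, but one way to achieve this kind of substitution is classic method swizzling of the image-loading call; a rough sketch under assumed class, method, and asset names:

import ObjectiveC
import UIKit

final class RemoteImageLoader: NSObject {
    // Stand-in for the real network-backed loader.
    @objc dynamic func loadImage(from url: URL, into imageView: UIImageView) {
        // ... real network loading ...
    }

    // Replacement used under test: ignore the remote URL and use a
    // stable image shipped with the project.
    @objc dynamic func swizzled_loadImage(from url: URL, into imageView: UIImageView) {
        imageView.image = UIImage(named: "snapshot_placeholder")
    }

    static func swizzleForSnapshots() {
        guard
            let original = class_getInstanceMethod(
                RemoteImageLoader.self,
                #selector(RemoteImageLoader.loadImage(from:into:))),
            let swizzled = class_getInstanceMethod(
                RemoteImageLoader.self,
                #selector(RemoteImageLoader.swizzled_loadImage(from:into:)))
        else { return }
        method_exchangeImplementations(original, swizzled)
    }
}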

Conclusion:
When it comes to testing, writing tests and adding new tools is usually the easy part. The hard part is making sure these tests and tools live long enough to actually contribute to the process.

In the second part, Beyza Ince will explain how we integrate these tests in our CI/CD pipelines.

Atakan Karslı · Senior Developer In Test @Trendyol | Curator @Testep