Selective Unit Testing on iOS: Achieve 80% Faster Feedback

Atakan Karslı
Trendyol Tech
May 29, 2024

As the iOS Platform team, we have faced countless challenges in the past few years, from scaling our app’s infrastructure to ensuring our development and testing processes keep pace with our rapid growth. We’ve implemented innovative solutions to simplify these processes, allowing us to maintain high-quality standards while adapting to an ever-expanding codebase and team.

Testing is a critical focus area for us. With an extensive suite of almost 33,000 tests, comprising 30,000 unit tests and 3,000 UI tests (covering smoke, snapshot, and regression tests) across more than 250 modules, ensuring the efficiency of our testing process is crucial.

In this article, we share how implementing selective testing for unit tests in our Trendyol iOS projects has significantly accelerated our feedback loop. I’ll also introduce you to the open-source tool xctestplanner, so you can implement everything described here in your projects with just one command.

xctestplanner selective-testing -f {testPlanPath} -p {projectPath} -t {targetBranch}

The Need for Selective Testing

In testing, the focus often shifts towards writing more tests rather than ensuring the right tests are being run. More tests, more coverage, and more assertions. However, increasing CI resources to match this growing number of tests isn’t always possible. Moreover, slow feedback can negatively affect developers’ experience and undermine their perception of the testing process.

No matter how small the changes are, running all unit tests for every commit uses a lot of resources and slows down feedback, which affects development speed. At Trendyol, where our suite of over 30,000 unit tests continues to grow, this approach was becoming a serious risk to our testing and development processes. To solve this, we switched to selective testing.

Evolution of xctestplanner

xctestplanner was initially developed to meet a straightforward need: managing Xcode test plans directly from the command line. In iOS projects, test plans (JSON files with a .xctestplan extension) are essential for organizing and running tests with different configurations. However, there was no built-in command-line interface for editing these test plans on the fly in CI pipelines. xctestplanner filled this gap by allowing objects to be added to or removed from the JSON file, using Swift and ArgumentParser.
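For context, a .xctestplan file is plain JSON. Here is a simplified sketch of its structure (trimmed, with placeholder identifiers rather than a real file from our project):

{
  "version" : 1,
  "defaultOptions" : { },
  "configurations" : [
    { "id" : "UUID-PLACEHOLDER", "name" : "Default", "options" : { } }
  ],
  "testTargets" : [
    {
      "target" : {
        "containerPath" : "container:MyApp.xcodeproj",
        "identifier" : "UUID-PLACEHOLDER",
        "name" : "XModuleTests"
      },
      "skippedTests" : [ "SomeSuite/testFlakyScenario()" ]
    }
  ]
}

Commands like skip and select simply edit entries such as skippedTests (and its counterpart, selectedTests) in this JSON.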

The primary use case for xctestplanner at Trendyol was to enable dynamic editing of test plans in CI pipelines. This capability allowed us to automatically mute flaky UI tests or let team members skip and unskip tests via Slack commands without modifying the code. The initial commands added were select and skip.
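For illustration, and assuming skip follows the same flag convention as the other commands shown in this article (check the repo’s README for the exact syntax), muting a flaky test might look like:

xctestplanner skip -f {testPlanPath} {testIdentifier}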

You can check out this article to explore more about these strategies.

Implementing Selective Testing

As we faced the challenge of an ever-growing number of tests slowing down our development process, we recognized the potential to improve xctestplanner’s capabilities. To implement a selective testing strategy, we introduced the select-target command. This command allows us to specify and enable only particular test targets in our test plans.

xctestplanner select-target -f filePath XModuleTests YModuleTests

The Recipe

Before developing our selective testing tool, we researched existing methods. Many tools, including Tuist’s test command, use a technique that involves creating a hash for each module; when code changes are detected, these tools check which hashes have changed. Tuist’s test command also creates a new scheme containing only the modules whose hashes changed and runs tests on it. However, this didn’t fit our setup: we build the iOS app once and all test pipelines run on that single build, so we still needed the entire app build for our UI tests.
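To make the hashing idea concrete, here is a minimal sketch of module fingerprinting in Swift. It is illustrative, not Tuist’s actual implementation, and for brevity it only hashes top-level Swift files:

import CryptoKit
import Foundation

// Sketch of hash-based change detection: fingerprint a module by hashing
// its Swift sources; if the fingerprint differs from the cached one,
// the module is considered changed.
func moduleFingerprint(at moduleURL: URL) throws -> String {
    let sources = try FileManager.default
        .contentsOfDirectory(at: moduleURL, includingPropertiesForKeys: nil)
        .filter { $0.pathExtension == "swift" }
        .sorted { $0.path < $1.path } // stable order -> stable hash
    var hasher = SHA256()
    for file in sources {
        hasher.update(data: try Data(contentsOf: file))
    }
    return hasher.finalize().map { String(format: "%02x", $0) }.joined()
}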

So we decided to start with a simpler approach using git diff.

// 1. Identify files changed relative to the target branch
let gitDiff = try executeShellCommand("git diff --name-only \(targetBranch)")
// 2. Map changed files to their modules
let moduleNames = findModuleNames(in: gitDiff)
// 3. Find modules that depend on the affected ones (via the Tuist graph)
let dependentModules = findDependentModules(for: moduleNames)
let combinedModules = Set(moduleNames + dependentModules).sorted()
// 4. Enable only the matching test targets (the select-target command)
let selectedTargets = selectTestTargets(combinedModules)
  1. Identify Changed Files: Use git diff to identify the files that have changed relative to the target branch (typically origin/develop).
  2. Determine Affected Modules: Map these changed files to their respective modules (a short sketch follows this list).
  3. Trace Dependencies: Use the Tuist graph to find the dependent modules of the affected ones.
  4. Select Relevant Tests: Enable test targets only for the modules associated with the changed files.
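To illustrate steps 1 and 2, here is a minimal sketch of mapping changed file paths to module names. The Modules/<Name>/ directory convention is an assumption for the example, not necessarily our exact layout:

func findModuleNames(in gitDiff: String) -> [String] {
    let changedFiles = gitDiff.split(separator: "\n").map(String.init)
    let modules = changedFiles.compactMap { path -> String? in
        // e.g. "Modules/Checkout/Sources/CheckoutViewModel.swift" -> "Checkout"
        let components = path.split(separator: "/").map(String.init)
        guard let index = components.firstIndex(of: "Modules"),
              index + 1 < components.count else { return nil }
        return components[index + 1]
    }
    return Array(Set(modules)).sorted()
}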

Applying Selective Testing to Your Project

To simplify the integration of selective testing into pipelines, I’ve combined these steps into a single command that performs the entire process described above (you can visit the GitHub repo for more details).

xctestplanner selective-testing -f {testPlanPath} -p {projectPath} -t {targetBranch}
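For example, a concrete invocation in a CI job could look like this (the test plan path, project path, and branch are placeholders for your own setup):

xctestplanner selective-testing -f TestPlans/UnitTests.xctestplan -p MyApp.xcodeproj -t origin/develop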

Once we realized this approach was feasible, we decided to start with a proof of concept.

The Experiment (POC)

Selective Testing Grafana Dashboard

For our proof of concept, we decided to execute tests for the first commit both in full and selectively to ensure the process worked as intended. Additionally, we collected and analyzed data on selected tests and their durations over one month and used this information to refine our strategy.

Initially, we ran tests for the dependent modules of affected modules, using Tuist’s graph command to identify and include these dependencies. However, after experimenting, we found that because all dependencies in our unit tests are mocked, we didn’t need to include dependent modules at all. Two months of testing confirmed that no issues were missed, allowing us to concentrate exclusively on modules directly impacted by code changes.

Key Benefits

Our initial implementation of selective testing reduced our pipeline’s unit test execution time by 73.7%. However, we noticed that when merge requests fell behind the target branch, we were running more tests than necessary.

To address this, we enhanced our existing auto-pull job to run every time the pipeline was triggered. This adjustment brought our average test execution time down from 14.5 minutes to less than 3 minutes, achieving an overall reduction of approximately 80%.
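Conceptually, the auto-pull step just brings the merge request branch up to date with the target branch before the diff runs, so commits that already landed on develop don’t surface as changes. A simplified sketch (our actual job includes more safeguards):

git fetch origin develop
git merge --no-edit origin/develop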

Beyond the impressive 80% speedup, there is an even more significant benefit. Although our unit test count grew from 29,848 to 30,974 in May, our average test execution time held steady at around 3 minutes; without selective testing, it would have climbed from 14 toward 16 minutes. This indicates that the improvement is durable and that we won’t need to allocate additional resources to unit test execution as the suite grows.

Future Plans

We are currently experimenting with various approaches to extend our selective testing strategy to UI, smoke, and snapshot tests. Unlike unit tests, these tests do not live in the same modules as the code they exercise, so we first need to map tests to the modules they cover. And because UI tests run directly against the application without mocking other modules, we also need to select dependent modules recursively, utilizing Tuist’s graph to identify and include these dependencies.
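As a sketch of what recursive selection could look like, the function below walks a dependency map (such as one exported from tuist graph) and collects every module that transitively depends on a changed one. The data shape and names are illustrative, not our production code:

// graph maps each module to the modules it depends on,
// e.g. ["CheckoutUI": ["Checkout", "DesignSystem"]]
func transitiveDependents(of changed: Set<String>,
                          graph: [String: Set<String>]) -> Set<String> {
    // Invert the graph: module -> modules that directly depend on it
    var dependents: [String: Set<String>] = [:]
    for (module, dependencies) in graph {
        for dependency in dependencies {
            dependents[dependency, default: []].insert(module)
        }
    }
    // Iteratively expand from the changed modules through their dependents
    var result = changed
    var queue = Array(changed)
    while let module = queue.popLast() {
        for dependent in dependents[module, default: []] where !result.contains(dependent) {
            result.insert(dependent)
            queue.append(dependent)
        }
    }
    return result
}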

After refining our approach in production over the next few months, we plan to share our findings and detailed process in a follow-up article. Stay tuned for updates by following us.

Want to work on our team?

Do you want to join us on the journey of building the e-commerce platform that has the most positive impact?

Have a look at the roles we’re looking for!

Atakan Karslı
Senior Developer In Test @Trendyol | Curator @Testep