How to Make UI Tests in iOS Less Annoying
UI testing in iOS is often criticized for being too slow or hard to maintain. At Porsche Digital, we found a way to avoid the pitfalls and make UI testing smoother. Here's how, explained by Axel Hodler, who is part of the Software Engineering Chapter at Porsche Digital and has a strong focus on thorough test automation, test-driven development, and keeping things simple.
In comparison with unit tests, UI testing often gets a bad reputation. Depending on the test suite, UI tests might be slow, brittle, and hard to maintain. Sadly, these drawbacks often lead developers to decide not to work with UI tests at all. This is especially true in iOS, where UI tests seem to be less commonly used than in web apps.
At Porsche Digital, we are well versed in building iOS apps. Finding new digital businesses, improving corporate processes, and enhancing the experience of our customers is often accompanied by creating or extending an iOS app. We have high standards, and UI tests are tremendously helpful in making sure the user workflow in our app runs as intended and stays frictionless, even when we introduce changes and extensions, and across multiple devices. UI tests are also extremely helpful in fixing bugs where the cause is not immediately clear.
In short, we proceed as follows:
- We write a UI test to reproduce the issue in an automated way.
- We drill down to the root cause, adding unit tests along the way and adapting the implementation.
- As soon as the UI test passes, we can consider the issue fixed.
Tricks and strategies for better UI tests
To improve our workflow, we have identified a few tricks and strategies that help us avoid common pitfalls when working with UI tests. Below, I'd like to offer solutions to the drawbacks mentioned above.
1. Using “waitForExistence” instead of “sleep”
Sooner or later we will run into timing issues in UI tests: the dreaded “sometimes it fails!”.
Our tests interact a lot faster with the UI than any human could. Thus, tests will attempt to interact with elements not yet rendered on the screen.
A simple solution would be to add a sleep to give the element some time to appear: the test interacts with the view, triggering the creation of an element, then sleeps, and finally interacts with the element.
We can take the popular alert as an example:
This works, but sleep(3) will wait for a full 3 seconds.
Another solution would be waitForExistence. It uses polling and will often be much faster than 3 seconds, since it only waits until the element is visible; that could even be a matter of milliseconds. If the element does not exist, the call fails after waiting the full three seconds.
In the code below we remove the sleep and use waitForExistence.
Working with waitForExistence helps a lot when dealing with flaky tests, as it allows our tests to be much faster than using sleep. Additionally, we tend to increase timeouts again and again when experiencing flaky tests: what was once a timeout of 1 might soon become a timeout of 4 or 5, all because the test failed one time too many. Here waitForExistence saves us that difference: we keep the test stable by stating a timeout of 5, but only have to wait a second or less most of the time.
2. Keeping the code readable
The quality of our tests should be held to the same standard as our production code. We improve the tests step by step, starting with the timeout code above: quickest win first. It turns out that 3 is a magic number, and we can extract it into a variable.

```swift
var defaultAlertTimeout: Double { 3.0 }
```

We change the line waiting for the alert to appear to:

```swift
_ = alert.waitForExistence(timeout: defaultAlertTimeout)
```
We may have multiple such timeouts, for example for external requests such as loading websites, or for calculations.
We can even extract the three lines of confirming the alert via "OK" into their own function. Instead of reading three lines, the developer only needs to read one line explaining what the function does. An example would be confirmLocationNotAvailable.
After all, we might not even care too deeply about how confirmLocationNotAvailable is achieved in the UI. Our assertions might refer to something else entirely, for example that the app shows the user how to use it without a location. Regarding the choice of UI elements, we might use an alert in the beginning, because it is easy, and later switch to a better UI representation.
Multiple tests could use the helper function. If things ever change, we only have to change the helper function, not every test using the UI interaction.
The simplest way to group these functions is to move them into an extension of XCUIApplication.
We can now conveniently reuse them in our tests. Once the extension grows too large, we can consider moving toward Page Objects, with calls such as map.confirmLocationNotAvailable.
3. Prepare the app state programmatically
Apple states in the docs:
UI testing exercises your app’s UI in the same way that users do without access to your app’s internal methods, functions, and variables.
Therefore, we cannot mock or stub specific functionality. Examples would be showing an onboarding screen or displaying a list of documents the user has created in our app.
What we can do is extend the app to allow some setup when it is running under test. We might be able to achieve the same by setting up the state through the UI. However, this usually takes longer, and some state might not be reachable via the UI at all, simply because the user cannot create it that way.
We can use compiler flags for this. The block inside #if DEBUG will only be compiled when the DEBUG flag is set. It is set during testing, but not in the release build for the App Store.
Instead of only interacting with UserDefaults, as shown in the snippet above, we can do a lot more, such as using our services to delete, store, or edit documents.
Next, we make sure to call resetOnboardingStatus in the AppDelegate.
If we want to reset the onboarding status, we add the corresponding launch argument in our tests before launching the app.
We should not forget to move these calls into helper functions.
4. Use the recorder
After a while, we will be mostly reusing existing concepts. We will have learned how to press buttons, navigate back, change tabs, handle alerts, and fill out web forms.
In the beginning, the syntax is not trivial though. Therefore, it is helpful to use the recording tool to learn what certain interactions look like in code. This is especially useful when attempting something like an OAuth 2.0 login via SFSafariViewController.
We press the red record button, interact with our app, and see the interactions get added to our test.
The recorder generates working output, but it might not be the most readable code, nor comply with our standards. However, we can convert it into readable functions in our extension.
5. See the test fail first
When we follow test-driven development, we start with a failing test. The reasoning is to build just enough functionality to make the test pass. Afterward, we refactor if necessary and write another failing test.
With UI tests, we may have already done some of the implementation. Even then, it makes sense to see the test fail first to make sure it is testing the correct thing. Sometimes a test passes simply because we chose the wrong assertion.
A simple example would be functionality where a modal view, or a sheet in SwiftUI, opens. After doing something on the modal view, we expect some action to happen and the sheet to disappear. We might opt for an assertion that checks whether we are back on the view where the modal appeared.
We might want to use the title of the navigationBar to assert where we are currently located in our app.

```swift
XCTAssertTrue(navigationBars["Settings"].exists)
```
The issue is that our Settings view already existed while the modal was visible, because it was right behind it. Thus, the assertion would always pass, even if the modal was never dismissed.
What we really want is the following assertion:

```swift
XCTAssertTrue(navigationBars["Settings"].isHittable)
```
isHittable makes sure the view is visible to the user.
We see the test fail first, for example by not yet triggering the action that dismisses the modal. This teaches us that checking exists is the wrong thing to do here.
Closing Notes
In the beginning, our UI tests will be few. Over time, more tests are added and the time to run the whole test suite increases. The duration of the test suite extends the length of the feedback loop, and it's important to get feedback quickly if something breaks.
Once the team reaches that point, it is useful to evaluate which tests are essential and which are not. Since some tests might be covering the same functionality, it is helpful to ask the following question:
Which features are used by 90 percent of the users and which are usually left untouched?
Possibly, some tests can and should be moved into faster unit tests. Maybe trading a longer feedback loop for the added security that everything works as expected is fine.
Looking forward to your thoughts and feedback!
We’re curious about your thoughts regarding UI testing and how it could be improved. How does your team handle the topic? How do you make sure the app works as intended? Let us know in the comments or on Twitter.
Thanks also to my colleagues Konstantinos Tsoleridis, Alina Dier and Christopher Golombek for detailed feedback on drafts of this post.
Axel Hodler is Senior Software Engineer & Technical Lead at Porsche Digital.
About this publication: Where innovation meets tradition. There's more to Porsche than sports cars: we're tackling new challenges, developing digital products, and thinking digital with a focus on the customer. On our Medium blog, we tell these stories. It's about our #nextvisions, smart technologies and the people that drive our digital journey. Please follow us on Twitter (Porsche Digital, Next Visions), Instagram (Porsche Digital, Next Visions, Porsche Newsroom) and LinkedIn (Porsche AG, Porsche Digital) for more.