Expressive View Model Unit Tests at Grailed

DJ Mitchell
Grailed Engineering
8 min read · Sep 15, 2021

Unit testing our view models is a cornerstone of Grailed’s strategy to ensure the stability of our app and deliver a consistently great user experience. These tests allow us to move faster and add features or refactor with more confidence, because we know that our significant suite of tests will always have our back.

Because these tests are so important to us, it’s key that we always cast a critical eye on them to make sure we get as much out of them as possible. In the years since unit testing became a routine part of our development cycle, the iOS team has settled on a few principles that we think have strengthened the utility of our test suite.

This article explains how we took those principles, our experience with our existing form of view model tests, and recent developments in the iOS community such as the release of Point-Free’s ComposableArchitecture, and improved the expressiveness and quality of these tests by an order of magnitude.

But before we get to the new stuff, it’s important to understand a little bit about the road that got us here, as well as the principles that have guided us.

Where We’ve Been

Our Architecture

To understand how we test our view models it’s important to understand how we write our view models. If you haven’t already read our previous post on this subject I recommend doing that to get a deeper understanding of why we choose to write them in this style.

For the purposes of this article I will adapt the previous article’s code to include the slight tweaks for how we would write them now and build off of that.

In this example, we are taking in some inputs that represent user actions, and we return some outputs that perform very simple UI side effects.
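The original post embeds the full RxSwift view model, which isn’t reproduced here. As a rough, dependency-free sketch of the shape (all names like `LoginViewModel` and `loginButtonTapped` are illustrative, and plain closures stand in for the Observables the real code uses):

```swift
// Simplified sketch of the input/output view model shape. In the real code
// these outputs would be RxSwift Observables bound to the UI; closures stand
// in here so the structure is easy to see. All names are illustrative.
struct LoginViewModel {
    // Outputs: very simple UI side effects, driven by the inputs.
    var isLoginButtonEnabled: (Bool) -> Void = { _ in }
    var errorMessage: (String?) -> Void = { _ in }

    // Input: a user action (the request result is faked with a flag).
    func loginButtonTapped(success: Bool) {
        isLoginButtonEnabled(false)      // disable while the "request" runs
        if success {
            errorMessage(nil)
        } else {
            errorMessage("Login failed") // surface the failure to the UI
        }
        isLoginButtonEnabled(true)       // re-enable once the request finishes
    }
}

// Example wiring (in the app, these closures would drive the actual UI):
var viewModel = LoginViewModel()
viewModel.isLoginButtonEnabled = { enabled in print("button enabled:", enabled) }
viewModel.errorMessage = { message in print("error:", message ?? "none") }
viewModel.loginButtonTapped(success: false)
```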

One thing that’s worth calling out explicitly is our use of Current. This is a global container for all app-wide dependencies. It’s designed in such a way that it’s easy to mock our dependencies without needing to manually inject each dependency into each view model that needs it. If you want to learn more I recommend checking out this NSSpain talk, or this Point-Free video (subscriber only).
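The essence of the `Current` pattern is a single global value that holds every app-wide dependency, so tests can swap any of them without threading dependencies through initializers. A minimal sketch, where `World` and its fields are illustrative rather than Grailed’s real container:

```swift
import Foundation

// Sketch of the `Current` pattern: one global value holding app-wide
// dependencies, each stored as a swappable closure. `World` and its
// fields are illustrative, not Grailed's actual container.
struct World {
    var date: () -> Date = { Date() }
    var isLoggedIn: () -> Bool = { false }
}

var Current = World()

// In a test, mock a dependency in one line, with no constructor injection:
Current.isLoggedIn = { true }
```

Because every dependency is a mutable closure on one value, a test can override exactly the pieces it cares about and leave the rest at their live defaults.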

In order to test this view model we need to capture the emissions of each of our outputs and assert that their values are what we expect as a result of the inputs firing. There are many ways to do this in RxSwift but almost all of them involve using RxTest and its TestableObserver to capture the output states. After some experience using RxTest with our view models we derived some principles that we believe are important to having high-quality tests for our view models.
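RxTest’s `TestableObserver` records every element an output emits so the sequence can be asserted afterwards. Stripped of RxSwift, the recording idea looks something like this (the `Recorder` type and the recorded strings are illustrative, not the real RxTest API):

```swift
// Tiny stand-in for the idea behind RxTest's TestableObserver: capture
// every value an output emits, in order, so tests can assert both the
// values and their ordering. Illustrative only, not the RxTest API.
final class Recorder<Value> {
    private(set) var values: [Value] = []
    func record(_ value: Value) { values.append(value) }
}

// Capture what a hypothetical "button title" output emits during a flow:
let titleRecorder = Recorder<String>()
titleRecorder.record("Log In")
titleRecorder.record("Logging In...")
// A test then asserts the captured sequence matches what the inputs
// should have produced, e.g. ["Log In", "Logging In..."].
```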

Our Principles

Prefer exhaustiveness wherever possible

In our early experience testing our view models, we encountered cases where we had tested a flow but the tests weren’t powerful enough to catch bugs. This was often because we had tested one output but there was a bug in another output in that same flow. For instance, consider a test where the user taps a button to trigger a network request, but that request fails. We would have tested that the error banner shows up correctly, but missed the fact that the button didn’t become re-enabled after the request finished in the error case.

Because of this experience we learned that we should be testing every output when we’re writing our tests, even if we don’t expect that output to change. It’s important to express all expectations about the code under test because it can help future maintainers understand those expectations and know how to evolve them to match new requirements.

We should be testing every output when we’re writing our tests, even if we don’t expect that output to change.

Test each intermediate step

Similar to the previous principle, this one was learned through experience. In our view model tests, as you’ll see below, we normally write tests from the user’s perspective, with each test representing a series of user actions. We encountered times when not testing the state of the view model at each step meant we missed that the ordering of events wasn’t what we expected or intended. As a result, we settled on the principle that we should test the outputs at each step in the flow to give us higher confidence that the ordering of events is always what we expect.

Be willing to tolerate boilerplate in order to achieve other principles

We decided that we were willing to sacrifice some conciseness in order to get robust testing in return. This is a tough tradeoff to make because no one likes writing boilerplate, but it’s one that we felt was right in order to achieve correctness and thoroughness.

Our View Model Tests

Next we’ll talk about how we’ve been writing view model tests for the past few years. In the code below you’ll be able to see evidence of each of the principles.
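The embedded sample isn’t reproduced in this export. The real tests use Quick, Nimble, and RxTest; as a plain-Swift sketch of the shape they take, with one test per step and every output asserted at every step (`LoginState`, `makeState`, and the test names are all illustrative):

```swift
// Plain-Swift sketch of the old style: a separate test per step in the
// flow, and EVERY output asserted in each one, even outputs we don't
// expect to have changed. All names here are illustrative.
struct LoginState: Equatable {
    var isButtonEnabled = true
    var isSpinnerVisible = false
    var errorMessage: String? = nil
}

// Stand-in for driving the view model through the flow under test.
func makeState(afterTap: Bool, requestFailed: Bool) -> LoginState {
    var state = LoginState()
    if afterTap {
        state.isButtonEnabled = false
        state.isSpinnerVisible = true
    }
    if requestFailed {
        state = LoginState(isButtonEnabled: true,
                           isSpinnerVisible: false,
                           errorMessage: "Login failed")
    }
    return state
}

func test_whenLoginTapped_allOutputs() {
    let state = makeState(afterTap: true, requestFailed: false)
    // Every output asserted, including ones that shouldn't have changed:
    assert(state.isButtonEnabled == false)
    assert(state.isSpinnerVisible == true)
    assert(state.errorMessage == nil)
}

func test_whenLoginFails_allOutputs() {
    let state = makeState(afterTap: true, requestFailed: true)
    assert(state.isButtonEnabled == true)
    assert(state.isSpinnerVisible == false)
    assert(state.errorMessage == "Login failed")
}

test_whenLoginTapped_allOutputs()
test_whenLoginFails_allOutputs()
```

Even in this tiny sketch, the repetition is visible: every new output adds an assertion to every test, which is exactly the quadratic maintenance cost discussed below.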

There are a few things that you might have noticed in the code sample above. The first is that we use Quick and Nimble to write BDD-style tests. We think BDD testing is great in theory, but in practice it has its own problems which we’ll talk about later.

Second is that we’re testing two user flows (the successful and failed responses when logging in), written as a set of four separate tests, one for each step in the flow. Not only that, but at each step we test every single output. This is us putting our principles into practice in order to have high confidence that our tests are verifying as much of the surface area of our view model’s logic as they can.

The third and maybe most obvious thing is that the code is very noisy. There’s the setup of the inputs and outputs, the repetition of the setup of each test, and then the assertions against every output at each step (even if the output’s value hasn’t changed since the previous step). As we outlined in our principles, we are willing to accept some boilerplate in order to make our tests more robust, but as our view models become larger, the difficulty of properly maintaining their tests grows quadratically.

One of the most difficult things about maintaining a complex test suite is that, because we assert against every output at every step, it’s difficult to know which outputs are the important ones that should have changed as a result of this step in the user flow, and which are only there because we want to make sure they haven’t changed. This presents a problem when you’re modifying an existing test suite, because it might not be clear what the test is specifically intended to test. We know that it tests every output, but without deep knowledge of the intent of the view model it’s difficult to know why something is failing. Our old test pattern hides that information in a sea of boilerplate, which makes the entire suite more difficult to maintain.

This insight about the maintainability of our test suite led us to look elsewhere for new ideas about how we might keep the things we like about our testing style, but replace or abstract away the parts that we don’t like.

The Composable Architecture

Many of us at Grailed keep up with Point-Free’s excellent videos on functional programming in Swift, and we have a membership for every member of the iOS team, so it’s no surprise that we looked to their work on testing in The Composable Architecture (TCA) for inspiration. (If you haven’t sprung for a Point-Free membership yet, do yourself, your team, and your career a huge favor and do it as soon as you’re done here. You won’t regret it.)

In particular, the expressiveness and exhaustiveness achieved out of the box by TCA’s TestStore showed us a new ceiling for how good tests could be.

Our New View Model Tests

Inspired by TCA’s TestStore, we wrote our own TestHarness which abstracts away some of the most painful parts of our old setup, while improving and even guaranteeing the exhaustiveness we were previously achieving through coding conventions. Here is the previous example rewritten using the TestHarness.
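The rewritten example was embedded in the original post and isn’t reproduced here, but the shape of the idea can be sketched without any of the real machinery. In this minimal `MiniHarness` (all types and names are illustrative, not the real `TestHarness` API), each step runs a user action and describes only the fields expected to change, while the harness compares the full `Equatable` state:

```swift
// Minimal sketch of the TestHarness idea: one test walks the whole user
// flow; each step describes only what changed, and the harness compares
// the entire Equatable state so unchanged fields are verified for free.
// All names here are illustrative, not the real TestHarness API.
struct LoginState: Equatable {
    var isButtonEnabled = true
    var errorMessage: String? = nil
}

final class LoginViewModel {
    private(set) var state = LoginState()
    func loginTapped() { state.isButtonEnabled = false }
    func requestFailed() {
        state.isButtonEnabled = true
        state.errorMessage = "Login failed"
    }
}

final class MiniHarness {
    let viewModel = LoginViewModel()
    private var expected = LoginState()

    // Run a user action, apply the author's expected diff, then compare
    // the FULL state. The test author never restates unchanged outputs.
    func step(_ action: (LoginViewModel) -> Void,
              expect: (inout LoginState) -> Void) {
        action(viewModel)
        expect(&expected)
        assert(viewModel.state == expected,
               "state \(viewModel.state) != expected \(expected)")
    }
}

let harness = MiniHarness()
// Step 1: user taps login; only the button state is expected to change.
harness.step({ $0.loginTapped() }) { $0.isButtonEnabled = false }
// Step 2: the request fails; button re-enables and the error appears.
harness.step({ $0.requestFailed() }) {
    $0.isButtonEnabled = true
    $0.errorMessage = "Login failed"
}
```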

There’s a lot going on here and it’s hard to know where to start, but maybe simplest is the best — we’re no longer using Quick to write our tests. As I hinted at above, we’ve found that in practice the things we like about Quick are outweighed by Xcode’s poor support for some of its core features. Instead of swimming against the Apple tide in this case we decided it would be best for us to migrate to using XCTest directly.

The details and process of how we arrived at certain pieces of syntax (hello, custom operator!) will be covered in the next post, but for now let’s focus on the changes to the test itself.

Rather than writing each step as a separate test and having to repeat all of the related setup every time, we’ve instead written the entire user flow in a single, very powerful, test.

Something that might not be obvious from the code is that the TestHarness is dealing with the burden of ensuring exhaustiveness. We no longer need to write tedious code in every test that ensures each output is correct. At each step we only have to describe how we expect the state to change, which means we’re no longer required to make any note of what didn’t change. We’ve centralized a small amount of boilerplate in the setUpTestHarness method instead of spreading it through each of our tests.

Being able to centralize all of this work inside the TestHarness also allows us to provide additional capabilities that we previously didn’t have:

  1. Our ViewModelState is Equatable by default, so the harness will fail if even a small piece of data doesn’t match your expectation. In our previous pattern, if we added a new output to our view model we might have forgotten to add the expectation to our tests. Now the harness forces us to handle it.
  2. The TestHarness is aware of our test schedulers, so it can make extra guarantees about how we advance the scheduler, letting us be more precise about when we expect specific state changes to happen.
  3. When reaching the end of a call to assert, all state changes must have been handled. This prevents us from accidentally missing some critical assertion at the end of our test that might have otherwise slipped through and caused a bug.
  4. In our view model patterns, Observables should never emit errors because those errors will kill the subscriptions and can cause unresponsive UI and a bad user experience. The TestHarness helps us by automatically failing if any error is emitted by any of our outputs.
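The last guarantee can be sketched in isolation. The real harness hooks into RxSwift subscriptions; here a small illustrative `Emission` enum stands in for an Observable’s events, and any error fails the run immediately:

```swift
// Sketch of guarantee 4: any error emitted by an output fails the test.
// Illustrative only; the real TestHarness observes RxSwift subscriptions.
enum Emission<Value> {
    case next(Value)
    case error(Error)
}

func recordOutput<Value>(_ emissions: [Emission<Value>]) -> [Value] {
    var values: [Value] = []
    for emission in emissions {
        switch emission {
        case .next(let value):
            values.append(value)
        case .error(let error):
            // An errored Observable would kill the subscription in the app,
            // so surfacing it here fails the test run automatically.
            assertionFailure("output emitted an error: \(error)")
        }
    }
    return values
}
```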

The cumulative effect of these improvements has also given us something unquantifiable: our tests are much easier to write and maintain. This has led to us being willing and able to capture more edge cases in our tests, because they are so much easier to write and understand.

Next Time

In the next post I’ll dive into more detail about the design of the TestHarness API itself. We’ll also cover how we use Krzysztof Zabłocki’s Sourcery Pro to generate the boilerplate and scaffolding for these tests, which makes them even easier to write.

If solving problems like this in a growing team with huge opportunity for impact interests you, we’re hiring!

DJ Mitchell
Staff iOS Engineer @Grailed, cocktail enthusiast, functional programming freak