Google I/O 2018 app — Architecture and Testing

Jose Alcérreca
Published in Android Developers · Aug 20, 2018
Illustration by Virginia Poltrack

The Google I/O app is an open source project which shows the schedule and information for the annual Google I/O conference. It is aimed at attendees and remote viewers of the event.

This year we had the opportunity to start from scratch. We could choose tools and think about the overall design of the app’s architecture. What is the best architecture? The one that makes the developers on your team the most productive. They should be able to work without reinventing the wheel, focusing on user features. Also, developers should be able to get prompt feedback on their work. Most team members developing the app are 20%ers — Googlers dedicating 20% of their time to the project — and they come from different teams, backgrounds and timezones, something to take into account when making these decisions.

Architecture

A project developed by a diverse team needs clear guidelines on how to approach common problems. For example, developers need a way to get off the main thread, so it makes sense to provide a framework that does this consistently. Also, a good architecture should be hard to break: defining the layers of an app and clearly describing their relationships avoids mistakes and simplifies code reviews.

We took concepts from Clean Architecture to solve these two problems (layering and background execution). The app is divided into a three-layer structure:

  • Presentation layer (Views and ViewModels)
  • Domain layer (use cases)
  • Data layer (repositories, user manager)

The presentation layer cannot talk to the data layer directly. A ViewModel can only get to a repository through one or more use cases. This limitation ensures independence and testability. It also brings a nice opportunity to jump to a background thread: all use cases are executed in the background guaranteeing that no data access happens on the UI thread.
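To make that rule concrete, here is a minimal sketch of the three layers (class and type names are illustrative, not the real iosched ones):

```kotlin
import androidx.lifecycle.ViewModel

data class Session(val id: String, val title: String)

// Data layer: repositories are hidden behind interfaces.
interface SessionRepository {
    fun getSessions(): List<Session>
}

// Domain layer: the only path from the presentation layer to the data layer.
class LoadSessionsUseCase(private val repository: SessionRepository) {
    fun execute(): List<Session> = repository.getSessions()
}

// Presentation layer: a ViewModel depends on use cases, never on repositories.
class ScheduleViewModel(
    private val loadSessionsUseCase: LoadSessionsUseCase
) : ViewModel()
```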

General architecture of the app

Presentation layer: Views + ViewModels + Data Binding

ViewModels provide data to the views via LiveData. The actual UI calls are done with Data Binding, relieving the activities and fragments from boilerplate.

We deal with events using an event wrapper, modeled as part of the UI’s state. Read more about this pattern in this blog post.
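For reference, the wrapper at the heart of that pattern is small; it looks roughly like this:

```kotlin
// Wraps one-shot content (a navigation request, a Snackbar message...)
// so that re-delivered LiveData values, e.g. after rotation, are ignored.
open class Event<out T>(private val content: T) {

    var hasBeenHandled = false
        private set // external readers can check it, but not change it

    // Returns the content only the first time it is requested.
    fun getContentIfNotHandled(): T? =
        if (hasBeenHandled) {
            null
        } else {
            hasBeenHandled = true
            content
        }

    // Returns the content even if it has already been handled.
    fun peekContent(): T = content
}
```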

Domain layer: UseCases

The domain layer revolves around the UseCase class, which went through a lot of iterations. In order to avoid callback hell we decided to use LiveData to expose the results of the UseCases.

By default use cases execute on a DefaultScheduler (a Kotlin object) which can later be modified from tests to run synchronously. We found this easier than dealing with a custom Dagger graph to inject a synchronous scheduler.
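A condensed sketch of the idea (approximate names; the real classes carry more functionality, such as a Loading state in Result):

```kotlin
import androidx.lifecycle.MutableLiveData
import java.util.concurrent.Executors

sealed class Result<out R> {
    data class Success<out T>(val data: T) : Result<T>()
    data class Error(val exception: Exception) : Result<Nothing>()
}

// A Kotlin object: tests overwrite `delegate` to run tasks synchronously,
// with no need for a separate Dagger graph.
object DefaultScheduler {
    private val asyncExecutor = Executors.newFixedThreadPool(4)
    var delegate: (() -> Unit) -> Unit = { task -> asyncExecutor.execute { task() } }
    fun execute(task: () -> Unit) = delegate(task)
}

// Every use case runs off the main thread and posts a Result to LiveData,
// so callers observe instead of nesting callbacks.
abstract class UseCase<in P, R> {

    operator fun invoke(parameters: P, result: MutableLiveData<Result<R>>) {
        DefaultScheduler.execute {
            try {
                result.postValue(Result.Success(execute(parameters)))
            } catch (e: Exception) {
                result.postValue(Result.Error(e))
            }
        }
    }

    protected abstract fun execute(parameters: P): R
}
```

A ViewModel observes the LiveData it passes in and maps Success and Error values into UI state.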

Data layer

The app deals with three types of data, classified by how often they change:

  • Static data that never changes: map, agenda, etc.
  • Data that changes 0–10 times per day: schedule data (sessions, speakers, tags, etc.)
  • Data that changes constantly even without user interaction: reservations and session starring

An important requirement for the app is offline support. Every piece of data should be available to the user on first run and even with a spotty Wi-Fi connection (we can’t assume perfect coverage at the venue and we should assume that many visitors will turn off their roaming data).

The static data is hard-coded. For example, the agenda repository started as an embedded JSON file but it was so static we translated it to Kotlin for simplicity and performance.
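As a sketch, "translated to Kotlin" simply means the data lives in code rather than in a parsed file (names and fields here are made up):

```kotlin
// Illustrative only: static agenda data expressed directly as Kotlin,
// so there is nothing to download, parse or cache.
data class Block(val title: String, val startHour: Int, val endHour: Int)

object AgendaDataSource {
    val blocks = listOf(
        Block(title = "Breakfast", startHour = 7, endHour = 9),
        Block(title = "Keynote", startHour = 10, endHour = 12)
    )
}
```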

The conference data comes in a relatively large JSON file (around 600 KB uncompressed). An initial version of it was included in the APK to achieve full offline support. The app downloads fresh data from a static URL whenever the user refreshes the schedule or when it receives a Firebase Cloud Messaging signal requesting a refresh. The download runs inside a job managed by JobScheduler to ensure that the user's data is used responsibly.
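Scheduling that job might look roughly like this (the service name, job id and constraints are assumptions for illustration):

```kotlin
import android.app.job.JobInfo
import android.app.job.JobParameters
import android.app.job.JobScheduler
import android.app.job.JobService
import android.content.ComponentName
import android.content.Context

private const val FETCH_JOB_ID = 1 // hypothetical id

// Hypothetical service: the job body would trigger the actual download.
class ConferenceDataService : JobService() {
    override fun onStartJob(params: JobParameters): Boolean {
        // start the fetch on a background thread, then call jobFinished(...)
        return true
    }

    override fun onStopJob(params: JobParameters): Boolean = false
}

fun scheduleConferenceDataFetch(context: Context) {
    val jobInfo = JobInfo.Builder(
        FETCH_JOB_ID,
        ComponentName(context, ConferenceDataService::class.java)
    )
        .setRequiredNetworkType(JobInfo.NETWORK_TYPE_ANY) // only run with connectivity
        .build()

    val scheduler = context.getSystemService(Context.JOB_SCHEDULER_SERVICE) as JobScheduler
    scheduler.schedule(jobInfo)
}
```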

The downloaded JSON is cached using OkHttp, so the next time the app starts, the cached version is used instead of the bootstrapped file. This approach freed us from dealing with files directly.
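Enabling that cache is mostly a one-liner when the client is built (the size and directory name here are illustrative):

```kotlin
import okhttp3.Cache
import okhttp3.OkHttpClient
import java.io.File

// A small disk cache: responses for the conference-data URL are stored and
// served transparently, so no manual file handling is needed.
fun buildHttpClient(cacheDir: File): OkHttpClient =
    OkHttpClient.Builder()
        .cache(Cache(File(cacheDir, "conference_data_cache"), 2L * 1024 * 1024))
        .build()
```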

For user data (reservation, session starring, uploading Firebase tokens, etc.) we used Firestore, which is a NoSQL cloud database. It comes with offline support so we were able to sync user data across Android, web and iOS effortlessly.
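As an illustration, starring a session could be written roughly like this against the Firestore API (the document layout is an assumption, not the app's actual schema):

```kotlin
import com.google.firebase.firestore.FirebaseFirestore

// Hypothetical layout: users/{uid}/events/{sessionId}.
// When offline, Firestore queues the write locally and syncs it later.
fun setStarred(uid: String, sessionId: String, starred: Boolean) {
    FirebaseFirestore.getInstance()
        .collection("users").document(uid)
        .collection("events").document(sessionId)
        .set(mapOf("isStarred" to starred))
}
```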

Sign-in support was implemented with Firebase Authentication. An AuthStateListener, exposed through a LiveData observable, signals when the current user changes (from logged out to logged in, for example).
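Bridging the listener into LiveData takes a small class along these lines (a common pattern; the real iosched implementation differs in details):

```kotlin
import androidx.lifecycle.LiveData
import com.google.firebase.auth.FirebaseAuth
import com.google.firebase.auth.FirebaseUser

// Emits the current FirebaseUser (null when signed out) and keeps the
// listener registered only while there are active observers.
class FirebaseUserLiveData : LiveData<FirebaseUser?>() {

    private val auth = FirebaseAuth.getInstance()

    private val listener = FirebaseAuth.AuthStateListener { firebaseAuth ->
        value = firebaseAuth.currentUser // listener runs on the main thread
    }

    override fun onActive() {
        auth.addAuthStateListener(listener)
    }

    override fun onInactive() {
        auth.removeAuthStateListener(listener)
    }
}
```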

Libraries and tools

We decided to avoid using non-stable dependencies, so Coroutines, the Navigation Component, and WorkManager weren’t used.

Apart from the tools discussed above, one deserves a special mention:

We made extensive use of LiveData to create a reactive architecture, where everything is wired up so the UI is updated automatically when data changes. You can find more about using LiveData beyond the ViewModel in this post.

Gradle modules and code organization

Having a good modularization strategy is essential for a good development experience. In fact, dependency problems are usually a sign of a bad architecture or modularization approach. We created the following modules:

  • model: contains the entities used across the app
  • shared: business logic and core classes
  • mobile: the mobile app, including activities, fragments, ViewModels and UI-related classes like data binding adapters, the BottomSheetBehavior, etc.
  • tv: the Android TV app
  • test-shared: test data to be used from all unit tests in all modules
  • androidTest-shared: utilities to be used from all UI tests in all modules

The general rule here is to create as many modules as possible to improve encapsulation, normally resulting in faster incremental builds. In our case, the shared and mobile modules could be split further.
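In Gradle terms the graph boils down to declarations like these (sketched with the Kotlin DSL; the project itself uses Groovy build files):

```kotlin
// settings.gradle.kts (sketch)
include(":model", ":shared", ":mobile", ":tv", ":test-shared", ":androidTest-shared")

// mobile/build.gradle.kts (sketch): mobile sees shared, shared sees model,
// and the test-only modules are wired in as test dependencies.
dependencies {
    implementation(project(":shared"))
    testImplementation(project(":test-shared"))
}
```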

Testing and flavors

Before feature development, a lot of effort went into making the app testable. Developers are only productive if they can get early feedback on what they're doing without depending on others.

Unit tests

The architecture and modularization approach allowed for good testing isolation, faking dependencies and fast execution. Domain and data layers are extensively unit tested. Only some util classes in the presentation layer are unit tested.

We initially avoided mocking by not adding Mockito to the project. We used interfaces wherever possible and faked the dependencies from tests, which is much cleaner than mocks. However, Mockito was eventually added to create mocks of external dependencies. We used Mockito-Kotlin for a more idiomatic experience.
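A fake is just a small, deterministic implementation of one of those interfaces. Reusing the hypothetical SessionRepository from the earlier sketch:

```kotlin
// Hand-written fake: fully controllable from the test, no mocking library involved.
class FakeSessionRepository(
    private val sessions: List<Session> = emptyList()
) : SessionRepository {
    override fun getSessions(): List<Session> = sessions
}
```

Tests construct a use case with the fake and assert on the result; Mockito only comes in for external types we don't own.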

We used an internal continuous integration tool that rejected changelists that broke the build or unit tests. Having this is vital in a project with so many contributors, and it is particularly important with multiple build variants, as Android Studio only builds the active one. For GitHub, we added Travis CI.

UI tests

We made sure Espresso tests did not require Idling Resources: the UseCase framework provides a way to set a synchronous task executor. We also ensured that tests run hermetically, using fakes to avoid flaky dependencies like the network. Preferences, time and task schedulers were all modified from tests using JUnit rules, which gave us stable, repeatable tests.
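Given the swappable DefaultScheduler sketched earlier, such a rule can be as small as this:

```kotlin
import org.junit.rules.TestWatcher
import org.junit.runner.Description

// Forces every UseCase to run synchronously for the duration of a test,
// so Espresso never has to wait for background work.
class SyncTaskExecutorRule : TestWatcher() {

    private lateinit var previousDelegate: (() -> Unit) -> Unit

    override fun starting(description: Description) {
        previousDelegate = DefaultScheduler.delegate
        DefaultScheduler.delegate = { task -> task() } // run on the calling thread
    }

    override fun finished(description: Description) {
        DefaultScheduler.delegate = previousDelegate // restore async behavior
    }
}
```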

UI tests are run only on the staging flavor. This special variant of the app always fakes a logged-in user and doesn’t make any network requests. This is also useful to make iterations faster when testing the app manually.
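A flavor setup along these lines would produce it (sketched with a recent AGP Kotlin DSL, which postdates the project; the flavor names match the ones described above):

```kotlin
// mobile/build.gradle.kts (sketch)
android {
    flavorDimensions += "mode"
    productFlavors {
        create("staging") {
            dimension = "mode"
            // staging source sets provide the fake auth and data implementations
        }
        create("production") {
            dimension = "mode"
        }
    }
}
```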

Maintaining UI tests when the UI is under heavy development can be a burden, so we planned to introduce them after we incorporated the UI designs. However, the UI tests were delayed and we shipped the first version without a proper suite in place. This led to a couple of crashes in production that could have been avoided simply by running a happy path (tests that only cover normal operation, not the less frequent interactions) on Firebase Test Lab.

In an ideal world we wouldn’t have to modify our releasable code just for tests but we did have to make a couple of changes [commit]:

  • We had to add a way to disable animations in the BottomSheetBehavior class, since it doesn't use any of the animation frameworks (so its animations are not disabled automatically).
  • We had to add a callback that executes when a certain animation finishes in the EventFilterView class.

Separating our architecture into distinct layers and documenting each layer's responsibility worked out well for our distributed team of contributors. For example, mandating the UseCase framework for retrieving data from the repository layer made getting off the main thread the default behavior, avoiding jank from the start rather than having to chase it down later. Also, most testing problems are architectural problems in disguise; laying a good foundation is essential for building a sane testing experience.

Iosched is a real app with real users… and a very real deadline. As such there are areas we want to continue working on to keep the codebase healthy, maintainable and make it a better example app. For example the ScheduleViewModel grew organically and could do with breaking up. We plan to improve the app in the open, adding new Architecture Components, fixing problems, refactoring and increasing code coverage.

Feel free to open issues if you find problems or want to contribute to the project!
