MVP & Lifecycles & Dispatchers Oh My!

Mike Nakhimovich
S23NYC: Engineering
6 min read · Dec 13, 2017


Recently, Brian Plummer and I started working at Nike. After we had a chance to settle in, we were tasked with creating a library that would be used across a variety of Nike’s Android apps, particularly my favorite Nike app, SNKRS.

At a high level, the task was to create a flow that a user can navigate through. We chose to go with a single Activity architecture backed by MVP with lifecycle-aware presenters and a reactive dispatcher of state changes. We wrote our library completely in Kotlin. In this article we will explain how we did it.

Like all architectures, this one is still a work in progress, but nonetheless we wanted to share our thought process. This architecture allowed us to build an augmented reality flow encapsulated in a single activity with six screens and presenters. In the past we have always ended up with a tightly coupled mess, with presenters injected into each other or views nested in an unscalable way. This time we were able to build a reactive architecture where each presenter and view reacts to new state from a centralized dispatcher. As a result, we achieved our goal by decoupling our views and presenters.

We have always been big fans of MVP. It is the architecture that always seemed easiest to explain to a new dev:

  • When a view inflates, it creates an instance of a presenter and attaches to it.
  • Whenever the view needs to interact with anything but its own layout, it calls a function on the presenter and then waits for a response through a view interface.
  • Presenters don’t have any Android framework code in them and can be easily tested in the JVM using a test implementation of the view interface.

Here’s an example of a non-lifecycle-aware view and presenter:

View Interface
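A minimal sketch of one (the names here are illustrative, not our production API):

interface GreetingMvpView {
    // the presenter calls this to tell the view what to render
    fun render(greeting: String)
}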

View implements our interface and attaches to our presenter
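Sketched with a custom FrameLayout standing in for the view (it could just as well be an activity or fragment):

import android.content.Context
import android.util.AttributeSet
import android.widget.FrameLayout

class GreetingView @JvmOverloads constructor(
    context: Context,
    attrs: AttributeSet? = null
) : FrameLayout(context, attrs), GreetingMvpView {

    private val presenter = GreetingPresenter()

    override fun onAttachedToWindow() {
        super.onAttachedToWindow()
        presenter.attachView(this)      // the view is on screen, start presenting
    }

    override fun onDetachedFromWindow() {
        presenter.detachView()          // drop the reference so we don't leak the view
        super.onDetachedFromWindow()
    }

    override fun render(greeting: String) {
        // bind the result to this view's own layout, e.g. a TextView
    }
}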

Presenter
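And a sketch of the presenter: plain Kotlin with no Android framework code, so it can be tested on the JVM with a fake view:

class GreetingPresenter {

    private var view: GreetingMvpView? = null

    fun attachView(mvpView: GreetingMvpView) {
        view = mvpView
        loadGreeting()
    }

    fun detachView() {
        view = null
    }

    private fun loadGreeting() {
        // in a real presenter this would likely be an async call;
        // the result comes back through the view interface
        view?.render("Hello from the presenter")
    }
}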

Since presenters are in non-Android land, we need to call presenter.attachView and presenter.detachView, as the presenter has no idea when the view is inflated or deflated. If we do not call detach, we risk leaking memory when a presenter is bound to a long-running operation.

Our mvpView can be an activity, fragment, or view. Its main purpose is to implement an interface that the presenter will use to tell the view what to render. If I were writing this a year ago, I’d say, “We have a nice presenter here!” but this is the year 2017 (almost 2018!) and we now live in a world where lifecycle-aware components are a thing, most importantly SupportActivity implementing LifecycleOwner. Since an Activity is a LifecycleOwner and views are able to get a reference to their containing activity, we made the decision to have our presenter’s attachView function take in a lifecycle and register with it.

This was made possible by having our Presenter implement LifecycleObserver.

Now that our Presenter is listening to the activity’s lifecycle, we can have our detachView listen for the activity’s ON_PAUSE event, which means we never have to call it explicitly. It is nice to have this auto-detach that we won’t need to remember to call from a view.
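A sketch of how that attach/detach can look, using the android.arch.lifecycle annotations of the time and the illustrative presenter from above:

import android.arch.lifecycle.Lifecycle
import android.arch.lifecycle.LifecycleObserver
import android.arch.lifecycle.OnLifecycleEvent

class GreetingPresenter : LifecycleObserver {

    private var view: GreetingMvpView? = null

    // attachView now also takes the lifecycle the presenter should observe
    fun attachView(mvpView: GreetingMvpView, lifecycle: Lifecycle) {
        view = mvpView
        lifecycle.addObserver(this)
        view?.render("Hello from the presenter")
    }

    // called automatically when the activity pauses,
    // so nobody has to remember to call it
    @OnLifecycleEvent(Lifecycle.Event.ON_PAUSE)
    fun detachView() {
        view = null
    }
}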

Now our views no longer need onDetachedFromWindow, as our presenters can detach their views themselves. The general flow of our UI will be:

  • Activity is created
  • Activity inflates its views
  • Each view creates a presenter and attaches itself.
  • The presenter requests any necessary data and passes it to the view.
  • When our activity is paused, our presenter will detach the view from itself, preventing memory leaks.

We now have a NoNetworkView that (you guessed it!) we want to show anytime we have problems with network connectivity. We will need to show our NoNetworkView anytime someone loses connectivity, regardless of which screen they are on. What we did NOT want to do is inject the NoNetworkPresenter into any other view/presenter that can show the no-network state, since we felt this would eventually lead to a circular dependency if, for example, the SuccessPresenter needs to be able to call the NoNetworkPresenter, which in turn needs to be able to call something on the SuccessPresenter. Instead, we want to leverage RxJava and have each of our presenters react to a state change that tells them whether to render or hide their UI. What we did was make a reactive dispatcher for our presenters to listen to. Here’s what the API from our presenter looks like:
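Roughly like this (a sketch; the Dispatcher and the Showing states it emits are sketched further down, and the lifecycle registration from earlier is omitted for brevity):

import io.reactivex.disposables.CompositeDisposable

interface NoNetworkMvpView {
    fun showNoNetwork()
    fun hideNoNetwork()
}

class NoNetworkPresenter(private val dispatcher: Dispatcher) {

    private var view: NoNetworkMvpView? = null
    private val subscriptions = CompositeDisposable()

    fun attachView(mvpView: NoNetworkMvpView) {
        view = mvpView

        // render whenever a no-network showing state is dispatched
        subscriptions.add(dispatcher.noNetwork()
            .subscribe { view?.showNoNetwork() })

        // the "inverse": any other showing event means another screen is taking over
        subscriptions.add(dispatcher.showing()
            .filter { it !is Showing.NoNetwork }
            .subscribe { view?.hideNoNetwork() })
    }

    fun detachView() {
        subscriptions.clear()
        view = null
    }
}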

In the example above, our presenter is listening to a stream called noNetwork. When an event is emitted, it tells its view to render itself.

Similarly, the presenter listens for the inverse of our event, which is really any other showing event, and hides its view.

Let’s take a look at our dispatcher next.

Our dispatcher has a few responsibilities:

  • We can use it to dispatch a new state which tells a screen to do something. For example, dispatcher.dispatch(State.ArTargetFound) will tell our AR presenter to display a found button.
  • We can use it to dispatch a new state that needs to show a new Screen. dispatcher.dispatchShowing(Showing.NoNetwork) sends a new showing state to anyone observing the dispatcher. This state change is also added to our showEvents stack. (We’ll come back to this in a minute).
  • Presenters can then listen for a particular state change (with an associated payload) and react accordingly. For example, the no-network view can be listening to dispatcher.noNetwork().

Similarly, the landing view can be listening via dispatcher.showingLanding.subscribe().
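Here’s a rough sketch of what such a dispatcher can look like using RxJava 2 subjects (simplified, with only a couple of convenience streams; the State and Showing types are the sealed classes sketched in the next section):

import io.reactivex.Observable
import io.reactivex.subjects.PublishSubject
import java.util.Stack

class Dispatcher {

    private val stateSubject: PublishSubject<State> = PublishSubject.create()
    private val showEvents = Stack<Showing>()        // backstack of showing states

    // every state that has been dispatched
    fun states(): Observable<State> = stateSubject.hide()

    // only the "show a screen" states
    fun showing(): Observable<Showing> = stateSubject.ofType(Showing::class.java)

    // convenience streams individual presenters listen to
    fun noNetwork(): Observable<Showing.NoNetwork> =
        stateSubject.ofType(Showing.NoNetwork::class.java)

    val showingLanding: Observable<Showing.Landing> =
        stateSubject.ofType(Showing.Landing::class.java)

    // dispatch a state that tells a screen to do something
    fun dispatch(state: State) = stateSubject.onNext(state)

    // dispatch a state that shows a new screen and record it on the backstack
    fun dispatchShowing(showing: Showing) {
        showEvents.push(showing)
        stateSubject.onNext(showing)
    }

    // pop the current screen and re-dispatch whatever was showing before it
    fun goBack() {
        if (showEvents.isNotEmpty()) showEvents.pop()
        if (showEvents.isEmpty()) {
            stateSubject.onNext(State.BackStackEmpty)
        } else {
            stateSubject.onNext(showEvents.peek())
        }
    }
}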

We used Kotlin sealed classes to represent the different states that we can dispatch. Using sealed classes allowed us to represent some states as objects while others can be data classes that carry a payload.
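A sketch of what those sealed classes can look like (the concrete states listed here are illustrative, not our full set):

// states that tell a screen to do something
sealed class State {
    object ArTargetFound : State()
    object BackStackEmpty : State()
}

// "showing" states tell a screen to show itself; the dispatcher also records
// these on its backstack. Some are plain objects, others are data classes
// that carry a payload.
sealed class Showing : State() {
    object Landing : Showing()
    object NoNetwork : Showing()
    object ArCamera : Showing()
    data class Success(val productId: String) : Showing()   // payload travels with the state
    data class Failure(val reason: String) : Showing()
}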

Using a state dispatcher allowed us to decouple our presenters from each other, which meant we did not need to nest presenters inside each other or write complicated logic that defines when views should be shown or hidden. Each presenter listens for the events it needs and also for the ones it should be closing on. Take a look at what happens when our ARCamera attaches to its presenter:
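Something along these lines (again a sketch built on the illustrative dispatcher and states above):

import io.reactivex.disposables.CompositeDisposable

interface ArCameraMvpView {
    fun show()
    fun hide()
    fun showFoundButton()
}

class ArCameraPresenter(private val dispatcher: Dispatcher) {

    private var view: ArCameraMvpView? = null
    private val subscriptions = CompositeDisposable()

    fun attachView(mvpView: ArCameraMvpView) {
        view = mvpView

        // show or hide the camera screen depending on which screen is being shown
        subscriptions.add(dispatcher.showing()
            .subscribe { showing ->
                if (showing is Showing.ArCamera) view?.show() else view?.hide()
            })

        // State.ArTargetFound tells this presenter to reveal the "found" button
        subscriptions.add(dispatcher.states()
            .ofType(State.ArTargetFound::class.java)
            .subscribe { view?.showFoundButton() })
    }

    fun detachView() {
        subscriptions.clear()
        view = null
    }
}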

At the same time that the ARView is visible, we might want to show an overlay or some additional controls. We can accomplish this with an OverlayPresenter that listens for similar events, as sketched below.

The overlay presenter might also care about other states, so it can react to those as well:
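A sketch of such an overlay presenter, subscribing to both the showing stream and another state it cares about (names are illustrative):

import io.reactivex.disposables.CompositeDisposable

interface OverlayMvpView {
    fun show()
    fun hide()
    fun showTargetFoundHint()
}

class OverlayPresenter(private val dispatcher: Dispatcher) {

    private var view: OverlayMvpView? = null
    private val subscriptions = CompositeDisposable()

    fun attachView(mvpView: OverlayMvpView) {
        view = mvpView

        // the overlay shows and hides alongside the AR camera screen
        subscriptions.add(dispatcher.showing()
            .subscribe { showing ->
                if (showing is Showing.ArCamera) view?.show() else view?.hide()
            })

        // it also reacts to other states, e.g. a target being found
        subscriptions.add(dispatcher.states()
            .ofType(State.ArTargetFound::class.java)
            .subscribe { view?.showTargetFoundHint() })
    }

    fun detachView() {
        subscriptions.clear()
        view = null
    }
}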

The last part of our architecture is to add all of our various views to a single layout with visibility:invisible so that they can all be inflated when our activity starts. This in turn calls each presenter’s attachView function, which will then subscribe to any events the view needs to react to. As a user flows through our AR Activity, we show and hide views based on the state changes that are dispatched. When one of our lucky users finds an AR target in the wild, we can resolve their location and then dispatch a State change that will drive which screen is shown next:

https://gist.github.com/digitalbuddha/48a7758f68ef4b4a323d18f1d03dfe2d

Check out how simple our Success and Failure Presenters are:
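They boil down to something like this (a sketch; the FailurePresenter mirrors it, keyed on Showing.Failure instead):

import io.reactivex.disposables.CompositeDisposable

interface SuccessMvpView {
    fun render(productId: String)
    fun hide()
}

class SuccessPresenter(private val dispatcher: Dispatcher) {

    private var view: SuccessMvpView? = null
    private val subscriptions = CompositeDisposable()

    fun attachView(mvpView: SuccessMvpView) {
        view = mvpView

        // render on Showing.Success (using its payload), hide on any other screen
        subscriptions.add(dispatcher.showing()
            .subscribe { showing ->
                when (showing) {
                    is Showing.Success -> view?.render(showing.productId)
                    else -> view?.hide()
                }
            })
    }

    fun detachView() {
        subscriptions.clear()
        view = null
    }
}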

The only remaining piece is dealing with the backstack. Any view/activity/presenter can call dispatcher.goBack(), which will dispatch the last showingState prior to the current one; that should both make your view disappear and another view appear. If there are no more showing states on our backstack, we dispatch a State.BackStackEmpty, which our activity listens for:

https://gist.github.com/digitalbuddha/973cfe4edd5052e76cfc9572b36b201b

ContainingActivity:
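A sketch of the activity’s side of this (illustrative; assume the dispatcher is shared with the presenters, for example via injection, and the layout name is hypothetical):

import android.os.Bundle
import android.support.v7.app.AppCompatActivity
import io.reactivex.disposables.CompositeDisposable

class ArFlowActivity : AppCompatActivity() {

    private val dispatcher = Dispatcher()          // in practice this would be injected
    private val subscriptions = CompositeDisposable()

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // hypothetical layout holding every view of the flow, all initially invisible
        setContentView(R.layout.activity_ar_flow)

        // when the dispatcher's backstack runs out, the activity is done
        subscriptions.add(dispatcher.states()
            .ofType(State.BackStackEmpty::class.java)
            .subscribe { finish() })
    }

    // back presses are forwarded to the dispatcher's backstack
    override fun onBackPressed() = dispatcher.goBack()

    override fun onDestroy() {
        subscriptions.clear()
        super.onDestroy()
    }
}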

Just like the regular Android backstack, our dispatcher backstack is abstracted away from our views. No view should know what happens when it is finished; all it knows is that it gets shown or hidden. What’s nice about this is that we record each state change, along with its payload, into a single Stack structure called showEvents. A view doesn’t need to know if it’s being displayed for the first time or being resumed by a backstack traversal. Incidentally, since our state changes are Kotlin data classes with payloads, those payloads are re-dispatched along with each state as you navigate back.

So that’s our new Friendly MVP! It works for us and we hope it works for you as well. In the next post we will dive into how we implemented Augmented Reality with targeting and 3D models in under a month with the help of… a JavaScript library (gasp!)

Here’s a gist of some base classes to get you started. We hope to roll this into a library once it is more ironed out.

Like solving interesting problems in novel ways? We’re hiring at Nike’s Digital Innovation Lab, located in the heart of the Flatiron District. Contact me at mike.nakhimovich@nike.com.
