How to Build Mobile Software with a Growth Focus

Benjamin Grol
8 min read · Aug 29, 2017


Note from me: this article is co-authored by me, Benjamin Grol (Partner and Head of Growth at Atomico), and Hemant Bhonsle (Mobile Engineer at Zenefits). If you’d like more analysis and insight like this, delivered monthly(ish) to your inbox, sign up to The Operator’s Manual (our newsletter) here.

This is the first post in a series focusing on Growth, data analytics and best practices for scaling, aimed at post-product-market-fit digital businesses. We’ll mix it up with a variety of formats, including content pieces and interviews with operators sharing challenges and learnings — if there’s any topic you’d be interested in, please let me know!

I recently started running a Growth Meet-Up with Joey Kotkins and Andy Young to bring together other practitioners in our industry.

Software creation has fundamentally changed in the past decade. When third-party mobile apps first became available via the App Store in 2008, the way we use software changed forever, mostly in a good way. However, some things were sadly left behind for most mobile apps: fast release cycles, server-controlled features, and A/B tests. These were quite common for many web products leading up to the mobile revolution, but were generally not a part of the first wave of mobile development and are only coming back in full force today. It makes me sad to think of all the lost productivity and increased customer pain this has caused.

This post will be a high-level overview of how to build mobile software in a way that is tuned for speed and learning: two things that are essential when you are trying to grow your business, and that are core tenets of Running Lean.

Tenet #1: (Bi)weekly releases on mobile and weekly execution sprints

If you are a small company, let’s say fewer than ten mobile engineers or so, a biweekly mobile release is a good starting point. As your engineering team gets bigger, a time will come when moving to a weekly cadence can likely accelerate your progress.

To be clear, the goal here isn’t to push the same binary out the door every week; it’s to launch at least one new hypothesis test to learn how to best serve your customers. Prioritizing which tests to run could be its own post, but, in a nutshell, look for the cheapest-to-build validation of the existential questions you have about your product or service. Here is a great post from Sean McBride describing one approach for prioritizing experiments.

In order to release at this kind of cadence, a weekly sprint is the recommended path. On a Thursday or Friday, hold a sprint planning meeting for the following week. In that meeting:

  • Do a quick pass of your (well-maintained) backlog with the team. Should take 15 mins max.
  • Consider what you’ve learned so far this week. Any course corrections?
  • The team then discusses what to do next week.
  • Each task is picked up by an engineer who commits to getting it done the following week.
  • There is typically a “demo meeting” as the week progresses where the engineers can show the finished work (or whatever progress they have made). This is critical to drive focus and accountability.

With all of this in place, on Monday morning, everyone knows exactly what they are doing for the week. The feedback I’ve gotten from teams who have shifted to this model is that this level of clarity makes people happier. They know exactly what is a priority and why, with a clear target to hit.

To recap, the positive effects of a fast release and execution cadence:

  • Faster progress — More time between releases may lead to run-ins with Parkinson’s Law. It is an invitation to swap progress for motion.
  • Higher product quality — If you are releasing biweekly, identifying root causes of issues is typically much easier. There can be exponentially more software interactions in an 8-week release cycle than in a 2-week release cycle.
  • More individual focus — Weekly sprints drive focus, accountability and happiness.
  • Faster learning — As we’ll cover in tenet #3, launching at least one meaningful experiment a week will make your product or service better sooner.

Note: when launching weekly, it’s common to have a release candidate build that is used internally (or a beta group) for a week, and *then* actually launched in the wild. As the releases stack up, you have a one-week phase shift, but the steady flow is still weekly. Quality is critical to maintain, of course.

Tenet #2: Control mobile client features with server controls

One very important aspect of native mobile development today is maintaining server-side control of different client traits. This has become such a norm over the last few years that many businesses have sprung up around the concept, allowing developers to easily configure their applications to read flags from the server, adjust their UI/UX accordingly, and show only the appropriate features. Many large software companies have built systems that do this from scratch. Facebook had a few systems that did this, including Gatekeeper (GK) and Quick Experiments (QE). Similar systems have been built at Uber, Pinterest, Zenefits, etc. Some examples of readily available tools that do this include Optimizely, Mixpanel, Apptimize, and Firebase.

There are a few reasons why server-controlled client traits are becoming the standard for mobile development:

  • Better customer experience — Due to the delay between app submission and availability for your customers, client crashes and bugs can keep occurring until a hotfix is published. With server-controlled client traits, these kinds of failures can be mitigated (e.g. a new feature causes a crash on the Nexus 6, so the feature is turned off at the server level for all users with this device and the crash stops). This is arguably less of an issue for Android apps; however, there may still be a delay of a few hours between submission and release, and some users don’t have auto-update turned on.
  • More installs — In a world where the app crashes on start for a day or so, the app may get many negative reviews. This can lead to a lower ranking on the Google Play Store/iOS App Store and a lower conversion rate from the app page to app install (e.g. I’m more likely to install an app with a 4.5 star rating than a 3.5 star rating). This doesn’t even factor in customers you have “burned” who will never come back, nor negative press that could damage your brand.
  • Less monolithic / more fluid development — When engineers know that an incomplete feature can be turned off with a server configuration, this lowers the overhead of development.

This capability is typically called “feature flagging”. When an app session begins, the app makes an API call to retrieve a set of booleans, or “flags”, that tell you which code paths are okay for your application to take.

For example:

  • You just built a mobile client payments feature
  • You created a corresponding server-side flag called “payments_v1”. It is a boolean value that can be set to either true or false
  • On the first client session, the client fetches this value. If the value is true, the client shows the feature; if it’s false, it doesn’t (see the sketch below)
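
To make this concrete, here is a minimal Kotlin sketch of the client side of that flow. It is illustrative rather than tied to any real SDK: the flag name comes from the example above, and fetchRemoteFlags stands in for whatever API call your flagging tool provides.

```kotlin
// Minimal feature-flag client: fetch flags once per session and fall back
// to safe defaults if the fetch fails. Hypothetical sketch, not a real SDK.
class FeatureFlags(private val fetchRemoteFlags: () -> Map<String, Boolean>) {

    // Unfinished or risky features default to off, so a failed fetch
    // never exposes something you meant to hide.
    private val defaults = mapOf("payments_v1" to false)

    private val flags: Map<String, Boolean> by lazy {
        try {
            defaults + fetchRemoteFlags() // server values override defaults
        } catch (e: Exception) {
            defaults // network failed: run with the safe defaults
        }
    }

    fun isEnabled(name: String): Boolean = flags[name] ?: false
}

fun main() {
    // Simulated server response; in a real app this would be the API call
    // made when the session begins.
    val flags = FeatureFlags { mapOf("payments_v1" to true) }

    if (flags.isEnabled("payments_v1")) {
        println("Show the payments feature")
    } else {
        println("Hide the payments feature")
    }
}
```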

It is common to prepend the platform name, usually “ios_” or “android_”, and append the date when the flag was made, something like “_aug_2017”. So for the first version of the payments feature being released on iOS, you may see a flag like “ios_payments_v1_aug_2017”. Including the date gives teams a soft timeline in the code of when different features shipped. Including the platform means turning a flag off only affects the appropriate platform rather than all users, which matters when a crash or issue appears on one platform but not the other.

[Photo: Running a growth session with the Local Globe portfolio companies in London]

Tenet #3: Extend server controls of mobile features to A/B tests to maximize learning

Controlling features in only a binary way (true == show, false == hide) is great as a switch to turn things on and off for users. In many cases, though, we want to roll out different feature variants to learn what our customers want based on their usage of the feature.

We do this with A/B testing on mobile. The concept is the same as A/B testing anywhere else — we want to run an experiment and test several different variants to find the best result. So in addition to feature flags that turn client behaviors on and off, the server returns the key names of the variants to display. Based on the variant received, your application displays the corresponding UI/UX.

This process allows us to:

  • Pick the winner — By exploring and testing the options, you can pick the best.
  • Slow-roll a risky/contentious feature (possibly to a beta group) — Sometimes we launch features that may break existing functionality in unknown ways. A feature may use a new back-end, or a new bit of functionality that is largely untested. By launching to a small percentage of users, you can often mitigate broader customer pain. Another option to consider is relying on a beta group of loyal customers. I’ve seen situations where these die-hard fans of the product like the exclusivity of being on early releases of your app.
  • Build a better app experience — When you get in the habit of measuring usage and navigation data in your mobile app, the natural usage patterns often become self-evident. This allows you to optimize your app around actual customer behavior.

Extending the example of the payments feature we discussed earlier:

Run the test:

  • We want to A/B test the color of the payment button to see if there is an effect on user purchases
  • In addition to the “payments_v1” flag, we now send down from the server a “payments_v1_button_color” key with variants, e.g. “blue”, “green”, “red” (see the sketch after this list)
  • Wait until you get statistically significant results from client tracking
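
On the client, handling the variant might look something like the following sketch. As before, this is hypothetical: the experiments map stands in for whatever assignment your A/B-testing tool returns, and the hex colors are arbitrary.

```kotlin
// Variant assignment fetched at session start, alongside the boolean
// feature flags. The server assigns each user a variant; this is a stub.
val experiments: Map<String, String> = mapOf(
    "payments_v1_button_color" to "green"
)

// Map the server-assigned variant name to a concrete UI value. The else
// branch is the safety net for unknown or missing variants.
fun paymentButtonColor(): String =
    when (experiments["payments_v1_button_color"]) {
        "blue" -> "#1E88E5"
        "green" -> "#43A047"
        "red" -> "#E53935"
        else -> "#1E88E5" // default when the key is absent or unrecognized
    }

fun main() {
    // Log the assigned variant with your analytics events too, so results
    // can be segmented when you check for statistical significance.
    println("Button color: ${paymentButtonColor()}")
}
```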

Decide on a result:

  • Let’s say “green” performed best…
  • With the next release cycle, update the client code to always display the green button color.
  • Clean up the server code to always return “payments_v1_button_color” as “green” so older clients will display that too

Remove the A/B test when safe to do so:

  • Clean up the server code again once the share of your users on older clients is negligible
  • Completely remove the “payments_v1_button_color” key
  • If you developed with backwards compatibility in mind, then when the key isn’t sent from the server the client falls back to displaying some default, much like the default case of a switch statement (see the sketch below)
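
That backwards-compatible read is just the default branch from the earlier sketch doing its job. Once the server stops sending the key, an older client built this way quietly settles on the winner (the names here are still the hypothetical ones from above):

```kotlin
// Older client, after the server has stopped sending the variant key.
val experiments: Map<String, String> = emptyMap() // key no longer sent

// The lookup returns null, so the fallback takes over. "green" is
// hardcoded as the default because it won the test in this example.
val buttonColor = experiments["payments_v1_button_color"] ?: "green"
```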

Conclusion:

With a little work, the development of mobile apps can (and should) be nearly as flexible, safe and knowledge-producing as web-based development. I am optimistic that we may even see continuous deployment on mobile apps in the next decade.

TL;DR version

  • By starting with a weekly sprint process, we drive focus and clarity to the work we do.
  • By following that with weekly or biweekly app deployments that employ A/B-test variants of the mobile client experience, we maximize our learning while minimizing quality risk.

If you’d like more analysis and insight like this, delivered monthly(ish) to your inbox, sign up to The Operator’s Manual (our newsletter) here.
