Android and iOS config-driven experiments that don’t require releases (Droidcon London talk)

At ASOS there’s an ever-increasing demand to run experiments at scale in our mobile apps. Spotting an opportunity to improve our existing process, where we had to manually implement and release each experiment individually, we built a custom “Url Injection Framework” that makes it possible to run configuration-driven experiments which can modify any API call or network request, without requiring app-code changes or new releases.

Marco Bellinaso
Nov 23

On Oct 28–29 a group of ASOSers attended Droidcon London 2021 (see this other blog post for how much we enjoyed it), and I delivered a talk about this custom framework we developed in-house to do AB Testing at scale in the iOS and Android apps, so that we don’t have to release a new build for every new experiment. This article explains the core concept, but if it sounds interesting we invite you to watch the full recording, available on the Droidcon website.

A quick recap about AB Testing

Doing AB Testing means splitting your customers into two or more groups, assigning each group a different version of something (e.g. the text/placement/style of a button differs between group A and group B), measuring which one converts better in terms of engagement/revenue (different metrics suit different businesses or types of app), and finally activating the “winning” variant for everybody. This way, decisions are driven by data rather than by personal opinions and biases (no one is truly infallible, unfortunately; science tends to work better 🧪).

The challenge

In the Apps team we constantly get requests from other departments to run new tests on various screens. The problem is that with fully client-side testing (probably the most common kind, because of the flexibility it offers) we’d have to implement every test, duplicate it across all the apps we want to run it on, make a new build and release it. (On the web this is far less of a problem, because websites aren’t distributed through the App Stores the way apps are.) 😰

While in some cases there’s no way around this (say you need to change a UX flow, such as the screen you navigate to after tapping the “Add to bag” button), the more the UI simply adapts to data returned from APIs, the easier it becomes to implement a test by having the APIs return different responses according to some input parameter, and letting the UI change dynamically as a result. For example, a Navigation API can return JSON representing a navigation tree that differs from the standard one (either in its actual structure, or in the styling of individual nodes) when it receives a parameter like tst-var=BEAUTYv2.

The Navigation API returns a different category tree (with different nodes, or different styling attributes at the individual node-level, to vary the colour/image/layout or text) according to the tst-var param, and the app builds the UI dynamically.
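Concretely, assuming a hypothetical request for country=gb, the control and variant calls might look like:

```
GET https://api.asos.com/content/nav?country=gb                    (control: standard tree)
GET https://api.asos.com/content/nav?country=gb&tst-var=BEAUTYv2   (variant: BEAUTYv2 tree)
```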

If your API has this capability (which is of course additional work in its own right, but at least it’s separate from the client apps and done once for all clients), the only things left for the client app are bucketing users into variants (something you can do with any AB Testing tool, such as Optimizely, Firebase, etc.; the idea presented here is agnostic to that) and changing the URL of the API request according to the variant assigned to the user. This greatly simplifies the implementation of an experiment (the logic that changes the data is centralised on the backend rather than duplicated in every client), but it still needs a new release for every new experiment.

The idea

Since the only thing the app needs to do, before calling the API a test depends on, is dynamically append a querystring parameter with a certain value, we only need to make the following two things remotely configurable to avoid any client-side change for new experiments:

1) Some sort of remote manifest file that lists the active experiments and defines rules to identify which API/network requests each experiment applies to. This file can be static and hosted on a CDN, so it’s super quick to download. Here’s a sample file, which defines a single experiment applied to API calls with a bare URL equal to “https://api.asos.com/content/nav” and a querystring parameter named “country” equal to either “gb” or “ie”.
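One hypothetical JSON shape matching that description (the field names are illustrative, not the actual ASOS schema) could be:

```json
{
  "experiments": [
    {
      "featureKey": "navigation-beauty-banner",
      "match": {
        "bareUrl": "https://api.asos.com/content/nav",
        "queryParams": [
          { "name": "country", "anyOf": ["gb", "ie"] }
        ]
      }
    }
  ]
}
```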

Note that the experiment above also has a featureKey parameter, which is the name of the experiment as defined in the external AB Testing service. The app interrogates this service to bucket the user and get a variant when it’s about to make a network request that matches the rules defined in the manifest.

2) Each experiment will have two or more variants defined in the external AB Testing service, and each variant will have metadata associated with it that describes how the API/network request needs to be dynamically modified (e.g. by adding/removing/changing querystring parameters in most cases, but potentially doing the same for headers or the request’s body). Here’s the example for a variant of the experiment named “navigation-beauty-banner”, which instructs the app to add a querystring parameter named “tst-var” with a value of “BEAUTYv2”:
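A hypothetical shape for that variant’s metadata (again, illustrative field names only) could be:

```json
{
  "changes": [
    { "action": "addQueryParam", "name": "tst-var", "value": "BEAUTYv2" }
  ]
}
```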

So what the app does is intercept all network calls, check whether an experiment applies to each of them and, if one does, get a variant from the AB Testing service and modify the request as described by the variant’s metadata. The diagram below represents the flow visually:
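In code, the overall flow can be sketched roughly like this. This is a self-contained Java sketch under stated assumptions: the class, method names and the hardcoded rule/variant values are hypothetical, not the actual framework’s API.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of the interception flow: match a request against a
// manifest rule, then apply the variant's metadata to the URL.
public class UrlInjectionFlow {

    // Does the request match the manifest rule? (bare URL must be equal, and
    // every rule parameter must be present with one of its allowed values)
    static boolean matchesRule(String url, String bareUrl,
                               Map<String, List<String>> allowedParams) {
        int q = url.indexOf('?');
        String bare = q < 0 ? url : url.substring(0, q);
        if (!bare.equals(bareUrl)) return false;
        Map<String, String> actual = parseQuery(url);
        for (Map.Entry<String, List<String>> e : allowedParams.entrySet()) {
            if (!e.getValue().contains(actual.get(e.getKey()))) return false;
        }
        return true;
    }

    // Naive querystring parser, good enough for this sketch.
    static Map<String, String> parseQuery(String url) {
        Map<String, String> params = new HashMap<>();
        int q = url.indexOf('?');
        if (q < 0) return params;
        for (String pair : url.substring(q + 1).split("&")) {
            String[] kv = pair.split("=", 2);
            params.put(kv[0], kv.length > 1 ? kv[1] : "");
        }
        return params;
    }

    // Apply the variant's metadata: append the querystring parameter.
    static String applyVariant(String url, String name, String value) {
        return url + (url.contains("?") ? "&" : "?") + name + "=" + value;
    }

    public static void main(String[] args) {
        String request = "https://api.asos.com/content/nav?country=gb";
        if (matchesRule(request, "https://api.asos.com/content/nav",
                Map.of("country", List.of("gb", "ie")))) {
            // In the real flow the variant (and its metadata) comes from the
            // AB Testing service at this point; hardcoded here for brevity.
            request = applyVariant(request, "tst-var", "BEAUTYv2");
        }
        System.out.println(request);
        // prints https://api.asos.com/content/nav?country=gb&tst-var=BEAUTYv2
    }
}
```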

The nice thing about this is that, if your app is structured as above, all the networking code is centralised in a dedicated class/layer used by all the app’s features. By introducing the logic in that one place you’re effectively adding AB Testing support to all API requests with very little effort. 🔥

Caveats: of course, as mentioned before, all of this holds only under the assumption that your APIs can change aspects of their response according to some “test querystring parameter”, and that your UI is highly data-driven rather than hardcoded. The first capability can be implemented once a certain API actually needs to support it; the API team would do the work, and once it’s ready the client app can make use of it with no client-side change, which is exactly what we want. The second must be part of the original requirements for the apps’ implementation, but at least in the ASOS app that has always been the case for many screens and features (the homepage, product listing pages, product details pages, the category tree screen and much more) years before we thought about adding AB Testing support 😎.

Also note that this approach works not only for API requests but effectively for any network request, so you might have a test that changes the URL of images or other remote assets, for example (provided you add support for rules that change the path/URL beyond just adding/modifying querystring parameters).
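For instance, a hypothetical path-rewriting rule (purely illustrative; the article doesn’t specify this schema) could look like:

```json
{
  "changes": [
    { "action": "replacePathPrefix", "from": "/images/v1/", "to": "/images/v2/" }
  ]
}
```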

The client-side implementation (Android)

The implementation in the client app is quite straightforward. In the Android ASOS app we use the open-source OkHttp library in the networking layer, which allows you to add “interceptors” that can modify a request before it’s executed.

We won’t go into too many details here because the official documentation does a great job, but here’s an excerpt to give you an idea of how the chain of interceptors is extended with a new custom one:

And here’s what the custom interceptor looks like:
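Since a standalone snippet can’t pull in OkHttp, here’s a rough stdlib-only Java sketch of the same pattern: simplified stand-ins for the real okhttp3 Interceptor and client-builder types (which work on full Request/Response objects, not bare URLs), with the matching rule and variant value hardcoded for brevity.

```java
import java.util.ArrayList;
import java.util.List;

// Stdlib-only stand-ins for the interceptor-chain pattern the real app
// builds on top of okhttp3.
public class InterceptorSketch {

    interface Interceptor {
        String intercept(String requestUrl); // returns the (possibly rewritten) URL
    }

    // The custom interceptor: if the request matches a manifest rule, append
    // the variant's querystring parameter. In the real framework the rule
    // comes from the remote manifest and the value from the AB Testing service.
    static class UrlInjectionInterceptor implements Interceptor {
        @Override
        public String intercept(String requestUrl) {
            boolean matches = requestUrl.startsWith("https://api.asos.com/content/nav");
            if (!matches) return requestUrl;
            String separator = requestUrl.contains("?") ? "&" : "?";
            return requestUrl + separator + "tst-var=BEAUTYv2";
        }
    }

    // Mimics extending the client builder's chain with a custom interceptor.
    static class ClientBuilder {
        private final List<Interceptor> interceptors = new ArrayList<>();

        ClientBuilder addInterceptor(Interceptor interceptor) {
            interceptors.add(interceptor);
            return this;
        }

        // Runs the URL through every interceptor, in registration order,
        // which is where the real client would then execute the call.
        String prepare(String url) {
            for (Interceptor i : interceptors) url = i.intercept(url);
            return url;
        }
    }

    public static void main(String[] args) {
        ClientBuilder client = new ClientBuilder()
                .addInterceptor(new UrlInjectionInterceptor());
        System.out.println(client.prepare("https://api.asos.com/content/nav?country=gb"));
        // prints https://api.asos.com/content/nav?country=gb&tst-var=BEAUTYv2
    }
}
```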

In the iOS app we use Alamofire, which has a very similar feature. You can read about its RequestInterceptor here.

Is it working for us?

You bet! 🥳

We’re already expanding the Url Injection Framework with new rules to match requests and new rules to modify them, and we also want to build something that validates the config JSON automatically. But there’s no doubt that, even in its current state, the framework has been massively beneficial, and it will become more and more so as we launch new experiments and save more time! 🔝

Launching new tests on the homepage, navigation tree, product pages and more is now a matter of 1–2 days for the iOS/Android teams, and that’s only because we need to create the configuration in the AB Testing tool and do some testing to make sure it all works as expected: nothing more and, more importantly, no releases.

Sound interesting? Find out more!

This article is just an excerpt of a longer presentation we gave at Droidcon. Head to the Droidcon website to watch the recording of the talk and browse the slides, which contain more details.

ASOS IS HIRING! 👩‍💻🧑‍💻

Truth be told, what we described here and in the full talk is a reduced and simplified version of the actual framework we developed for iOS and Android. And this is also just one of the many cool things we do at ASOS. If you want to see more, why not join us? Head to the ASOS Careers site to see who we’re looking for, read Ed’s post to get a feel for our hiring process and what day-to-day work is like, and apply to find out more! 💻


The ASOS Tech Blog

A collective effort from ASOS's Tech Team, driven and directed by our writers. Learn about our engineering, our culture, and anything else that's on our mind.

Marco Bellinaso

Software Architect @ASOS.com (and iOS / full-stack dev for fun)
