A/B testing experience at Doctrine

William Duplenne · Published in Inside Doctrine · Jan 4, 2022 · 5 min read

At Doctrine, we aim to deliver the best possible experience to our users. To achieve that, we can rely on our core values 📚

One of them is: Release Early, Release Often, and Listen to your customers.

This value guides us when we are developing our features, and there are many ways to put it into practice. Some examples:

  • Conducting customer interviews to showcase prototypes and upcoming features and gather feedback
  • Releasing features incrementally in order to gather and analyze feedback step by step, through interviews or analytics

And another example, the one we will focus on in this article:

  • Running A/B testing experiments to compare versions and find the one that best fits our customers

Historically, we already had a framework for running A/B tests. But this framework had some limitations:

  • The way we created allocations (i.e. the traffic split across the different versions) was neither ideal nor working properly
  • The system was not robust: running the wrong command could delete sensitive information
  • We could not easily set up feature flags (i.e. switching between two versions) or canary releases (i.e. rolling out a feature incrementally)

That’s why we took the opportunity to rebuild it 🙌

We started by laying out our needs. They are quite simple, but here they are:

  • Be able to run A/B testing easily
  • Be able to run them client-side and server-side
  • Be able to analytically interpret them
  • Be able to run them on all our services (i.e. not only our web app, but also machine learning models or Elasticsearch indices, for example)
  • Be able to run feature flags
  • Be able to run canary releases

We first looked at what was available on the market. As you might guess, there are plenty of services that let you perform A/B testing easily, such as Optimizely, AB Tasty, or Convert.

While those services answer our needs, they also come with a lot of features we don't need for our use cases (analytics, for example: we already have an analytics provider and don't want another one). And with all those extra features, they are also much more expensive.

All of that made us think about developing our own A/B testing framework!

Where to start?

The core functionality of such a service is the algorithm. How can we fairly distribute traffic across multiple variations?

To do that, we use an algorithm based on a hash function. We chose MurmurHash, which is well known for providing a good distribution. We also ran some tests using Artillery to validate the distribution, and we were quite happy with the results!

MurmurHash is deterministic, which means it will always return the same result for the same input. So, we need to call it with parameters that represent the current experiment for a given user.

To identify that, we pass as input the name of the experiment and a unique identifier linked to the visitor (i.e. a logged-in user or an anonymous user):

murmurHash(`${visitorId}-${experimentName}`)

By doing that, the hash function will return a value that is always the same for this given input.

Now, we have to turn that hash into a number that tells us which version the user will see. We map it to a number between 0 and 9999. So our final algorithm will be:

murmurHash(`${visitorId}-${experimentName}`) % 10000
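
As a rough illustration, here is what that computation can look like in TypeScript, assuming a MurmurHash implementation such as the murmurhash npm package (the getAllocation helper is just a name for this sketch, not our actual code):

import murmurhash from 'murmurhash';

// Deterministic bucket between 0 and 9999 for a given visitor/experiment pair.
function getAllocation(visitorId: string, experimentName: string): number {
  const hash = murmurhash.v3(`${visitorId}-${experimentName}`);
  return hash % 10000;
}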

Thanks to that value, we can determine which version to serve to the user. Let's imagine an A/B test with 100% of traffic and a 50/50 split:

  • Between 0 and 4999, the user will be allocated to the first version
  • Between 5000 and 9999, the user will be allocated to the second one

The repartition of the distribution

So far, we have an allocation for this user: if it equals 6465, for example, the user will be allocated to the second version of the experiment.

Since it is deterministic, the user will always see the same version for the whole duration of the test.

Moreover, if for some reason you update the distribution, for example to an 80/20 split, the user's allocation will still be 6465, but the user will now be allocated to the first version.
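
To make that mapping concrete, here is a small hypothetical sketch (names and shapes are illustrative, not our production code) of how an allocation can be turned into a variation index given a traffic split:

// Maps an allocation (0 to 9999) to a variation index, given percentage
// weights that sum to 100 (e.g. [50, 50] or [80, 20]).
function pickVariation(allocation: number, weights: number[]): number {
  let upperBound = 0;
  for (let i = 0; i < weights.length; i++) {
    upperBound += weights[i] * 100; // each percent covers 100 buckets out of 10000
    if (allocation < upperBound) {
      return i;
    }
  }
  return weights.length - 1; // fallback if the weights do not sum exactly to 100
}

pickVariation(6465, [50, 50]); // 1 -> second version
pickVariation(6465, [80, 20]); // 0 -> first version after updating the split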

That's pretty much it for this part. This implementation allows us to create A/B tests, feature flags, and canary releases easily, but we now need a way to provide it to our services 👀

As a service

The service is quite simple, it can be abstracted as two main components:

  • The allocation API
  • The experiment API

The allocation API is responsible for giving an allocation for a given user id and an experiment.

The experiment API acts as a CRUD API for defining our experiments.
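
For illustration, an experiment handled by this API could be described by a structure along these lines (the field names here are hypothetical, not the actual schema):

// Hypothetical shape of an experiment definition.
interface Experiment {
  name: string;               // e.g. "search-filters-test"
  trafficPercentage: number;  // share of users entering the experiment (used for canary releases)
  variations: Array<{
    name: string;             // e.g. "control", "new-filters"
    weight: number;           // traffic split across variations, weights sum to 100
  }>;
}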

So, with an HTTP call, we can retrieve allocations for a given user and a set of experiments:

A CURL request to the service

Since it is a service, it can be called by any of our services that need it. Here is an example of a possible implementation:

An example of how the service can be used

The front-end is responsible for fetching allocations for a given user and set of experiments, applying the logic according to the allocation, and propagating those allocations, if needed, to other services that will return different data based on them!
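
As a sketch of what that could look like on the front-end (the endpoint, payload, and experiment name below are hypothetical):

// Hypothetical front-end usage: fetch the allocation for a visitor and an
// experiment, then branch on the returned variation.
async function getVariation(visitorId: string, experimentName: string): Promise<string> {
  const response = await fetch(
    `/allocations?visitorId=${visitorId}&experiment=${experimentName}`
  );
  const { variation } = await response.json();
  return variation;
}

const variation = await getVariation('visitor-123', 'search-filters-test');
if (variation === 'new-filters') {
  // render the new version
} else {
  // render the control version
}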

The allocation data will also be passed down to our analytics provider so that we can know which version the user was in for a given analytics event!
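
For example, attaching the allocation to each tracked event could look roughly like this (track stands in for whatever analytics SDK is used, and the event and property names are hypothetical):

// track() represents the analytics SDK call; the names below are illustrative.
declare function track(event: string, properties: Record<string, unknown>): void;

track('search_performed', {
  experiment: 'search-filters-test',
  variation: 'new-filters', // the variation returned by the allocation service
});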

Wrap up

Thanks to that implementation, our Product & Tech team can now easily create and run experiments across all our services to deliver a fast & beautiful experience for our customers!

We recently improved our search experience by running an A/B test on the main filters displayed to our users, and we're currently A/B testing some email changes to improve the current experience 🚀

An example of a current A/B testing at Doctrine

If you enjoyed that one, feel free to have a look at our open positions to be part of the journey 🙌
