Supercharge Your Product With A/B Testing

Use data-driven testing techniques to create an application or website that your users can’t live without

Todd Runham
Gousto Engineering & Data
5 min read · Aug 17, 2020


When building a tech product, there are a lot of unknowns you have to deal with.

Is the concept viable? Do I have the right people involved? Am I introducing the right features?

We’re going to focus on the last question, and how to answer it using various forms of A/B testing. This enables you to drive product growth guided by constant user feedback. It also means you can design and develop based on how users actually behave, rather than attempting to predict their expectations.

What is A/B testing?

A/B testing is a straightforward concept. You take a section of your product, and you modify it. It may be an individual element like a button or a larger entity such as an entire page. You then send a portion of your traffic (usually 50%) to the new version of this section, known as the variant. The rest of the traffic will continue to see the original, known as the control.

Over a predetermined period of time, you monitor engagement with both versions. If the variant outperforms the control on the metrics you care about, you know you have introduced a successful change.
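
To make the mechanics concrete, here is a minimal sketch of deterministic bucketing, so a returning user always sees the same version. The hash helper, experiment name, and 50/50 split are illustrative assumptions; hosted testing platforms handle this assignment for you.

```typescript
// Deterministically assign a user to the control or the variant so they
// see the same version on every visit. Illustrative sketch only: the
// experiment name and the 50/50 split are assumptions.

type Bucket = "control" | "variant";

// Simple 32-bit FNV-1a string hash; any stable hash works here.
function hash(input: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    h ^= input.charCodeAt(i);
    h = Math.imul(h, 0x01000193);
  }
  return h >>> 0; // force unsigned
}

function assignBucket(userId: string, experiment: string): Bucket {
  // Salting with the experiment name keeps buckets independent across tests.
  const slot = hash(`${experiment}:${userId}`) % 100;
  return slot < 50 ? "variant" : "control"; // 50/50 split
}

// The same user always lands in the same bucket for a given experiment.
console.log(assignBucket("user-42", "bigger-cta"));
```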

How can it help your product?

Improving a product receives a large amount of focus at tech companies, yet it is often done badly.

A considerable amount of time and effort is spent pushing as many features out the door as possible, usually because of the misconception that more features always mean more users. This is simply not true.

Users want to see relevant changes introduced

Polluting the UI with too many components that are rarely used is counterproductive. Instead, new changes should be treated as experiments that can have the following outcomes:

  • If the feedback does not indicate any value at all — the feature isn’t needed.
  • If the feedback is inconclusive — it’s worth investigating how to make it more appealing to the end-user.
  • If the feedback is positive — finalise the changes and move on to the next idea. Alternatively, you could improve upon it even further to maximise success.

The key point here — listen to your users and learn from the metrics they create

When developing a new experiment, it’s best to build a minimum viable product (MVP). An MVP in this context means building just enough of the feature to satisfy its basic requirements. If users do not respond well to the variant, at least you haven’t sunk excessive development cost into it. As a consequence, your development cycle time will not only be a lot faster but more efficient as well, as waste is eliminated.

So we’ve established how you should introduce alterations to your product. Now we need to talk about metrics, and this is where A/B testing comes into play.

With A/B testing, you have a point of reference by retaining the original version. You can compare metrics sourced from it to metrics sourced from the new variant. They can be anything from traffic to conversion rates, or on a more granular level — button clicks.

These metrics are a strong indication of which path to take, as they will show whether the “A” path or the “B” path drives better engagement.

For example, say you increase the size of a CTA that links to a page in your application that receives little to no traffic. By comparing page views across both the variant and control buckets, you will be able to determine whether increasing the visibility of the CTA has driven more visits to the unfrequented page.
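
As a sketch of what that comparison looks like in practice, the two-proportion z-test below checks whether the difference in page visits between buckets is likely real or just noise. The traffic numbers are invented for illustration, and dedicated testing platforms run this kind of significance check for you.

```typescript
// Two-proportion z-test: did the variant's larger CTA genuinely lift
// visits to the target page? (Invented numbers, illustrative sketch.)

function zScore(convControl: number, totalControl: number,
                convVariant: number, totalVariant: number): number {
  const pControl = convControl / totalControl;
  const pVariant = convVariant / totalVariant;
  const pooled = (convControl + convVariant) / (totalControl + totalVariant);
  const stdErr = Math.sqrt(
    pooled * (1 - pooled) * (1 / totalControl + 1 / totalVariant)
  );
  return (pVariant - pControl) / stdErr;
}

// Control: 200 of 5,000 users visited the page. Variant: 260 of 5,000.
const z = zScore(200, 5000, 260, 5000); // ~2.86
// |z| > 1.96 corresponds to p < 0.05 on a two-sided test.
console.log(z > 1.96 ? "significant lift" : "not significant");
```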

Advanced A/B testing

Once you have the hang of two-track experimentation, you can start looking into multi-track testing, more commonly known as A/B/n testing (several variants of a single change) or multivariate testing (combinations of changes to multiple elements).

Multivariate testing is more difficult, as you have various combinations of changes to analyse and keep track of.

On the other hand, it does allow you to run concurrent experiments in the same area of your product, such as three variations of a promotion. This can further decrease cycle time and reduce waste as you don’t have to run these tests sequentially. It also gives you a lot of space to be creative.

Keep the number of variations low; too many means splitting your traffic too thinly, leading to longer-running experiments or unreliable results
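
Extending the earlier bucketing sketch, an A/B/n assignment simply maps the same stable hash onto more than two weighted slots. The variant names and the even split below are assumptions for illustration.

```typescript
// n-way assignment: map a stable hash onto weighted traffic slots.
// Variant names and weights are illustrative assumptions.

interface Variant {
  name: string;
  weight: number; // share of traffic; weights should sum to 100
}

// Same FNV-1a hash as in the two-bucket sketch above.
function hash(input: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    h ^= input.charCodeAt(i);
    h = Math.imul(h, 0x01000193);
  }
  return h >>> 0;
}

function assignVariant(userId: string, experiment: string, variants: Variant[]): string {
  const slot = hash(`${experiment}:${userId}`) % 100;
  let cumulative = 0;
  for (const v of variants) {
    cumulative += v.weight;
    if (slot < cumulative) return v.name;
  }
  return variants[variants.length - 1].name; // guard against rounding gaps
}

// Three promotion variants tested concurrently against the control.
console.log(assignVariant("user-42", "summer-promo", [
  { name: "control", weight: 25 },
  { name: "free-delivery", weight: 25 },
  { name: "percent-off", weight: 25 },
  { name: "loyalty-points", weight: 25 },
]));
```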

There are a few ways to implement these methods. Larger companies may build an in-house solution. Google’s free offering, Google Optimize, has gained some traction recently and is a great choice for startups. A popular alternative is Optimizely, an excellent option for more established businesses willing to pay a third party for this functionality.

Whichever solution you lean towards, being able to visualise which improvements users favour means you can be confident that you are in fact introducing the right features.

Some best practices to end with

Stabilise — Remember that if the experiment is successful, it should be evolved into a full, stable feature and the experiment code cleaned up. This is often forgotten, and before you know it, you’ll have not only a codebase littered with stale experiment branches that developers struggle to work with, but an incomplete product as well.

Choose when to utilise A/B testing carefully — These tests need a reasonable amount of data to produce a reliable conclusion. If you are just launching your product, you may not be able to source the required amount of data, and test results will be unreliable.
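
For a rough sense of what “a reasonable amount of data” means, here is a sketch of the standard sample-size approximation for comparing two conversion rates at 95% confidence and 80% power. The baseline rate and the lift you want to detect are inputs you have to assume.

```typescript
// Rough per-bucket sample size needed to detect a given lift in a
// conversion rate at 95% confidence and 80% power.
// Standard two-proportion approximation; inputs are assumptions.

function sampleSizePerBucket(baseline: number, lift: number): number {
  const zAlpha = 1.96; // 95% confidence, two-sided
  const zBeta = 0.84;  // 80% power
  const p1 = baseline;
  const p2 = baseline + lift;
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / lift ** 2);
}

// Detecting a lift from 4% to 5% needs roughly 6,700 users per bucket.
console.log(sampleSizePerBucket(0.04, 0.01)); // ~6735
```

If your daily traffic is far below numbers like these, the experiment will have to run for weeks before it can tell you anything reliable.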

Be wary of overlap — Running too many experiments in the same space can devalue your data. It’s imperative you know what is creating success and what isn’t. While multivariate tests can alleviate this, it’s best to stay focused and keep overlapping tests to a minimum.

No amendments — Wait until the end of the experiment before making changes, so your final data represents a fair comparison.

Todd Runham
Gousto Engineering & Data

London-based Senior Software Engineer @goustocooking. Working mostly with JS, React, and Agile processes.