A/B testing for effective design feedback

Pineapple
The Pineapple Slice
6 min read · Oct 21, 2020

A lot of design processes go into creating a product. These processes, however, don’t result in a hard and fast decision every single time. Many decisions are made based on the designer’s intuition and interpretation of the information available. So, naturally, there are scenarios where designers find themselves split between different options. This is when Split Testing, or A/B Testing as it is commonly called, comes into the picture.

What is A/B testing?

A/B testing is an evaluation experiment performed to find out which of two versions of a webpage, app, ad, or email converts better. Variables like font, button color, or microcopy are changed, and each version is shown to a separate half of the same target audience. The resulting statistics are then studied to understand how each version affects user behavior, and the version that performs closer to the company’s goals is adopted.
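The mechanics of the split itself are simple. As a rough sketch in Python (the function and experiment name here are hypothetical, not any particular tool’s API), most testing tools deterministically hash each user into one of the two variants so that a returning visitor always sees the same version:

```python
import hashlib

def assign_variant(user_id: str, experiment: str) -> str:
    """Deterministically bucket a user into variant "A" or "B".

    Hashing the user ID together with the experiment name keeps the
    assignment stable, so a returning visitor always sees the same
    version, and different experiments split users independently.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # map the hash onto [0, 100)
    return "A" if bucket < 50 else "B"  # 50/50 split of the audience

print(assign_variant("user-42", "signup_cta_color"))
```

With the mechanics in place, a typical A/B test runs through the six steps below.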

1. Collecting Data

Before running any A/B test, collecting data is important because it helps you find out which parts of the website are performing poorly and need optimization. Collected data, such as button clicks, can act as a benchmark against which you compare the results of the A/B test.
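For instance, a benchmark can be as simple as the current click-through rate computed from existing analytics logs. A minimal sketch, with made-up field names for illustration:

```python
# Hypothetical analytics log: one record per visitor session.
sessions = [
    {"visitor": "u1", "clicked_signup": True},
    {"visitor": "u2", "clicked_signup": False},
    {"visitor": "u3", "clicked_signup": False},
    {"visitor": "u4", "clicked_signup": True},
]

# The current click-through rate becomes the benchmark that the
# A/B variants are later compared against.
baseline = sum(s["clicked_signup"] for s in sessions) / len(sessions)
print(f"Baseline signup-click rate: {baseline:.0%}")  # 50%
```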

2. Defining the goals

This means defining what conversion change you expect. It can be an increase in time spent on the page, more clicks, and so on. Defining goals helps you stay on track.

3. Generating a hypothesis

A/B testing is only a tool to evaluate decisions. It is important to follow due process: come up with possible solutions for your set goals and narrow them down with your team. You can then put forth a hypothesis explaining why each solution should work. This way, you make sure you are backed by reasoning instead of randomly exploring infinite ideas.

4. Creating A & B versions

This should be done by making versions for one goal at a time, starting with the highest-priority one. Depending on the goals, the A and B versions are designed by carefully varying one or more elements such as layout, content, navigation, CTAs, etc.

5. Testing

This is where you use testing tools to deploy your A and B versions live. Every action taken by the user is measured.
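Whatever tool deploys the variants, the essential requirement is that every recorded action carries the variant the user was shown, so the two groups can be compared later. A minimal sketch (the schema is illustrative, not any specific tool’s format):

```python
import json
import time

def log_event(user_id: str, variant: str, event: str) -> None:
    """Record one user action, tagged with the variant the user saw.

    In production this record would go to an analytics pipeline;
    printing a JSON line stands in for that here.
    """
    record = {
        "ts": time.time(),
        "user": user_id,
        "variant": variant,
        "event": event,
    }
    print(json.dumps(record))

log_event("user-42", "B", "clicked_signup")
```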

6. Analyzing results

All of the measured data is studied to understand the differences, if any, between the two versions. Based on this, the hypothesis is either supported or rejected. When supported, the hypothesis can lead you deeper into understanding a particular aspect of user behavior. And when disproved at an early stage, it can save a lot of time and resources.
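Whether the hypothesis is supported usually comes down to a statistical significance test on the two conversion rates. A common choice is a two-proportion z-test; here is a minimal, self-contained sketch with invented numbers:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test comparing the conversion rates of A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
    return z, p_value

# Invented numbers: 100/1000 conversions on A vs. 150/1000 on B.
z, p = two_proportion_z_test(100, 1000, 150, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # z ≈ 3.38, p ≈ 0.0007: unlikely to be noise
```

A small p-value means the difference between the versions is unlikely to be random noise, which is what lets you treat the hypothesis as supported rather than merely plausible.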

Let’s take a look at one such hypothesis -

Netflix non-member home page

Having run tens of thousands of tests, Netflix is one of the companies that believe greatly in the power of A/B testing: everything from CTA buttons and title thumbnails to pricing options gets tested. Navin Iyengar, a product designer at Netflix, spoke about one such test in his talk “Design Like a Scientist”, which I’ll explain below.

Netflix conducted a survey of their potential customers and asked them this question — “What one thing would you like to know more about before signing up for Netflix?”

46% of them answered that they wanted to know what content is available on the platform. This resulted in the following hypothesis —

By showing people what our catalog is before they sign up for Netflix, it will make them more likely to sign up.

Netflix then began qualitative research on this hypothesis, designing multiple versions of the non-member home page with objectives like showing a list of genres, showing the titles within a genre, and so on. On seeing these prototypes, people said this was exactly what they needed to know to decide whether or not to sign up for Netflix. But as we all know —

Don’t listen to your users, observe them.

When they observed how users actually reacted to these prototypes, they noticed that users were getting bogged down. They were hunting for the specific pieces of content that would make them sign up for Netflix, which suggested this might not be the best non-member experience. To validate this idea, Netflix ran five different A/B tests, pitting the existing version, which did not let users explore the entire catalog in detail, against versions that did. As it turned out, the existing version outperformed the variations in all five tests.

They decided to showcase a wall of content in the background of the non-member home page, personalized and tailored to the visitor’s region and to the popularity of the content there.

Data-driven decisions

The reason Netflix performs so many of these tests is to keep biases out of its approach. When working with multiple people on a team and with other stakeholders, there can be a reasonable clash of opinions. A/B testing then helps figure out which decision converts better and why. The decision is driven by data, which keeps the design under control rather than veering off in the direction of any one individual’s intuition.

If you’re not measuring it, you’re not managing it.

One thing the digital world has surely helped with is measuring data. Today we have the ability to acquire and measure data for any segment of a product or service. With A/B testing, we can compare these metrics to understand users more deeply and optimize various aspects of the design. It is also important not to focus on a single metric: analyze and study the changes in other metrics as well, to make sure we don’t miss nuances in user behavior. A common pitfall of ignoring other metrics is interpreting the data as customer loyalty when, in reality, customers simply couldn’t figure out how to cancel their subscription.
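One lightweight way to guard against that pitfall is to pair the primary metric with explicit guardrail metrics and refuse to declare a winner when a guardrail regresses. A toy sketch, with metric names and thresholds invented for illustration:

```python
# Hypothetical per-variant metrics from the same experiment.
metrics = {
    "A": {"signup_rate": 0.10, "cancellation_rate": 0.02},
    "B": {"signup_rate": 0.15, "cancellation_rate": 0.08},
}

primary_win = metrics["B"]["signup_rate"] > metrics["A"]["signup_rate"]

# Guardrail: refuse to ship a "winner" that quietly hurts another metric
# (here, allowing at most a 10% relative rise in cancellations).
guardrail_ok = (metrics["B"]["cancellation_rate"]
                <= metrics["A"]["cancellation_rate"] * 1.10)

if primary_win and guardrail_ok:
    print("Ship variant B")
else:
    print("Hold off and investigate the secondary metrics")
```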

Closing thoughts

So be it to improve user engagement, increase conversion rate, decrease bounce rate, reduce cart abandonment, or meet any other specific goal, A/B testing can prove to be a very powerful tool. It helps quickly validate prototypes and optimize versions by setting the winner of one test as the benchmark for the next iteration. It builds a culture of experimentation in the organization, and as Navin Iyengar says, in words we at Pineapple studio truly believe in -

Thinking of product development as a series of experiments leads to stronger designs.

If you like what you read, do clap for us and check out the articles we recommend below :)

This is how Netflix, Snapchat, and Microsoft break UX Design principles

The future of Social Media experience

A quick guide to an effective design handoff

Want to say hi? Drop us a line on hello@pineapple.design

Check out our work

