A/B Testing: What Lies Beneath the Powerful Weapon

Christina Gkofa · Published in Agile Insider · 8 min read · May 1, 2020
Image: Negative Space @pexels

A/B testing has definitely gained momentum; you see it everywhere in the digital products we use on a daily basis. Think of Zalando, Booking.com, trivago, Netflix, Amazon, LinkedIn, Facebook and many more.

Whether you are in e-commerce and want to boost successful checkouts or average order value, in media and aim to increase readership, or at a travel company focused on driving bookings, you have valid reasons to test all of the following: homepage promotions, navigation elements, checkout funnel components, email sign-up modals, recommended content, social sharing buttons and search results pages.

“If you aren’t testing, you don’t know how effective your changes are. You might have correlation but no causation.” — Kyle Rush, vice president of engineering at Casper and former head of optimization at Optimizely

With A/B testing becoming such a powerful weapon, I was lucky enough to have trivago as my school. I was exposed to countless experiments, as well as continual discussions with BI analysts, fellow PMs and other colleagues to interpret those experiments, generate tons of learnings and define new courses of action to drive the business forward.

Undoubtedly, there are lots of opportunities hidden behind this science, but also lots of surprises and mistakes you can make if you fail to do it properly.

A/B testing, defined

First things first: definition! In essence, we are looking at a methodology built on statistical hypothesis testing. You may come across it as A/B/n testing, bucket testing or split testing.

It is the process of a randomized experiment that compares two versions (A and B) of a web page, app or email, and measures the difference in performance. After you decide on your variations, you allocate one version to one group and the other version to another group. Statistical analysis is then used to determine which variation performs better for a given conversion goal.
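To make that last step concrete, here is a minimal sketch of the two-proportion z-test that sits behind many A/B dashboards. The visitor and conversion counts are hypothetical, purely for illustration:

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0: no difference
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal CDF via erf
    return p_a, p_b, z, p_value

# Hypothetical counts: control converts 500/10,000; variant converts 560/10,000.
p_a, p_b, z, p = two_proportion_z_test(500, 10_000, 560, 10_000)
print(f"control {p_a:.1%} vs variant {p_b:.1%}, z = {z:.2f}, p = {p:.3f}")
```

With these made-up numbers, p lands around 0.06, so the 12% relative lift would not yet clear the conventional 0.05 bar, impressive as it looks.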

Sounds super easy, right? Don’t judge a book by its cover.

Unlocked benefits

Image: Andre Furtado @pexels

This method has numerous advantages: it can be used to test any properly defined hypothesis and to improve any quantifiable metric in your product, such as feature discoverability, retention, social interactions, etc.

“If you double the number of experiments you do per year, you are going to double your inventiveness.” — Jeff Bezos, CEO of Amazon

Below are some of the most commonly cited benefits:

  • Growing website traffic: Experimenting with alternative blog post or web page titles can change how many people click that hyperlinked title and reach your website.
  • Boosted conversion rate: Playing with different placements, more suitable colors or crisper copy on your CTAs can be a game-changer, drawing far more users into your landing pages.
  • Lower bounce rate: You can retain more visitors and keep them from bouncing if you test ways of bridging the gap between what users saw before clicking and what they expect to find upon landing on your webpage.
  • Reduced risks: “The most obvious way to use A/B testing is to use it to rule something out,” Rush says. “If you see that making a change could decrease conversions, don’t move forward with it.” Commitments that would have cost time and money can be prevented.
  • Reduced cart abandonment: E-commerce businesses see 40%–75% of customers leave their website with items still in the shopping cart, according to MightyCall. Targeted A/B testing tweaks on your order page, especially the UX between checkout and entering the shipping address, may get more users across the finish line.
  • Improved user engagement: Testing various changes and rolling out the “winners” will, hopefully, result in a more optimized page, welcoming users to interact with it.
  • Optimized content: We all know by now that users on the web do not read! So what is the minimum copy that will speak to their hearts and make them convert?
  • Greater profit: The ultimate goal is to eventually increase your profit. Through testing, the user experience you offer constantly improves, building stronger trust in your brand and turning users into loyal, repeat customers, which eventually results in higher profit.

The six-ingredient recipe

Image: Georgia Maciel @pexels

After you have run this process once or twice, it becomes clear there is a certain pattern you need to follow if you wish to be successful:

  1. Goal setting: Be very clear on what you want to achieve: a conversion-rate uplift on the homepage, validation of a new feature or a potential redesign? Very important: Make sure you have a user pain to solve.
  2. Hypothesis formulated: “Our homepage conversion is low, because the main CTA is below the fold, and our users do not scroll that far. If we place the main CTA above the fold, then more users will interact with it due to its visibility. We’ll know we are right when we see an increase on the button clicks.”
  3. Variants defined: What is the variant you will have competing with your current version? “Let’s test the main CTA above the fold.”
  4. Control and test group distribution: 50% of users are in the control group, and 50% will see the CTA above the fold (see the bucketing sketch after this list).
  5. Testing: Make sure you keep monitoring the test on a daily basis while it is running.
  6. Results analysis and next steps: Depending on your traffic, 1–2 weeks should give this type of change enough data for statistical significance, so you can tell whether you have a winner or no impact at all; the sample-size sketch after this list shows where such estimates come from. Based on that, you can define a short-term course of action.
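For step 4, the assignment rule must look random across users yet stay sticky for each user, so a returning visitor always sees the same variant. A common approach, sketched here with hypothetical IDs and experiment names rather than any specific tool's implementation, is to hash the user ID together with the experiment name:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, split: float = 0.5) -> str:
    """Deterministically bucket a user; the same inputs always yield the same variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map the hash to a value in [0, 1]
    return "control" if bucket < split else "cta_above_fold"

# Hypothetical user IDs; salting the hash with the experiment name keeps
# bucket assignments independent across concurrent tests.
for uid in ("user-42", "user-43", "user-44"):
    print(uid, "->", assign_variant(uid, "homepage_cta_position"))
```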
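And the "1–2 weeks" in step 6 is not a universal constant; it falls out of a sample-size calculation. Here is a rough sketch assuming a standard two-sided test at 5% significance and 80% power, with made-up baseline and traffic numbers:

```python
from math import ceil, sqrt

def sample_size_per_variant(p_base, rel_lift, alpha=0.05, power=0.80):
    """Approximate users needed per variant for a two-proportion test."""
    z_alpha, z_power = 1.96, 0.84  # two-sided 5% alpha, 80% power
    p_var = p_base * (1 + rel_lift)
    variance = p_base * (1 - p_base) + p_var * (1 - p_var)
    return ceil((z_alpha + z_power) ** 2 * variance / (p_var - p_base) ** 2)

# Hypothetical: 5% baseline conversion, hoping to detect a 10% relative lift,
# with 5,000 visitors per variant per day.
n = sample_size_per_variant(0.05, 0.10)
print(f"~{n:,} users per variant, i.e. about {n / 5_000:.0f} days of traffic")
```

Smaller baseline rates or smaller expected lifts push the required sample, and therefore the runtime, up fast.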

“A/B testing helps you prioritize what to do in the future. If you have 20 items on your product roadmap, and you want a definitive answer as to what will move the needle, you need data. If there’s a feature that is very similar to a feature that did not work in testing, don’t go forward with it.” — Kyle Rush

To A/B test or not to A/B test?

Even though A/B testing is becoming a buzzword and a trend, you might be wondering when it is suitable to run an A/B test. You should not test everything.

The following elements should be tested:

  • Copy changes: headlines, sub-headlines, different writing styles, formatting, etc.
  • Design layout: product images, offer images, product videos, demo videos, ads, overall information hierarchy, etc.
  • Navigation: Is main navigation enough, or is a sub-navigation necessary? Also, where is that menu button?
  • Forms: radio buttons, drop-downs, check boxes, number of fields to fill, etc.
  • New technologies or major functionalities: To mitigate the risk, wrap those heavy changes around a test, so you can be sure of the impact they create before introducing them to 100% of the traffic.
  • Marketing emails: overall layout, catchy subject line, balance of visuals and text, etc.
  • Landing pages: CTAs above the fold, images, exposing the header and the navigation by default, presence of unique selling propositions (USPs), colors, etc.
  • CTAs: different copy, placement, colors and sizes, etc.
  • Social proof: reviews, testimonials, media mentions, awards, badges, certificates from experts and customers. Will adding social proof bring any value, and how many should be added?

13 tips and tricks that work

Image: Pixabay @pexels

I have accumulated certain tips and tricks that work and can help anyone starting their journey with A/B testing:

  • Set it up correctly, and know (or find someone who does) how to interpret the results.
  • Don’t compare apples to oranges.
  • When the outcome is not what you expected, be brave enough to accept it, and move on. Don’t try to turn your hypothesis around to make it fit.
  • Don’t test many changes at the same time.
  • Consider when you need to run an A/B test, and when you instead need qualitative user testing. If you want to know whether people understood a functionality, you need qualitative data. If you run an A/B test and conclude, “Hey, they got it, they clicked on it,” you are in for a surprise. Just because they clicked does not mean they got it. Maybe they were trying to find something else, or the button was simply shouting for attention.
  • Have a plan and a roadmap to follow. Don’t randomly test for the sake of testing.
  • Don’t focus only on conversions: While adding new copy to your site may produce higher conversion rates, if the converted users are of lower quality, then a higher conversion rate may actually create a negative result for the business.
  • Sometimes your hypothesis is not wrong; the way you measure it is. Make sure you have those KPIs straight. If you want to improve a user’s navigation, more clicks can signal the opposite effect: the users are struggling to find what they need easily.
  • New and returning visitors don’t belong in one bucket; their needs are different. For example, new users need more onboarding, while returning users need recommendations based on previous behavior.
  • Avoid running before-and-after tests: Instead of simultaneously testing two or more versions, you test different versions over different periods of time, which is not a fair comparison, since traffic and behavior change over time.
  • Give the test the time it needs: Don’t end it too soon or too late. Ensure you have a sufficient sample size and statistical significance; the simulation after this list shows what ending too soon does to your error rate.
  • Pause the test while you are analyzing results: An ex-colleague of mine, James Neaves, whose opinion I really value, once said: “When you are brushing your teeth, you don’t leave the tap running. Do the same with an A/B test.”
  • Stop changing “settings” in the middle of the test.
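As promised above, here is what impatience does in numbers. This small simulation, my own illustration rather than output from any testing tool, runs an A/A test (two identical variants, no real difference) and stops the moment a daily check shows p < 0.05:

```python
import numpy as np
from math import sqrt, erf

def p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test, same math as the earlier sketch."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = abs(conv_b / n_b - conv_a / n_a) / se
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

rng = np.random.default_rng(7)
RATE, DAILY, DAYS, RUNS = 0.05, 1_000, 14, 5_000  # both variants share RATE: an A/A test
early_stops = 0
for _ in range(RUNS):
    ca = cb = na = nb = 0
    for _ in range(DAYS):                    # "peek" at the results every day
        na += DAILY; nb += DAILY
        ca += rng.binomial(DAILY, RATE)      # conversions in control
        cb += rng.binomial(DAILY, RATE)      # conversions in the identical "variant"
        if p_value(ca, na, cb, nb) < 0.05:
            early_stops += 1                 # declared a winner that cannot be real
            break
print(f"False positives with daily peeking: {early_stops / RUNS:.0%} (nominal: 5%)")
```

In runs like this, the stop-at-first-significance habit flags a “winner” in far more than 5% of A/A tests; letting every test reach its planned sample size is what keeps that 5% promise honest.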

“The goal of a test is to get a learning, not a lift. With enough learnings, you can get the real lift.” — Dr. Flint McGlaughlin

Christina Gkofa

Product addict in the tech industry since 2014 (OLX, Metro Markets, StepStone, trivago). Respect great UX and retention. Cuisine and wine explorer, pug lover