Chitra Laras · Published in Bootcamp · Jan 15, 2023 · 5 min read

A/B test experimentation as a way to validate ideas is nothing new. Many companies embed it in their product DNA and use it as a way to build their products.

As a product person who started my product journey at Booking.com, I am a believer that experimentation is an effective way to validate our hypotheses fast. It is less about hitting a metric. It is about understanding if what we believe is true is, in fact, true. It is also a way to check if there are any negative impacts that come with our ideas.

There are many good articles about when and how we should run experiments, so I will not add to that pile. Instead, I will share some learnings from one of my teams that ran high-velocity experimentation on our landing pages.

Copy changes are impactful

When the team started, some of the developers had never done experimentation before. They were very skeptical that a small change could meaningfully shift user behaviour. Their minds changed completely when our first win came from changing a header copy by adding the word “cheap”. Over time, we learned how some of our hypotheses could be translated into better copy within our product.

This line of testing has the highest success rate of any change we make, mainly because it is easy to implement, so we can run many more of these tests in a year. A copy change might not be the one that gives a 50% uplift, but small wins add up when they come on a regular basis.
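To make “small wins add up” concrete, here is a back-of-the-envelope sketch in Python. The numbers are made up: it simply shows that ten winning tests worth roughly a 2% relative uplift each compound to about a 22% overall uplift.

```python
# Hypothetical numbers: ten winning copy tests, each worth ~2% relative uplift.
uplifts = [0.02] * 10

combined = 1.0
for u in uplifts:
    combined *= 1 + u  # uplifts compound multiplicatively

print(f"Combined uplift: {combined - 1:.1%}")  # -> Combined uplift: 21.9%
```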

Search engine (SEO) rank and user experience

When you work on SEO landing pages, there are two goals: rank number one in search results and convert all the visiting users. As a result, every time we make a change, we have two users: the algorithm and the real (human) users.

One hypothesis was to visualise the information on the pages. Since most people are lazy readers, visualising the information with images and short text would help them get what they need to make a purchase decision.

At the same time, this information needs to appear in search results so users can understand what the page is about. We call this information a snippet, and it is mainly decided by Google's algorithm.

Winning variant: information displayed with icons and a better layout.
The snippet needs to appear in search results for rank and click-through rate.

Our first few tests were clearly loved by our users and moved our metric successfully. However, they did not work well for our rank: the changes removed the SEO snippet and dropped our rank significantly. It took us a few more iterations to get both metrics right.

It was one of our trickiest tests, as users and the search algorithm were not aligned. Eventually, we managed to build a visual that pleased users while keeping it as a table that the search engine algorithm could read at the same time.

Since we did not run the ranking impact as an independent experiment, we needed to monitor it very closely after each release and act fast accordingly.

Same ideas but different results

As an international company, we need to think globally and implement locally. We test hypotheses across different languages and devices as much as we can. Often, we find opposite results for the exact same hypotheses and ideas.

What do you do when a test wins on one device but not another? On one hand, having a different UI between mobile and desktop when your tech stack is a responsive website feels like adding tech debt. On the other hand, users seem to only interact with it on the mobile web.

After long discussions and analysis, we came up with some guiding questions:

  • Do we have a hypothesis for why there is a difference?
  • Was there any significant negative user impact for one of the segments or devices? If there are no significant differences, we can roll it out to everyone on all devices. Remember not to slice the segments too thin, as the results might not be statistically significant for these smaller groups (see the sketch after this list).
  • How much more tech, design, and operational complexity will we add for any future changes?
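On the risk of slicing segments too thin, here is a rough Python sketch of a standard two-proportion z-test with hypothetical numbers. The same 5% relative uplift that is clearly significant on the full audience becomes inconclusive when the segment only gets 10% of the traffic.

```python
import math

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a difference between two conversion rates (z-test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Full audience: 50,000 visitors per variant, 10% baseline conversion,
# 5% relative uplift in the variant.
print(two_proportion_p_value(5000, 50000, 5250, 50000))  # ~0.009 -> significant

# Same uplift, but a segment with only 10% of that traffic.
print(two_proportion_p_value(500, 5000, 525, 5000))      # ~0.41 -> inconclusive
```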

Brand and marketing campaign: to test or not to test

The interesting yet challenging part about working on landing pages is that they are your shop’s front door.

We want to make sure we leave the best first impression by having the correct brand image, promoting the right things, or showcasing our propositions.

While the testing variables (e.g. traffic, impact) for product features are much more straightforward, it’s a little bit trickier to test ideas that are seasonal, such as a Christmas promotion or other time-related campaigns.

If we only want to show a discount banner for six weeks or change our header image during the holiday season, should we run an A/B test on it?

Example of promotional banners for a specific time frame.

I must admit we do not have a universal approach for this yet. We might never find one and will always assess these cases on an ad-hoc basis to decide whether to run an experiment or not. However, there are a few considerations we apply in this scenario:

  • Lower the confidence level. Instead of 95% or 99%, would 80% be acceptable? (See the sketch after this list.)
  • How high is the traffic? Is it high enough for us to notice any differences, instead of waiting for 1–2 weeks?
  • How big is the change? Is it a small line below the fold or is it a massive banner on top of the page?
  • How many creative materials do we have and how long do we want them on our site? If the campaign runs for 3 months, it can be useful to test several creatives in the first two weeks. If one of the creatives turns out to be much better than the rest, the effort of testing in the first weeks will pay off.
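On the confidence-level question, here is a rough sample-size sketch in Python using the standard two-proportion approximation. The numbers are hypothetical, but they show why relaxing 95% to 80% confidence can make a short seasonal test feasible: detecting a 10% to 11% conversion change at 80% power needs roughly 40% less traffic per variant.

```python
from statistics import NormalDist

def sample_size_per_variant(p_base, p_target, confidence=0.95, power=0.80):
    """Approximate visitors needed per variant to detect p_base -> p_target."""
    z_alpha = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    return (z_alpha + z_beta) ** 2 * variance / (p_target - p_base) ** 2

# Hypothetical: 10% baseline conversion, aiming to detect an uplift to 11%.
print(round(sample_size_per_variant(0.10, 0.11, confidence=0.95)))  # ~14,750 per variant
print(round(sample_size_per_variant(0.10, 0.11, confidence=0.80)))  # ~8,470 per variant
```

With enough traffic, that reduction can shave days off the wait, which matters when the banner itself only runs for six weeks.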

Chitra Laras, writer for Bootcamp. An active person (esp. weightlifting), a wife, a mother, a mindfulness enthusiast, and a product manager by profession.