The Anti-fail A/B Testing Cheatsheet

Jean-Baptiste Alarcon
Marketing And Growth Hacking
Sep 12, 2016

A/B Testing is an extremely powerful tool for ANY marketer. It allows you to make data-driven decisions and know exactly what your visitors want.

Unfortunately, a staggering number of people doing A/B Tests get fake results.

I don’t want you to be like them, so I made you this cheatsheet. Keep it in your back pocket and never fail at A/B Testing again!

Notes:

  • This cheatsheet also exists in PDF. Get it here (direct link to the file)
  • I included articles as additional reading on some of the 18 tips; they’re not required but are there if you want to dig deeper :)

1. Start simple

Get quick wins, rally your colleagues, learn and tweak your process, then take things up a notch.

Examples:

  • Test the copy on your offer, product and landing pages (make it focus on benefits, not features; keep it simple and concise)
  • Remove distractions on key pages and in your funnel.

2. Always have a hypothesis

Have a hypothesis based on data, with an underlying theory for every test.

You can use this format: By {making this change}, {KPI A, B} will improve {slightly / noticeably / greatly} because {reasons (data, theory, …)}.

Note: Craig Sullivan has another way of formulating a hypothesis (better than this one, in hindsight); check it out here.

3. Have a rigorous process

Have a rigorous process, with clear, scalable and repeatable steps.

  1. Analyze data
  2. Formulate a hypothesis
  3. Prioritize test ideas
  4. Launch the test
  5. Learn from the results
  6. Communicate learnings and implications to your team / executives / IT
  7. Repeat

Note: Not having a strict process is one of the mistakes most beginners make.

4. Have a clear roadmap

Define what success looks like and where you’re going, optimize for metrics aligned with your business goals, and focus on what has the most impact.

In your roadmap you should have:

  • Business goals: the reasons you have a website. Be concise, simple, realistic.
  • Website goals: how you will achieve these business goals through your site.
  • Priorities: find your most popular pages; which ones have the highest potential for improvement?
  • Your conversion funnel, specced out step by step. Where are the friction points?
  • Key metrics: how will you measure success?
  • A North Star: the one metric, correlated with customer satisfaction, that will do the most for your success if you focus your efforts on it.

5. Prioritize your tests

Remove ego and focus on what matters by objectively prioritizing your test ideas.

Each test will be rated on 3 criteria:

  1. Potential gain (…/10): How much room for improvement is there on the page(s) being tested?
  2. Importance (…/10): How valuable is the traffic on those page(s)?
  3. Ease of implementation (…/10): How easy will this test be to implement on your site?

Note: Having trouble determining where your focus should be? Use Brian Balfour’s question:

“What is the highest-impact thing I can work on right now given my limited resources, whether that’s people, time, or money?”
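
To make the scoring concrete, here’s a minimal sketch in Python; the ideas and their scores are made up, and I use a simple unweighted sum (weight the criteria however fits your business):

    # Rank test ideas objectively: score each one on the three
    # criteria above (out of 10) and sort by the total.
    test_ideas = [
        # (name, potential gain, importance, ease) -- hypothetical scores
        ("Simplify checkout form",    8, 9, 4),
        ("Rewrite homepage headline", 6, 7, 9),
        ("Add trust badges",          4, 6, 8),
    ]

    ranked = sorted(test_ideas, key=lambda t: t[1] + t[2] + t[3], reverse=True)
    for name, gain, importance, ease in ranked:
        print(f"{gain + importance + ease:>2}/30  {name}")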

6. Optimize for the right KPIs

There are two types of conversions: micro and macro. Make sure you track both but optimize for macro conversions.

A micro conversion is a step (click, social share, newsletter subscription, add to cart, …) on the path leading to a macro conversion, which is an outcome that impacts your bottom line (checkout, free trial, …): in other words, the main conversion goals of your website.

7. Don’t ignore small gains

If your website is already good, you usually won’t see big lifts. But don’t dismiss small gains!

If you get a 5% improvement every month, by the end of the year it adds up to an 80% increase in conversions, because the gains compound!
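
The arithmetic behind that figure, as a quick sketch (a 5% lift multiplies your rate by 1.05, twelve times over):

    # Monthly gains compound: twelve 5% lifts multiply, they don't add.
    rate = 1.0
    for month in range(12):
        rate *= 1.05
    print(f"Year-end lift: {rate - 1:.0%}")  # prints "Year-end lift: 80%"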

8. Don’t stop your test too early

If you stop your tests too early, you’ll get invalid results. You don’t want to make business decisions based on false data.

You can stop your A/B Test when:

  1. You have at least 200–300 conversions on each variation.
  2. Your sample is representative of your overall audience in proportion and composition.
  3. You let your test run for at least 2–3 weeks (if you need to prolong it, do so by a minimum of a full week).
  4. You have a significance level of at least 95%.
  5. Your conversion rate difference is stable (look at the graph in your tool to see this).

Note: Make sure you’re not stopping your tests too early with my article and by using this sample size calculator from the awesome @EvMill (oh, if you’re not afraid of a little math, his article “How Not to Run an A/B Test” is a GREAT read).
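
If you’re curious what that 95% significance level means in practice, here’s a minimal sketch of a standard two-proportion z-test; the conversion counts are hypothetical, and it complements (not replaces) a pre-computed sample size:

    # Two-sided z-test for the difference between two conversion rates.
    from math import sqrt
    from statistics import NormalDist

    def p_value(conv_a, n_a, conv_b, n_b):
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)   # rate assuming no real difference
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        return 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value

    p = p_value(conv_a=260, n_a=5000, conv_b=310, n_b=5000)
    print(f"p = {p:.3f} -> significant at 95%? {p < 0.05}")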

9. Always be testing

Every day spent without an experiment running is wasted.

Considering the time required to run a test properly, don’t lose any: always have a test running.

10. Don’t test for too long

Don’t let your tests run for too long, you’d end up with skewed results.

The cookies used by your A/B testing solution have a limited validity period; if a test outlives them, people end up being exposed to your experiments several times.

Make sure you check your tool’s cookie lifetime before launching your test.

11. Browser/device compatibility

Test that your experiment works across devices and browsers.

Don’t skip or fast-track this step: your A/B test’s success depends on your variation working properly.

12. Be wary of false positives

Each test has a chance of finding a winning variation even when there is none.

Don’t repeatedly test the same element against the winner of its previous test as you’ll end up with a false positive down the line.

I call this “cascade testing”.

Note: If you want more info on this, I covered it in my article on misinterpreting A/B test results.
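
A quick sketch of why the chain is dangerous: at a 95% significance level, each test carries roughly a 5% false-positive risk, and those risks compound across chained tests:

    # Chance of at least one false positive across n chained tests,
    # each run at a 5% false-positive rate (alpha).
    alpha = 0.05
    for n in (1, 3, 5, 10):
        p_any = 1 - (1 - alpha) ** n
        print(f"{n:>2} chained tests -> {p_any:.0%} risk of a false positive")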

13. Check your segments

Always segment your A/B test results.

A test could be losing overall but be a win on specific segments (don’t forget the segment-level result should be statistically valid too).
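
As a minimal sketch, assuming your testing tool can export one row per visitor with a segment, a variant and a conversion flag, here’s how you could break results down by segment with pandas (all data below is made up):

    # Conversion rate and sample size per (segment, variant) pair.
    import pandas as pd

    df = pd.DataFrame({                      # hypothetical per-visitor export
        "segment":   ["mobile", "mobile", "desktop", "desktop"] * 50,
        "variant":   ["A", "B"] * 100,
        "converted": ([0, 1, 1, 0] * 25) + ([1, 0, 0, 1] * 25),
    })

    report = (df.groupby(["segment", "variant"])["converted"]
                .agg(conversions="sum", visitors="count"))
    report["rate"] = report["conversions"] / report["visitors"]
    print(report)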

14. Too many variables

Don’t test too many variables at once on a variation as you won’t know which moved the needle in the right (or wrong) direction.

If you still do it, make sure to test each variable alone afterwards to see its exact individual impact on your conversion rate.

15. Don’t trust your guts (nor your brain)

We’re influenced by a long list of cognitive biases. AND our ego sneaks into most of our decisions.

  • Focus on impact and learnings, not ideas and activity.
  • Past results don’t influence future probabilities.
  • Don’t take anything for granted, challenge every idea.
  • Don’t test to confirm your personal beliefs, prioritize objectively.
  • Don’t jump to conclusions with incomplete data.

Note: If cognitive biases and A/B testing are a combination that intrigues you, this article might satisfy your curiosity.

16. Cross-check your data

ALWAYS cross-check your data / KPIs / results with your web analytics tool.

If your test data is wrong, you can do everything else perfectly and still end up with false conclusions.

17. Don’t ignore the flicker effect

Check that people don’t catch a glimpse of both the control and variation when landing on your page (Flicker effect).

If it is noticeable, here are possible reasons:

  • Your website is slow to load (it’s bad both for UX and SEO by the way).
  • Too many scripts load before your A/B Testing tool’s.
  • Something in your test is in conflict with / disables the tool’s script.
  • You didn’t put the script in the <head> of your page.

18. Multiple tests at the same time

If you run several tests in parallel, make sure your traffic is evenly and randomly distributed.

AND that you have the resources necessary to properly set up, and maximize learning from, every test.
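
A common way to get an even, independent split across parallel tests is deterministic hash-based bucketing. Here’s a minimal sketch (the visitor ID and test names are made up):

    # Hashing the visitor ID together with the test name gives each test
    # its own independent, evenly distributed assignment.
    import hashlib

    def assign(visitor_id, test_name, variants=("A", "B")):
        digest = hashlib.md5(f"{test_name}:{visitor_id}".encode()).hexdigest()
        return variants[int(digest, 16) % len(variants)]

    # The same visitor gets independent assignments in each test.
    print(assign("visitor-42", "headline-test"))
    print(assign("visitor-42", "cta-color-test"))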

Happy testing!

Armed with these 18 tips, you’ll kick serious butt with your A/B tests! Data-driven marketing is the way forward, but if you’re doing it wrong, it’s more like fake-data-driven marketing…

Test responsibly, test rigorously. Hit me up if you have any questions!

Growth Marketer @kameleoonrocks. Hit me up on Twitter @AlarconJB.