7 Reasons Most A/B Tests Fail

In almost every meeting about interface design, there comes a point where opinions differ and someone suggests running an A/B test. From then on, whenever there is the slightest hint of disagreement, someone recommends: “We should test that as well.” In reality, however, A/B testing fails nine times out of ten. Here is why:

1. Underestimated Effort

An A/B (split) test is not something you can do on the side, even if some tools promise exactly that. Even with a tool like Optimizely, doing it right is a lot of work: you need to think through your hypothesis, build the different versions of your website, deliver them to your visitors, have tracking in place, plan the timeline properly, and so on. Moreover, A/B testing is not part of a project, it is a project.

2. Too Many Versions

Normally, you start a test with two versions. But if you have never run one before, it is tempting to think, “We could test that as well. Maybe this headline with that button style will give a different result.” The combinations multiply quickly, as the sketch below shows, and you end up with far too many versions that take too long to run and are hard to analyze.
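
To see how fast the combinations add up, here is a minimal sketch; the design options are made-up examples, not anything from a real test:

```python
from itertools import product

# Hypothetical design options that tend to creep into a "simple" A/B test.
headlines = ["Save time", "Save money", "Work smarter"]
button_styles = ["green button", "blue button"]
hero_images = ["team photo", "product shot"]

# Every combination of options is another version you have to build,
# deliver, track and analyze.
versions = list(product(headlines, button_styles, hero_images))
print(f"{len(versions)} versions instead of the 2 you started with")
# 3 x 2 x 2 = 12 versions
```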

3. Not Enough Traffic

Unless you are an established company with plenty of traffic, this is where it gets hard. Say your current conversion rate is 5% and you expect your change to improve it by 10% (relative, i.e. from 5% to 5.5%). A common rule of thumb is that you need about 1,000 conversions per test version; at a 5% conversion rate that is 20,000 visitors per version, so at least 40,000 participants (visitors) on the page you want to test.
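
The 1,000-conversions rule is only a heuristic. As a rough cross-check, here is a minimal sketch of a conventional sample-size calculation for a two-proportion z-test, assuming 80% power and a 5% significance level (common but by no means mandatory choices):

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_version(p_baseline, p_variant, alpha=0.05, power=0.80):
    """Approximate visitors needed per version for a two-sided
    two-proportion z-test (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p_baseline * (1 - p_baseline) + p_variant * (1 - p_variant)
    n = (z_alpha + z_beta) ** 2 * variance / (p_baseline - p_variant) ** 2
    return ceil(n)

# The article's scenario: 5% baseline, hoping for a 10% relative lift (5% -> 5.5%).
n = sample_size_per_version(0.05, 0.055)
print(f"~{n:,} visitors per version, ~{2 * n:,} in total")
# Roughly 31,000 per version, over 62,000 in total -- even more than
# the 1,000-conversions-per-version rule of thumb suggests.
```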

4. Too Little Time

Even when you have enough traffic, the test may simply take too long, and you may have deadlines to meet. If the page you want to A/B test gets 400 visitors per day and you include all of them in the test, reaching 40,000 participants takes 100 days. You could always buy additional traffic, but do those visitors behave the same way as your actual target group?
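
Turning this into a quick calculation can save you from committing to an impossible schedule. A minimal sketch; the share-of-traffic parameter is my own addition for illustration:

```python
def test_duration_days(total_sample, visitors_per_day, share_in_test=1.0):
    """Days needed to reach the required sample, given daily traffic and
    the share of visitors actually included in the test."""
    return total_sample / (visitors_per_day * share_in_test)

# The article's numbers: 40,000 participants, 400 visitors per day, everyone included.
print(test_duration_days(40_000, 400))       # 100.0 days
# Including only half of the traffic doubles the runtime.
print(test_duration_days(40_000, 400, 0.5))  # 200.0 days
```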

5. Results Are Not Significant

You ran a test with 4,000 visitors and an average conversion rate of 5%, and Version A performs 25% better than Version B. Hurray! But when you put the numbers into a significance calculator, you find that the difference is not statistically significant.
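
A significance calculator typically runs a two-proportion z-test behind the scenes. Here is a minimal sketch; the per-version visitor and conversion counts are illustrative figures chosen to match the scenario above, not data from a real test:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pooled * (1 - p_pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# 2,000 visitors per version, ~5% average conversion rate,
# Version A converting roughly 25% better than Version B.
z, p = two_proportion_z_test(conv_a=111, n_a=2000, conv_b=89, n_b=2000)
print(f"z = {z:.2f}, p = {p:.3f}")
# Roughly z = 1.6, p = 0.11 -- not significant at the 5% level.
```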

6. Using Serial Tests

Setting up a clean A/B testing environment can be technically difficult. So why not test in series: Version A for a week, then Version B for a week? Because there are changes you cannot account for. Maybe a blog post about your business brought a broader audience to your website that week, one that behaves differently, or the weather changed. You just cannot know.

7. Poor Tracking

In my many years as a numbers-driven UX designer, I have looked at a lot of Google Analytics accounts. Only a few could answer all of my questions about user behavior. Proper tracking is essential: if you can’t track your conversion rates now, how will you be able to do it in an A/B test?


«A/B testing is not part of a project, it is a project.»

This list should give you an idea of how hard it is to run an A/B test well. I have helped with many A/B and user tests; if you need support with yours, let me know.