8 Top Mistakes Marketers Make with Website A/B Testing

Sitback Solutions
Oct 26, 2021

Website A/B testing, the process of comparing two page variations against each other to determine the more effective alternative, can be an insightful tool for site optimisation and user experience.

However, if you’re not getting the results you expected, you could be making one of these top eight mistakes:

Mistake #1: Not testing with enough traffic

Without enough traffic coming to your website, your A/B tests may never reach statistical significance.

If your site receives fewer than 1,000 unique visitors or only 5–10 conversions per week, you may be better served by investing in traffic generation than in A/B testing.
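To get a feel for how much traffic is "enough", you can estimate the required sample size up front. The sketch below uses the standard two-proportion sample-size formula; Python with scipy is our choice of tooling here, not something prescribed by the article, and the baseline conversion rate and expected lift are made-up example values.

    from math import ceil
    from scipy.stats import norm

    def sample_size_per_variant(baseline_rate, minimum_detectable_effect,
                                alpha=0.05, power=0.8):
        """Approximate visitors needed per variant for a two-proportion test."""
        p1 = baseline_rate
        p2 = baseline_rate * (1 + minimum_detectable_effect)  # relative lift
        z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
        z_beta = norm.ppf(power)           # desired statistical power
        variance = p1 * (1 - p1) + p2 * (1 - p2)
        return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

    # Example: 2% baseline conversion rate, aiming to detect a 20% relative lift
    print(ceil(sample_size_per_variant(0.02, 0.20)))  # roughly 21,000 visitors per variant

At low traffic levels, figures like these quickly translate into months of testing, which is why traffic generation can be the better investment.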

Mistake #2: Not defining a hypothesis

Tracking and measuring the results of your A/B split test can be difficult if you don’t take the time to develop a clear hypothesis from the start. With A/B testing, you always need something to measure against. A few sample hypotheses to get you thinking include:

  • Images and Graphics: Will different designs, pictures, or colours produce more conversions?
  • Headline and Copy: Will short copy perform better than long copy? Will using bullet point lists instead of paragraphs increase engagement?
  • Call to Action (CTA) Placement: Will changing the location of a CTA button or link on a page result in better performance?
  • Sign-Up or Purchase Form: Will a different form design, number of fields, or number of steps lead to more completions?

Taking a detailed look at your page analytics and understanding where the problems are can help you to define the right hypothesis to be testing. Differing opinions between team members can also help. Who’s actually right? Let your A/B testing decide!
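Whatever hypothesis you settle on, it helps to record it in a form you can measure against later. Here's a minimal sketch, assuming you track experiments in code; the field names and figures are purely illustrative.

    from dataclasses import dataclass

    @dataclass
    class Hypothesis:
        """One testable A/B hypothesis, recorded before the test starts."""
        element: str          # what is being changed
        change: str           # how the variant differs from the control
        metric: str           # how success will be measured
        baseline: float       # current value of the metric
        expected_lift: float  # minimum relative improvement worth acting on

    # Illustrative example for the CTA placement idea above
    cta_hypothesis = Hypothesis(
        element="Primary CTA button",
        change="Move the button above the fold on the product page",
        metric="Click-through rate to checkout",
        baseline=0.031,
        expected_lift=0.10,
    )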

Mistake #3: Not defining multiple hypotheses

Don’t lock yourself into a single hypothesis. Instead, you may need to explore multiple hypotheses to thoroughly explain and diagnose a problem.

As an example, take your e-commerce checkout form. You can start by defining and testing hypotheses about the form itself. Should it have fewer fields? Should you spread the form across multiple pages, or require shoppers to create an account before purchasing?

But if all you think about is the form itself, you may not notice problems with your products’ value proposition, your copy, or the CTAs you use to drive purchases. By defining multiple hypotheses that cover these different types of variables, you let your data dictate where your focus should be.

Mistake #4: Not testing long enough (or testing too long)

Testing conducted over too short a time period will not produce comprehensive or reliable results. Don’t cut the testing short, even if you feel you have a certain result in the first few days.

At the same time, testing too long risks polluting the data you’ve captured. Be sure to end your tests as soon as statistically significant winners are identified.

To determine the correct testing duration, you will need to correctly weigh up factors such as your existing traffic, existing conversion rate and expected improvement. Some sources recommend a testing period of at least two weeks. However, if your site traffic is low, that may not be enough time to secure enough visits. Calculators such as this one from VWO can help you determine whether you’ve reached statistical significance.
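If you'd rather check significance yourself than rely on a calculator, the classic test behind many of these tools is a two-proportion z-test. A minimal sketch in Python, with hypothetical conversion counts:

    from math import sqrt
    from scipy.stats import norm

    def two_proportion_p_value(conv_a, visitors_a, conv_b, visitors_b):
        """Two-sided p-value for the difference between two conversion rates."""
        p_a = conv_a / visitors_a
        p_b = conv_b / visitors_b
        p_pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
        se = sqrt(p_pooled * (1 - p_pooled) * (1 / visitors_a + 1 / visitors_b))
        z = (p_b - p_a) / se
        return 2 * norm.sf(abs(z))

    # Hypothetical results after two weeks of testing
    p = two_proportion_p_value(conv_a=120, visitors_a=5800, conv_b=158, visitors_b=5750)
    print(f"p-value: {p:.3f}")  # below your chosen threshold (commonly 0.05) reads as significant

As noted above, though, resist reading too much into the numbers in the first few days of a test.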

Mistake #5: Not leaving a control variant in place

As you start to see improvements in your engagement and conversion rates, it can be tempting to race ahead, implement more changes, and forget about utilising control variants.

However, without a control variant in place, you lose the ability to measure the impact of each change accurately.

Imagine that you want to test two new versions of an interface element. If you only test them against each other, both variants are ‘new’. Without the existing version (i.e. the control) in the mix, how will you know whether either new variant actually beats the original? It could be that leaving the element unchanged was the better option, but you won’t know unless the control is included in the test.
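One common way to keep the control in play is to bucket visitors deterministically, so the existing version keeps receiving a share of traffic alongside the new variants. A minimal sketch; the variant names and experiment key are hypothetical.

    import hashlib

    # The existing page stays in the test as the control
    VARIANTS = ["control", "variant_a", "variant_b"]

    def assign_variant(user_id: str, experiment: str) -> str:
        """Deterministically bucket a visitor so they always see the same variant."""
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        return VARIANTS[int(digest, 16) % len(VARIANTS)]

    print(assign_variant("user-42", "checkout-form-test"))  # same visitor, same bucket every time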

Mistake #6: Testing something insignificant

The elements you choose to build A/B tests around should play a significant enough role in your conversion process that making changes could have a substantive impact on your performance.

Using a different shade of blue for your button, for example, isn’t likely to have as much of an impact as changing where that button is placed in the overall page layout (unless you’re Google, and you have sufficient traffic for tiny changes to pay off).

If you suspect that what you’re testing isn’t significant, pick your battles more carefully. Review your analytics and identify a more impactful element to test instead.

Mistake #7: Changing too many variables at once

With website A/B testing, it’s important to keep a clear focus on changing one thing at a time.

When you change too many elements at once (even if you keep a control variant in place), it’s impossible to get a clear picture of which change produced which result; A/B testing should not be confused with multivariate testing. Changing several things at once can also be jarring for users who are already familiar with your interface.

Don’t negatively impact users’ experience whilst you’re trying to improve it. A slow, steady, and measured approach works best, and allows you to fully track your improvements.

Mistake #8: Not accounting for irregular occurrences

Finally, don’t forget to account for the potential impact of irregular occurrences — such as holidays, promotions, search engine algorithm changes, or other variables — on your tests’ outcomes.

Pay attention to what’s going on outside of your A/B testing program. For example, if your website optimisation service is making changes to your site, keep in mind that the way visitors navigate through it might change as well. Failing to account for these types of changes will lead to skewed data and faulty decision-making.

Additionally, testing should take place over comparable periods to produce the most meaningful results. If your organisation experiences seasonal fluctuations, for example, comparing on-site performance from a peak period against a quiet period will not generate useful insights.
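If you export daily results from your testing tool, one simple safeguard is to flag the affected dates and analyse them separately. A rough sketch with pandas and made-up data:

    import pandas as pd

    # Hypothetical daily results exported from a testing tool
    daily = pd.DataFrame({
        "date": pd.to_datetime(["2021-11-24", "2021-11-25", "2021-11-26", "2021-11-27"]),
        "visitors": [820, 790, 2400, 2100],
        "conversions": [18, 17, 110, 96],
    })

    # Dates affected by a promotion or holiday (a Black Friday weekend here)
    irregular_dates = pd.to_datetime(["2021-11-26", "2021-11-27"])

    # Keep the irregular days out of the main analysis and review them separately
    clean = daily[~daily["date"].isin(irregular_dates)]
    print(clean[["visitors", "conversions"]].sum())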

Still not sure how to get more accurate results from your website A/B testing? Get in touch with Sitback’s experienced team for expert guidance.

An earlier version of this article was published on the Sitback blog at: https://blog.sitback.com.au/blog/8-top-mistakes-marketers-make-with-website-a/b-testing

Sitback is a Human-Centred Design & Development Agency, specialising in UX and Web Development, based in Sydney.