Why Is A/B Split Testing Crucial To Success? — Part 3
The Advanced Guide
In my previous guide, I took you through A/B Split Testing in more detail. I highly recommend that you take the time to read through “Why Is A/B Split Testing Crucial To Success? — Part 2 (The Intermediate Guide)” before going on to read this final article in the series.
In this article, I will take you through the most common mistakes even the best brands have made while conducting A/B Split Testing, to help guide you on the path to increased success using this process in your own efforts.
“The biggest mistake salespeople make is being inward-out versus outward-in.” — Greg Alexander (Entrepreneur) @GregAlexander
What Should You Watch Out For When Implementing A/B Split Testing?
In this article, I will cover…
#1 Common Mistakes Brands Make When A/B Split Testing
#2 4 Reasons Why Your A/B Split Tests Aren’t Working
Let’s dive in…
___________________________________________________________________
#1 Common Mistakes Brands Make When A/B Split Testing
A/B Split Testing is fun. With so many easy-to-use tools around, anyone can (and should) do it. However, there’s actually more to it than just setting up a test. Tons of companies are wasting their time and money by making easily avoidable mistakes.
1. Calling A/B Tests Too Early
Whether you’re running a test for a month or longer, you should understand how statistical significance affects your testing calls. For example, if your test indicates that version B is more successful than version A, you need to be sure you’ve measured the results against a large enough audience to trust that conclusion. If your test audience is too small, even a large measured lift can cloud your data and lead you down the wrong path. By using a tool that calculates statistical significance for you, such as Optimizely, you will be able to measure your test results more effectively and make a judgement call based on more than just the raw results. If your results don’t meet your statistical significance benchmark, you can decide whether to run the test for longer, or alter the test and save it for another day.
Watch out for A/B Split Testing tools “calling it early” and always double check the numbers. The worst thing you can do is have confidence in data that’s actually inaccurate. That’s going to lose you money and quite possibly waste months of work.
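If you want to sanity-check the numbers yourself rather than relying solely on your testing tool, a quick two-proportion z-test is enough. Here is a minimal sketch in Python using SciPy; the visitor and conversion counts are made-up placeholders, not data from any real test.

```python
# Minimal sketch: double-checking an A/B test result yourself with a
# two-proportion z-test. The visitor/conversion numbers are made up.
from math import sqrt
from scipy.stats import norm

visitors_a, conversions_a = 10_000, 480   # control
visitors_b, conversions_b = 10_000, 540   # variation

rate_a = conversions_a / visitors_a
rate_b = conversions_b / visitors_b

# Pooled conversion rate and standard error under the null hypothesis
pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))

z = (rate_b - rate_a) / se
p_value = norm.sf(abs(z)) * 2          # two-tailed

print(f"Lift: {(rate_b - rate_a) / rate_a:.1%}, z = {z:.2f}, p = {p_value:.3f}")
# Only trust the "winner" if the p-value clears your pre-set threshold
# (e.g. 0.05) AND the sample per variation is large enough.
```

In this made-up example the variation shows a 12.5% lift, yet the p-value lands just above 0.05, which is exactly the kind of result a tool might prematurely “call” for you.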
2. Ending Tests Too Early
Let’s say you are running your tests on a high-traffic website or landing page. You achieve 98% confidence and 250 conversions per variation in 3 days. Is the test done? Not at all. The bare minimum time frame for running tests is 2 weeks, but personally, I am in favor of a 1 month test run. Why? Because a 1 month run allows you to measure performance more accurately across varying user behaviour on different days of the week and at different times. A 1 month test sprint helps you gather data on more than just the highest-traffic days, giving you a more realistic view of your test’s performance over time and therefore more valuable, actionable insights.
The only time you can break this rule is when your historical data says with confidence that the conversion rate is the same every single day. Even then, it’s better to test one full week at a time.
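To see whether a two-week or one-month window is even realistic for your traffic, you can estimate the required sample size up front. Below is a minimal sketch using the standard two-proportion sample-size approximation; the baseline conversion rate, the lift you want to detect and the daily traffic figure are all assumptions you would replace with your own numbers.

```python
# Minimal sketch: estimating how long a test needs to run. The baseline
# conversion rate, expected lift and traffic numbers are assumptions.
from math import ceil
from scipy.stats import norm

baseline = 0.05            # current conversion rate
lift = 0.15                # smallest relative lift you care to detect (15%)
target = baseline * (1 + lift)
alpha, power = 0.05, 0.80

z_alpha = norm.ppf(1 - alpha / 2)
z_beta = norm.ppf(power)

# Classic two-proportion sample-size approximation, per variation
variance = baseline * (1 - baseline) + target * (1 - target)
n_per_variation = ceil((z_alpha + z_beta) ** 2 * variance / (target - baseline) ** 2)

daily_visitors_per_variation = 1_500
days = ceil(n_per_variation / daily_visitors_per_variation)

print(f"~{n_per_variation} visitors per variation, ~{days} days at current traffic")
# Even if this says the test could finish in a few days, honour the
# 2-week minimum / 1-month preference above so the test covers full
# weekly cycles of user behaviour.
```

With these assumed numbers the formula suggests roughly ten days of traffic, but the two-week minimum still applies so the test covers full weekly cycles.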
Always pay attention to external factors
Is it Christmas? Your winning test during the holidays might not be a winner in January. If you have tests that win during shopping seasons like Christmas, you definitely want to run repeat tests on them once the shopping season is over. Are you doing a lot of TV advertising or running other massive campaigns? That may also skew your results. You need to be aware of what your company is doing.
External factors definitely affect your test results. When in doubt, run a follow-up test.
3. Not Basing Tests On A Clearly Defined Hypothesis
I like spaghetti. But spaghetti testing (throw it against the wall, see if it sticks) not so much. It’s when you test random ideas just to see what works. Testing random ideas comes at a huge expense — you’re wasting precious time and traffic. Never do that. You need to have a hypothesis. What’s a hypothesis?
A hypothesis is a proposed statement made on the basis of limited evidence that can be proved or disproved, and is used as a starting point for further investigation. And this shouldn’t be a spaghetti hypothesis either (a random statement). You need to complete proper conversion research to discover where the problems lie, perform analysis to understand why they occur, and then come up with a hypothesis for overcoming them.
If you test A vs. B without a clear hypothesis, and B wins by 15%, that’s nice, but what have you learned? Nothing. The more important outcome of any test is what you learn about your audience; that learning improves your customer theory and helps you come up with even better tests.
4. Not Monitoring Your Test Data In Conjunction With Google Analytics
Averages lie, always remember that. If A beats B by 10%, that’s not the full picture. You need to segment the test data; that’s where the insights lie. While Optimizely has some built-in segmentation of results, it’s still no match for what you can do within Google Analytics. You need to send your test data to Google Analytics and segment it there. Whether you are using A/B Split Testing for your website, outreach emails, Facebook ads or anything else, you should always, always, always be sending data to Google Analytics. Even if you have a third-party or platform-specific analytics tool in place, monitoring your data across different platforms and tools will help you consolidate and measure the results of your tests more accurately.
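As a concrete illustration of why segmentation matters, here is a minimal sketch in Python using pandas. The file name and column names (“variation”, “device”, “source”, “converted”) are hypothetical placeholders for whatever your Google Analytics or testing-tool export actually contains.

```python
# Minimal sketch: segmenting exported test results instead of trusting the
# overall average. File and column names are hypothetical; adapt them to
# whatever your GA / testing-tool export uses.
import pandas as pd

df = pd.read_csv("ab_test_export.csv")   # hypothetical export file

overall = df.groupby("variation")["converted"].mean()
print("Overall conversion rate per variation:\n", overall)

# The interesting story is usually hidden in the segments
by_segment = (
    df.groupby(["variation", "device", "source"])["converted"]
      .agg(["mean", "count"])
      .rename(columns={"mean": "conv_rate", "count": "visitors"})
)
print(by_segment)
# Variation B might "win" overall while losing badly on mobile or on paid
# traffic, exactly the kind of insight an un-segmented average hides.
```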
5. Wasting Time And Resources On No-Brainer Tests
Are you running tests on different colours and fonts? Stop.
There is no universally best colour or font; it’s always about visual hierarchy, balance and appeal. Sure, you can find tests online where somebody found gains by testing colours or fonts, but they’re all no-brainers. Don’t waste time testing these elements; just follow basic visual best practices and implement them. Rather than wasting time on colour and font tests, experiment with which images appeal best to your audiences, what content increases their engagement, which calls-to-action drive more conversions, and so on. Trust me…these types of tests will make more of an impact on your bottom line.
6. Giving Up After Test 1 Fails
You set up a test, and it failed to produce a lift. Oh well. Let’s try running tests on another page?
Not so fast! Most first tests fail. It’s true. I know you’re impatient (so am I), but the truth is that iterative testing is where it’s at. You run a test, learn from it, and improve your customer theory and hypotheses. Run a follow-up test, learn from it, and improve your hypotheses again. Run another follow-up test, and so on.
If the expectation is that the first test will knock it out of the park, money will get wasted and your team will get disheartened. But it doesn’t have to be that way. Just remember that testing never ends; you need to continue your optimization efforts whether your tests fail or not.
7. Running Multiple Tests At The Same Time With Overlapping Traffic
You found a way to cut corners by running multiple tests, or testing multiple elements, at the same time within one sprint. This is a surefire way to set your tests up for failure. The general rule of thumb is to run one test for one hypothesis on one page, for one audience, for one campaign, or for one ad set at a time…all the while only altering one (that’s right, you read that right, one) element per variation at a time.
Yes, this means that your testing process will be a lengthy one, but it helps you gain insights on unskewed data.
8. Ignoring The Small Wins
Your treatment beat the control by 4%. “Bah, that’s way too small of a gain! I won’t even bother to implement it,” I’ve heard people say.
Here’s the thing. You’re not going to get massive lifts all the time. In fact, massive lifts are very rare. Most winning tests are going to give small gains — 1%, 5%, 8%. Sometimes, a 1% lift can result in millions of dollars in revenue. It all depends on the absolute numbers you’re dealing with. But the main point is this: you need to look at it from a 12-month perspective.
One test is just one test. You’re going to run many, many tests. If you increase your conversion rate by 5% each month, that compounds to roughly an 80% lift over 12 months. That’s compound growth; that’s just how the math works. And 80% is a lot.
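If you want to check that compounding math yourself, it is a one-liner:

```python
# A 5% relative lift compounded every month for 12 months
rate = 1.05 ** 12
print(f"{rate - 1:.0%}")   # ~80%
```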
So keep getting those small wins. They will all add up in the end.
9. Not Running Tests All The Time
Every single day without a test is a wasted day. Testing is learning. Learning about your audience, learning what works and why. All the insight you get can be used in all of your sales and marketing efforts such as your website design, outreach emails, PPC ads, creatives and more.
You don’t know what works until you test it. Tests need time and lots of it.
Having one test up and running at all times doesn’t mean you should put up bad tests. Absolutely not. You still need to do proper research, have a proper hypothesis and so on. Have a test going all the time. Learn how to create winning A/B Split Testing plans. And never stop optimizing.
10. Not Keeping Track Of Tests
The last mistake I’ll mention here is probably one of the most important and most common: not keeping track of your tests. Best practice is to have a test calendar in place where you not only track which tests you will be running, for which element of your campaign and when, but also the results each test achieved and how those results have shaped your next tests.
A quickly set-up Google Sheet is more than sufficient if you don’t have the time or resources to invest in tools. As long as you can keep track of your experiments and your results, you can’t go wrong.
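If you go the spreadsheet route, the columns matter more than the tool. Here is a minimal sketch of a test log kept as a CSV from Python; the column names and the example row are purely illustrative, so adapt them to whatever your team actually records.

```python
# Minimal sketch of a test log you could keep in a CSV (or mirror in a
# Google Sheet). The columns and the example row are purely illustrative.
import csv
from pathlib import Path

LOG = Path("test_log.csv")
COLUMNS = ["test_name", "hypothesis", "element_tested", "start_date",
           "end_date", "visitors_per_variation", "result", "next_step"]

row = {
    "test_name": "Homepage hero CTA",
    "hypothesis": "A benefit-led CTA will lift sign-ups",
    "element_tested": "CTA copy",
    "start_date": "2024-01-01",
    "end_date": "2024-01-31",
    "visitors_per_variation": 14000,
    "result": "+6% sign-ups, p = 0.03",
    "next_step": "Follow-up test on the pricing-page CTA",
}

write_header = not LOG.exists()
with LOG.open("a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    if write_header:
        writer.writeheader()   # write the header only once
    writer.writerow(row)
```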
See what industry leader Instapage has to say about A/B Split Testing here.
#2 4 Reasons Why Your A/B Split Tests Aren’t Working
Did you know that only 1 out of every 8 A/B tests manages to produce significant results? And that’s at a professional level! Imagine how the statistics must look for people just getting started. There are a number of reasons why your A/B tests aren’t giving you the results you’re looking for. It could be the specific test you’re running, but then again, it might go a little deeper than that.
In this section, I will take you through the top 4 reasons your A/B tests might not be working, and how you can get back on track today.
1. You Only Copy Other People’s Tests
Of course, reading an article on A/B Split Testing examples can be a great way to spark your imagination, but you need to remember that what works for one business may not work for another, as each company and effort is unique. You should always take other brands’ results with a grain of salt and consider them in relation to:
- The offer
- The design of the page
- The audience viewing the test
- The audience’s relationship with the business at hand
Just because something worked for someone else doesn’t mean that it will work for you. Take the time to consider why something worked rather than jumping the gun to implement new tests based on what you’ve heard about online. An arrow isn’t necessarily better than not having an arrow. A background image isn’t necessarily better than no background image.
It’s all about context, and that’s the context that only you know best.
2. Testing Too Many Variables At Once
They say less is more. Well, that’s definitely true of A/B Split Testing. By testing fewer elements at a time, you’ll get…
- More clarity about what caused a specific change
- A more controlled experiment
- Less volatility within your overall conversions / revenue stream
It might be tempting to test multiple elements on a page at a time. There’s nothing wrong with this, and there’s actually a name for it: multivariate testing. But along with the potential upside of a massive increase in conversions, you also run the risk of a massive decrease. This is especially troubling for businesses that rely on their landing pages and websites for a significant portion of their revenue. If you want to run a true A/B test, you need to focus on testing one element at a time.
While you might be able to get a more significant change by changing multiple elements at once, testing more than one thing at a time results in sloppy, muddled tests that yield no conclusive results.
Stick to one test at a time. Your team will thank you later.
3. You’re Driving The Wrong Traffic
There are two main components to an A/B test: the variations you’re testing and the users who are viewing them.
Most marketers get so bogged down in their actual test that they forget to consider the other side: the people behind the test. You need to know where your traffic is coming from and what those visitors are looking for. Don’t be so quick to call it quits on an A/B test, especially if you haven’t considered who you’re testing on. You’ve heard of audience targeting, and the same principle applies here: to see optimal results, you need to be delivering your tests to the right audience for your efforts.
4. Not Proving Your Tests Completely
There’s no better feeling than having one of your A/B tests destroy the original so that you can go on to crown yourself “King of Conversion.”
But before your inauguration, you might want to double check your findings to ensure that the results weren’t just a fluke and that you’ve actually drilled down to some meaningful results.
This is especially true for tests that surprised you with sudden landslide victories at a high level of statistical significance. These are the types of tests you might be tempted to quickly declare a winner and move on from. But even if you’ve wrapped up a test with 95% confidence, there’s still a 5% chance that the result is a false positive.
Be careful not to jump to conclusions (in these cases particularly) if the results are something you’ll be applying globally across your efforts, or as part of your widespread optimization process.
High-impact test results should be proven beyond a shadow of a doubt by running the test again. Make absolutely sure there’s no unidentified variable before implementing the change across your entire site. If you’re running tests to 95% significance, there’s only a 1 in 400 chance that you’ll get a false positive twice in a row (0.05 × 0.05 = 0.0025, or 1 in 400).
___________________________________________________________________
Thank you for reading this article series. I hope that you now have a clearer picture of why A/B Split Testing is crucial to your sales and marketing efforts and to increasing your chances of success. We are constantly working to bring you more resources to help your brand hit its revenue targets. Browse through what we’ve published so far on our publication, Inside Revenue, and check back regularly for updates. We wish you all the best in your brand’s future sales and marketing efforts.
General Resources
“How Do Great Brands Develop Their Ideal Customer Profile? — Part 1 (The Beginner’s Guide)”
“How Do Great Brands Develop Their Ideal Customer Profile? — Part 2 (The Intermediate Guide)”
“How Do Great Brands Develop Their Ideal Customer Profile? — Part 3 (The Advanced Guide)”
“Why Is A/B Split Testing Crucial To Success? — Part 1 (The Beginner’s Guide)”
“Why Is A/B Split Testing Crucial To Success? — Part 2 (The Intermediate Guide)”
“Why Is A/B Split Testing Crucial To Success? — Part 3 (The Advanced Guide)”