Testing is a key part of conversion optimization. It’s the only way to validate a hypothesis, to know what’s really working. Until you test something, you’re only guessing.
You should not implement any website change based on the way it looks alone. Or worse yet — you shouldn't change anything just to please your boss or client. It's impossible to predict the impact of a design change in advance. Anyone telling you otherwise is either full of themselves or clueless.
Testing is the only way to verify that a change actually produces positive results. Quantitative data speaks for itself. You need to measure the impact that changes have on your metrics, such as sign-ups, downloads, purchases, or whatever else your goals may be.
What about just changing something, and seeing if conversion rate will go up or down?
While this is sometimes called sequential testing, it isn't really testing at all. It's not an apples-to-apples comparison, since you're not comparing the same traffic under the same market conditions.
Your conversion rate is not a fixed number; it fluctuates daily and monthly, and it will differ for each traffic source. Your traffic sources might be mostly the same week to week, but the exact distribution can vary greatly. So if you measure results by displaying Version A for one week and then Version B for one week, it won't be an accurate comparison.
You can’t trust the results of sequential testing.
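To see why week-over-week comparisons mislead, here's a minimal simulation (the 5% rate and visitor counts are made-up numbers for illustration): two weeks with the exact same "true" conversion rate still produce different measured rates, purely by random chance — before any change in traffic mix or market conditions is even considered.

```python
import random

random.seed(7)

TRUE_RATE = 0.05            # identical "real" conversion rate in both weeks
VISITORS_PER_WEEK = 10_000

def observed_rate(true_rate: float, visitors: int) -> float:
    """Simulate one week of traffic and return the measured conversion rate."""
    conversions = sum(1 for _ in range(visitors) if random.random() < true_rate)
    return conversions / visitors

week_a = observed_rate(TRUE_RATE, VISITORS_PER_WEEK)
week_b = observed_rate(TRUE_RATE, VISITORS_PER_WEEK)

# The two weeks differ even though nothing about the page changed.
print(f"Week A: {week_a:.2%}  Week B: {week_b:.2%}")
```

If pure noise alone moves the needle like this, a sequential "test" can easily credit (or blame) your redesign for a difference that was going to happen anyway.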
A/B testing and multivariate testing
Broadly speaking, there are two types of testing: A/B/n testing and multivariate testing.
A/B testing (or ‘split testing’) is when you create two versions of a page (page A and page B). 50% of the traffic is shown page A, and the other 50% is taken to page B. The division is done automatically by split-testing software (e.g. Optimizely or Visual Website Optimizer).
If a user lands on page A, a cookie is placed on her computer, so that when she comes back later she will always see version A. This ensures that people won't notice that you're running a test on your website. (Of course, if they delete cookies or switch browsers, devices, or computers, they can see a different variation — but that will be such a small share of visitors that you shouldn't be concerned by it at all.)
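In practice the split-testing tool handles the 50/50 division and the cookie for you. The core idea can be sketched as deterministic bucketing — this is a simplified illustration, not how any particular tool works, and the visitor id here stands in for the cookie:

```python
import hashlib

def assign_variant(visitor_id: str, variants=("A", "B")) -> str:
    """Deterministically bucket a visitor: the same id always gets the same page.

    Hashing gives a roughly even split across variants, and because the
    assignment depends only on the id, repeat visits see the same version
    (the same stickiness a cookie provides).
    """
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

print(assign_variant("visitor-42"))   # stable across calls
print(assign_variant("visitor-42"))   # same result every time
```

Adding a variant C or D is just a longer `variants` tuple — the hash spreads traffic evenly across however many versions you run.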
When we're talking about A/B testing, we're really talking about A/B/n testing (e.g. A/B/C/D testing). The more versions you test at the same time, the longer it takes to find out which one is best, because each variation gets a smaller slice of your traffic. Speed of testing matters too, so if you have low traffic (e.g. less than 30k visitors / mo), skip the Cs and Ds.
Limitations of A/B Testing
A/B testing is a versatile tool, and when paired with smart experiment design and a commitment to iterative cycles of testing and redesign, it can help you make huge improvements to your site. However, remember that the limitations of this kind of test are summed up in the name. A/B testing is best used to measure the impact of two to four variables on interactions with the page. Tests with more variables take longer to run, and in and of itself, A/B testing will not reveal any information about interactions between variables on a single page.
Multivariate testing enables you to test several page elements at the same time — and every combination of their variations against each other. Let me explain.
Let's say you're testing 2 versions of a headline, 2 versions of the call-to-action text on a button, and 3 different images on the page, all at the same time. That's 2 × 2 × 3 = 12 possible combinations.
So the winning combination could be:
- headline 1, button 2, image 1
- headline 2, button 1, image 3
- headline 1, button 1, image 2
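Since multivariate tests are fully factorial, the number of versions is simply the product of the options for each element. The example above can be enumerated in a few lines:

```python
from itertools import product

# The three elements being tested, with their variations
headlines = ["headline 1", "headline 2"]
buttons = ["button 1", "button 2"]
images = ["image 1", "image 2", "image 3"]

# Full factorial: every combination of every variation
combinations = list(product(headlines, buttons, images))

print(len(combinations))  # 2 * 2 * 3 = 12 versions to test
for combo in combinations[:3]:
    print(combo)
```

Notice how fast this grows: add a third headline and a second button color, and you're suddenly splitting your traffic across dozens of versions.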
Limitations of Multivariate Testing
The single biggest limitation of multivariate testing is the amount of traffic needed to complete the test. Since all experiments are fully factorial, too many changing elements at once can quickly add up to a very large number of possible combinations that must be tested. Even a site with fairly high traffic might have trouble completing a test with more than 25 combinations in a feasible amount of time.
When using multivariate tests, it’s also important to consider how they will fit into your cycle of testing and redesign as a whole. Even when you are armed with information about the impact of a particular element, you may want to do additional A/B testing cycles to explore other radically different ideas. Also, sometimes it may not be worth the extra time necessary to run a full multivariate test when several well-designed A/B tests will do the job well.
How to Run a Test
You can follow five easy steps.
Step 1: Research
Step 2: Observe and Formulate Hypothesis
Step 3: Create Variations
Step 4: Run Test
Step 5: Result Analysis and Deployment
What are the Mistakes to Avoid While A/B Testing?
Mistake #1: Not Planning your Optimization Roadmap
Mistake #2: Testing too Many Elements Together
Mistake #3: Ignoring Statistical Significance
Mistake #4: Using Unbalanced Traffic
Mistake #5: Testing for Incorrect Duration
Mistake #6: Failing to Follow an Iterative Process
Mistake #7: Failing to consider external factors
Mistake #8: Using the Wrong Tools
Mistake #9: Sticking to Plain Vanilla A/B Testing Method
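Ignoring statistical significance (Mistake #3) is worth dwelling on, because it's easy to get wrong by eye. A standard way to check it is a two-proportion z-test. A minimal sketch with made-up numbers — note that even a 12% observed lift may not be significant yet:

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test: returns (z statistic, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the normal CDF, Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Made-up example: A converts 500/10,000 (5.0%), B converts 560/10,000 (5.6%)
z, p = two_proportion_z_test(conv_a=500, n_a=10_000, conv_b=560, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these numbers the p-value lands just above the conventional 0.05 threshold — a 12% relative lift that still isn't conclusive. Calling the test early on a result like this is exactly the mistake.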
What are the Challenges of A/B Testing?
Challenge #1: Deciding What to Test
Challenge #2: Formulating Hypotheses
Challenge #3: Locking in on Sample Size
Challenge #4: Analyzing Test Results
Challenge #5: Maintaining a Testing Culture
Challenge #6: Changing Experiment Settings in the Middle of an A/B Test
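Locking in on sample size (Challenge #3) usually comes down to a power calculation done before the test starts. A rough sketch using the standard two-proportion formula, assuming 95% confidence and 80% power, with made-up baseline numbers:

```python
from math import ceil

def sample_size_per_variant(p_base: float, relative_lift: float,
                            z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate visitors needed per variant to detect a given relative lift.

    Defaults correspond to 95% confidence (z_alpha) and 80% power (z_beta).
    """
    p_var = p_base * (1 + relative_lift)              # expected variant rate
    variance = p_base * (1 - p_base) + p_var * (1 - p_var)
    effect = (p_var - p_base) ** 2                    # squared absolute lift
    return ceil((z_alpha + z_beta) ** 2 * variance / effect)

# Made-up example: 5% baseline conversion, hoping to detect a 10% relative lift
n = sample_size_per_variant(p_base=0.05, relative_lift=0.10)
print(f"~{n:,} visitors needed per variant")
```

The takeaway: small lifts on low baseline rates demand tens of thousands of visitors per variant, which is why low-traffic sites should test bold changes rather than button-color tweaks.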
15 Things to Consider Before Running Your A/B Test
2. Create a strong hypothesis
3. Layout and style of your website
4. Layout Design Elements
8. Call to Action
9. Opt-in forms
10. Social proof
11. Media mentions
13. Navigation Bars
14. Awards and Badges
15. Email Campaigns