How to Prioritize A/B Testing Without Overcomplicating It
Test ideas come at you fast. You could be shopping on your favorite ecommerce site or reviewing a recent analytics report, and before you know it you have a new testing idea. The problem is that the more testing ideas you collect, the harder they become to manage.
Testing prioritization makes your testing more efficient. If done correctly, prioritization helps you understand what you should be testing and gives structure to your testing program. Having a good prioritization method in place can also protect the integrity of your testing.
I cannot tell you how many times a CEO has said we should test this or that without any real data behind the idea. A prioritization system lets you stay open to outside recommendations without being derailed by them. It also shows those who may be unfamiliar with testing that there is, in fact, a science to conversion optimization.
Below is a framework to help you conduct better A/B tests. It is not a silver bullet that will work for every business, but a foundation you can customize to your own needs. How you weight each factor depends on your business type (agency or client-side) and your business goals (ecommerce or lead generation), so your prioritization will look vastly different from one team to the next. Take the following categories and weight them appropriately for your business and its needs. Here are four areas to evaluate when prioritizing A/B tests.
Focus Factor — How will this test affect my primary business goal?
The idea behind the focus factor is that you should test the most important areas on your site first. This also includes prioritizing tests that have the potential to impact the most important visitors on your site.
With the focus factor, ask yourself, “Am I really focusing on the biggest-impact areas of my site?” Below are a few additional questions to guide you:
- How will this test affect my primary business goal? (revenue / leads)
- Does this test affect our sign-up or checkout process? (If so, a winning test may have a greater impact)
- How many people on the site will be affected by this test?
- What type of customer segments will be affected by this test? (High versus low converters)
Performance Factor — How well will this test perform?
The performance factor asks how likely the test you are launching is to succeed. If you are like me, you may ask, “How can I calculate the performance of a test before launching it?” The reality is that you cannot, but there are several things you can do to get a good sense of how it may perform.
Below are a few questions to ask when evaluating likely performance:
- Are there any case studies that have seen success with this type of test?
- How have similar tests we’ve conducted performed for our business?
- How much customer data (feedback and analytics) do we have supporting this idea?
Effort Factor — What is the level of effort needed to complete this test?
The effort factor asks how difficult this test will be to implement. Depending on your business, this could play a huge role in how a test is prioritized. If you work at a small agency with limited resources and capacity, you may favor easier tests. If you work at a large retailer, you may have more flexibility in which tests you can run.
Below are a few questions to ask:
- How long will this test take? There are great tools out there like the test duration calculator from VWO.
- What development resources are required to implement this test?
- Will this test affect any crucial areas of the site that could prolong the test?
- Whose buy-in (approval) do you need before launching this test? I’ve seen several tests get terminated simply because the business wasn’t ready to make a major change.
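If you want a quick back-of-the-envelope answer to the “how long will this test take?” question without opening a calculator like VWO’s, the math behind most of those tools can be sketched in a few lines. This is a rough sketch using the common n ≈ 16·p(1−p)/d² rule of thumb (roughly 95% significance, 80% power); the function name and example numbers are my own illustrations, not VWO’s exact method.

```python
# Rough A/B test duration estimate, as an alternative to an online
# test duration calculator. Uses the common rule of thumb
# n ~= 16 * p * (1 - p) / d^2 per variation (approx. alpha = 0.05,
# 80% power, two-sided). Illustrative only -- not VWO's exact formula.

def estimate_test_days(baseline_rate, relative_lift, daily_visitors, variations=2):
    """Estimate how many days a test needs to run.

    baseline_rate  -- current conversion rate (e.g. 0.05 for 5%)
    relative_lift  -- minimum detectable effect, relative (e.g. 0.10 for +10%)
    daily_visitors -- visitors per day entering the test
    variations     -- total variations, including the control
    """
    absolute_lift = baseline_rate * relative_lift
    # Required sample size per variation (normal approximation)
    n_per_variation = 16 * baseline_rate * (1 - baseline_rate) / absolute_lift ** 2
    total_sample = n_per_variation * variations
    return total_sample / daily_visitors

# Example: 5% baseline conversion, detect a 10% relative lift, 2,000 visitors/day
days = estimate_test_days(0.05, 0.10, 2000)
print(f"Run the test for roughly {days:.0f} days")  # roughly 30 days
```

Note how sensitive duration is to the lift you want to detect: halving the detectable lift quadruples the required sample, which is one reason small tweaks on low-traffic pages rank poorly on the effort factor.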
Risk Factor (Bonus) — How large of a departure is this from your current experience?
There is a saying in testing: “Big changes, big results. Little changes, little results.” While this is definitely true, the reality is that when you are running a lot of A/B tests, the tests that get people most excited are the ones that completely rethink the site experience.
This is not about testing something for shock value, but about understanding customer behavior and making decisions based on that data, even if the design recommendation is unpopular. Bigger departures from the current experience also typically produce faster test results (both good and bad).
So maybe rethink that A/B test changing your call-to-action button color from orange to green, and start thinking about bigger, bolder tests that really shake up the status quo of your site.
Ready, set, go!
While there are hundreds of ways to prioritize tests, I hope this framework gives you a way to start evaluating your A/B test ideas quickly and effectively.
If you are a true A/B tester, you know you love to test. And the quicker you can get from idea to implementation, the quicker you’ll be able to improve your overall marketing campaigns.