How to run multiple A/B tests simultaneously
When companies first get on board with the idea of CRO, and A/B testing in particular, they can occasionally get overexcited and end up running multiple tests at the same time. Seeing test results come in is addictive, but without proper planning and supervision, running multiple tests at once can skew your results.
Test clash occurs when more than one test runs at the same time and either breaks the website from a coding perspective or produces unreliable result data. For example, if you’re testing both a new CTA design and a new modal window on the same page, one user might see both. If that user goes on to convert, how do you know whether it was the CTA or the modal that convinced them?
Ideally you would only run one test at a time. This would allow for thorough test QA, prevent website bugs, and remove any doubt surrounding your test results. But in the real world, businesses rarely have time to wait for one test to finish before beginning another. As much as CRO managers would prefer a one-test approach, it’s often not feasible for the business.
Most A/B testing platforms claim to mitigate the effects of test clash by segmenting audiences (so a user won’t see multiple tests running on the site) or by accounting for multiple tests when processing results, but it’s better if you can avoid test clash yourself. Here are some techniques that will allow you to run multiple A/B tests simultaneously.
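The audience-splitting approach those platforms use can be sketched in a few lines. This is a minimal illustration, not any particular vendor's implementation: user IDs are hashed to a stable bucket, and each bucket is mapped to at most one running test, so no visitor is ever exposed to two experiments at once. The test names are invented for the example.

```python
import hashlib

# Hypothetical test names; "holdout" means the visitor sees no experiment.
TESTS = ["cta_redesign", "modal_window", "holdout"]

def assign_test(user_id: str) -> str:
    """Hash the user ID to a stable bucket, then map buckets to tests.

    The hash is deterministic, so the same user always lands in the
    same single test across visits - the key property for avoiding clash.
    """
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % len(TESTS)
    return TESTS[bucket]
```

Because assignment is derived from the user ID rather than stored state, it needs no database lookup and survives across sessions as long as the ID is stable (e.g. a first-party cookie).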
Testing for different segments
Even if A/B testing tools already do this automatically, there’s no reason you can’t also segment your audiences yourself, just to be extra careful. And really, customising tests for different personas is something you should be doing already: a mobile user’s experience is completely different to a desktop user’s, and a customer who fits one persona will be searching for something different to another. Personalising your A/B tests for segments not only helps you avoid problems with results; it also ensures your customers are catered for on an individual level, which will improve your conversion rate further. Segments to consider include:
- Device (mobile, tablet, desktop)
- New visitors vs returning
- VIP customers (gold or silver card holders)
- Customers who fit within a defined segment, e.g. for travel: single travellers, couples, families or groups
- Age groups or gender
- Product type (home insurance vs car insurance, shoes vs handbags, all inclusive holidays vs flight only)
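Routing each segment to its own test can be as simple as a lookup table. The sketch below is illustrative only: the segment keys and test names are made up, and a real setup would use whichever segment attributes your platform exposes. The point is that tests aimed at disjoint segments can never be seen by the same visitor.

```python
# Hypothetical mapping from (device, visitor_type) segments to tests.
# Segments not listed fall through to the unmodified control site.
SEGMENT_TESTS = {
    ("mobile", "new"): "simplified_checkout_test",
    ("desktop", "returning"): "loyalty_banner_test",
}

def test_for_visitor(device: str, visitor_type: str):
    """Return the single test targeting this segment, or None."""
    return SEGMENT_TESTS.get((device, visitor_type))

print(test_for_visitor("mobile", "new"))        # simplified_checkout_test
print(test_for_visitor("tablet", "returning"))  # None -> control experience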
The downside to segmentation testing is that traffic per test will be lower, which may affect how long each test needs to run. However, this is my preferred method for avoiding test clash when A/B testing, as it builds on personalisation techniques, and if segments are chosen wisely, the impact on test length can be negligible.
Testing on different pages or funnels
By running tests on different pages you can reduce the dreaded test clash, especially if you’re testing different funnels. Sometimes websites have multiple funnels which drive overall conversion rates. The main focus might be on purchases, but account creation or acquisition can also be a big driving force in your business.
Running tests on a product page will likely impact purchase conversion, while testing form layout on an account creation page can improve that funnel. Acquisition testing covers anything from driving traffic to the site (e.g. PPC split testing) to collecting email addresses for marketing purposes.
Tests run on different pages of the site can safely overlap in time, and there is a lower chance of bugs being introduced this way. If each test’s goals are aligned with a different funnel, one test is also less likely to affect another’s results.
Adjust your KPIs
Similarly, if your tests have different KPIs, each one’s results will be focused on its individual goal. It’s important to ensure that the KPIs or goals for each test don’t impact each other. Examples:
- Drive engagement with a CTA or area of the website
- Improve RPV (revenue per visitor)
- Drive progression to the next step or along the funnel
- Increase account signups
- Reduce customer service calls/complaints
These are alternatives to the ultimate goal of increasing purchases, but all are valid measures of business success. By creating tests which aim to improve KPIs other than conversion, you can focus on individual goals and run tests simultaneously.
Build a testing roadmap
I recently wrote about the importance of prioritisation when testing, and building a roadmap can help you manage your testing schedule, ensuring as many experiments as possible are run while avoiding test clash.
I suggest a 3–6 month roadmap of which tests are going to be run and when, taking test clash into consideration (alongside other measures of priority). When you know what is scheduled and can guarantee tests won’t conflict, you keep management happy.
Multivariate testing
My final technique for avoiding test clash actually incorporates the idea of multiple tests into one. Multivariate testing (MVT) is the practice of changing multiple elements on one page and combining all the changes into a single A/B/C/D/E/etc. test.
Imagine a truth table where each column is an element to be tested on the page and each row is a version of the test (A/B/C/etc.). The key is to ensure every possible variation of each element is tested against every other. The more elements you test, the more complex the test becomes, and the longer it must run: MVT tests can take a long time to reach significance because traffic is diluted between the variations.
However, as a method to avoid test clash, it works.
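The truth-table layout above can be generated mechanically: take the Cartesian product of every element's variants, and each resulting combination becomes one row (one variation) of the test. The element names and variants below are invented for illustration; the structure is the point.

```python
from itertools import product

# Hypothetical page elements under test, each with its variants.
elements = {
    "cta_colour": ["green", "orange"],
    "headline": ["original", "benefit-led"],
    "modal": ["off", "on"],
}

# Full-factorial layout: every combination of every variant is one row
# of the truth table, i.e. one variation (A, B, C, ...) of the test.
variations = list(product(*elements.values()))

print(len(variations))  # 2 * 2 * 2 = 8 variations splitting the traffic
for label, combo in zip("ABCDEFGH", variations):
    print(label, dict(zip(elements, combo)))
```

This also makes the traffic-dilution problem concrete: adding one more two-variant element doubles the number of rows, halving the traffic each variation receives and lengthening the time to significance.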
Test clash can be avoided using lots of different tricks, or just by relying on your A/B testing solution to segment your traffic for you. However, it is something you should always be aware of when trying to optimise your website. Unless you’re running one test at a time, there is a chance that one test will impact another’s results.
If the business has the patience, then one test at a time is the best method. If not, then explaining the risks of test clash to management and then using the above techniques to mitigate that risk is your best option.