Recently, whilst discussing a website redesign project, the conversation turned to A/B testing: is it possible to A/B test a whole page redesign? The answer is ‘yes’, but this method of validating a new design raises a few red flags for me.
Sometimes, testing just one element on a web page might not be something you wish to do. Maybe you’re going through a re-branding exercise and want to see the impact of a new look & feel for your website. Or perhaps a landing page is not performing as well as expected but you can’t establish which element(s) are failing. In these cases, going straight to A/B testing might not be the best solution.
The three main problems which arise when requesting a BIG A/B test are:
- The amount of time & skills required to develop a new layout
- How much time is required to run an A/B test to significance
- Analysing which elements did and did not work once the results are in
A major pitfall of A/B testing is simply the length of time needed to run a test. You cannot rush an A/B test: it typically requires weeks of traffic to gather enough data to reach statistical significance. Can you wait that long for your results?
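To make the duration problem concrete, here is a rough back-of-the-envelope sketch using the standard two-proportion sample-size formula. The 3% baseline conversion rate, 0.5-point target lift, and 2,000 daily visitors are purely illustrative assumptions, not figures from any real test:

```python
import math
from statistics import NormalDist

def visitors_per_variant(baseline, lift, alpha=0.05, power=0.80):
    """Approximate sample size per variant for a two-sided
    two-proportion z-test (normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_b = NormalDist().inv_cdf(power)          # critical value for power
    p1, p2 = baseline, baseline + lift
    p_bar = (p1 + p2) / 2
    n = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
         + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / lift ** 2
    return math.ceil(n)

# Illustrative numbers only: 3% baseline conversion, detecting a lift to 3.5%
per_variant = visitors_per_variant(0.03, 0.005)
daily_visitors = 2000  # assumed site traffic, split evenly across two variants
days_needed = math.ceil(per_variant * 2 / daily_visitors)
print(per_variant, days_needed)
```

Even under these fairly generous assumptions, the test needs tens of thousands of visitors and runs for weeks; halve the traffic, or shrink the lift you want to detect, and the duration grows quickly.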
And then we run into another problem: how do you measure impact? Depending on how different the new design is, some elements may have changed significantly in both look & feel and position on the page. So how do you measure exactly which change, or combination of changes, produced the impact you’re seeing?
The answer to both problems could lie in performing user testing before you even open your development toolkit.
What is user testing?
User testing is the process of asking your intended audience what they think of your website or product. There are many user testing methods, from pop-in surveys to recorded sessions in which customers attempt a task whilst you observe them.
In the example of a page redesign, user testing gathers enough data to establish a vote of confidence (or not) on whether to move forward with the design.
It helps to conduct some initial research to establish what doesn’t work on the existing page before feeding that qualitative data to a designer. They can then create a design, and ideally a prototype (a semi-functional version of the design that allows a user to progress through a mock funnel or perform a simple task) to be user tested. Prototyping tools speed up the creation and presentation of prototypes, so designers don’t need to know how to code in order to build one.
The prototype is then presented to users to test and supply feedback on. Again, there are lots of tools and services available to help you complete this step. You can use some of your own users (often cheaper and easier, as you know they are part of your target market) or pay a user testing company to find testers who match your user personas. Testing can be completed in-house or remotely, depending on the level of detail required & staff availability.
If you’ve asked your testers to complete a task, they then attempt it using your prototype, and their session is recorded for you to play back. You can collect each user’s personal thoughts by asking them either to fill in a questionnaire after they’ve finished, or to speak their thoughts aloud as they work, recording their comments.
Typically with user testing, any major UX issues are discovered with the first 3–6 users; beyond that, feedback becomes repetitive. This makes user testing a quick way to gather feedback, much faster than A/B testing.
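That 3–6 user rule of thumb lines up with the commonly cited discovery model attributed to Nielsen and Landauer: if each tester independently surfaces a given usability problem with probability p (roughly 0.31 in their data), then n testers find about 1 - (1 - p)^n of the problems. A quick sketch of that curve:

```python
def share_of_problems_found(n_testers, p=0.31):
    """Expected share of usability problems found by n testers,
    assuming each tester hits a given problem with probability p."""
    return 1 - (1 - p) ** n_testers

# Diminishing returns: each extra tester uncovers less that is new
for n in (1, 3, 5, 6, 15):
    print(f"{n:>2} testers: ~{share_of_problems_found(n):.0%} of problems found")
```

Five or six testers already uncover the large majority of problems under this model; the long, flat tail after that is exactly where feedback starts to feel repetitive.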
User testing allows you to gather feedback on a design or product. It can be done iteratively, allowing designers to refine their design before it reaches the developers or engineers. Having qualitative data from users equips you with confidence before implementation, and avoids the work and time required to build an A/B test and run it to statistical significance.
Do you regularly perform user testing? Or do you prefer to run large A/B tests? Which user testing methods do you prefer? Please leave a comment.