Data-driven UX decisions

Building better experiences through obsessive research and testing.

Nick Boyce
Product management and technical leadership

--

We all make countless decisions every day in our jobs. Which of the hundreds of reasonable things on my list will provide the highest value? How am I going to find the best solution for our customers? How will we know if we’ve even solved the problem?

Meanwhile, with every interaction, your customers are constantly sending you signals that can be used to make better decisions. A solid testing programme depends on using data first and opinions second. As Jim Barksdale, former CEO of Netscape, famously said:

If we have data, let’s look at data. If all we have are opinions, let’s go with mine.

So let’s dig in.

Research

Customer research

We must understand our customers in order to solve problems for them.

Daily survey. Have a member of your team phone a small handful of customers and ask them five questions. Record the answers in a Google Doc.

Though it’s a small sample size, it provides useful qualitative information that helps us spot patterns in individual customer stories in a way we could never get from an analytics funnel or an aggregate market report.

Market research. Periodically it can be useful to conduct large-scale market research, which provides aggregate information on our customer demographic. These studies are less frequent than our daily surveys, but have a much larger sample size.

Google Analytics, KISSmetrics, Crazy Egg and custom reporting

Google Analytics (from here on, GA), Mixpanel and others can provide detailed data on how customers are interacting with your site.

Do and measure. Some things are not A/B testable, some aren’t worth clogging up the testing queue, and sometimes we’re confident enough in our hypothesis that we’ll accept measuring after launch.

For every significant feature we create a GA annotation so we can do before-and-after analysis to check that everything went as expected. GA and Mixpanel provide different lenses (GA tells you what’s happening, Mixpanel tells you who’s doing it). Hotjar and other tools are also invaluable for session recording and heatmapping.
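
As a rough illustration, the before-and-after check can be as simple as a two-proportion test on conversion totals either side of the annotation. This is a minimal sketch with hypothetical figures, using statsmodels (an assumption; any stats package would do):

```python
# A minimal before-and-after sanity check around a launch annotation,
# comparing conversion for the fortnight either side. All figures are
# hypothetical.
from statsmodels.stats.proportion import proportions_ztest

conversions = [1184, 1312]   # orders [before, after]
sessions = [41250, 41870]    # sessions [before, after]

stat, p_value = proportions_ztest(conversions, sessions)
before, after = (c / n for c, n in zip(conversions, sessions))
print(f"before: {before:.2%}, after: {after:.2%} (p = {p_value:.3f})")
```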

Funnels. Funnels are ways of modelling a user’s journey through the key parts of your site. Keep a regular eye on these to try to identify under-performing steps that might represent problems or opportunities.
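
As a sketch of what that regular eyeballing might look like, here is a tiny step-to-step conversion report; the step names and counts are hypothetical:

```python
# A tiny funnel report over hypothetical step counts pulled from GA or
# Mixpanel. A step converting well below its neighbours is a candidate
# for research and testing.
funnel = [
    ("viewed product", 50000),
    ("added to basket", 9000),
    ("started checkout", 4200),
    ("entered payment", 3100),
    ("completed order", 2600),
]

for (step, count), (next_step, next_count) in zip(funnel, funnel[1:]):
    print(f"{step} -> {next_step}: {next_count / count:.1%}")
```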

Custom reporting. Answering very specific questions might involve data wrangling in SQL or Excel. For example: finding the average number of line items in our abandoned baskets, or understanding the relative performance of the main navigation versus the various search functions.
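
For the first example, the query behind such a report might look something like this. The schema is entirely hypothetical, with an in-memory SQLite database standing in for whatever store you actually use:

```python
# A hypothetical custom report: the average number of line items in
# abandoned baskets. The schema here is an assumption, seeded with a
# few sample rows so the sketch runs end to end.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE baskets (basket_id INTEGER PRIMARY KEY, status TEXT);
CREATE TABLE basket_items (basket_id INTEGER, sku TEXT);
INSERT INTO baskets VALUES (1, 'abandoned'), (2, 'completed'), (3, 'abandoned');
INSERT INTO basket_items VALUES (1, 'A'), (1, 'B'), (1, 'C'), (2, 'A'), (3, 'B');
""")

avg_items, = conn.execute("""
    SELECT AVG(item_count)
    FROM (
        SELECT b.basket_id, COUNT(*) AS item_count
        FROM baskets b
        JOIN basket_items i ON i.basket_id = b.basket_id
        WHERE b.status = 'abandoned'
        GROUP BY b.basket_id
    ) AS per_basket
""").fetchone()
print(f"average line items per abandoned basket: {avg_items}")
```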

Testing

UserTesting.com

We use UserTesting.com to conduct remote user interviews. These allow us to observe people using the site as they follow a list of tasks. It gives us the “why” that is difficult to measure through metrics alone, and we predominantly use it to validate new UX ideas, as well as for occasional competitor comparisons.

There is no substitute for pen and paper

Watch the videos together as a team and share your findings. It can be extremely humbling to find out that your brilliant idea makes no sense to the user, but it’s better that these lessons are learned together.

Test your iterations to gain confidence. Google Ventures’ Jake Knapp puts the emotional journey of user testing perfectly in this post:

First session: “We’re geniuses!” or “We’re idiots!”

Sessions 2–4: “Oh, this is complicated…”

Studies 5–6: “There’s a pattern!”

By running new tests with each of our major iterations we see quick improvement, which usually builds enough confidence to launch (though we occasionally reject ideas that don’t test well).

These are not real customers. Test participants are not your customers, nor are they in the same frame of mind that your customers will be in, so don’t treat this as customer research.

A/B testing

A/B testing can give us the “what” to user testing’s “why”, by providing a framework to validate a hypothesis.

Much has been written on the topic, and it’s easy to believe it can be used to unearth hidden conversion opportunities that will make you millions. The truth is, not everything is suitable for A/B testing, and there is a fine art to planning tests properly.

Start with a strong hypothesis. Don’t use a scattergun approach; test something you are confident about that is backed by data.

Ensure that the hypothesis is testable. A good A/B testing candidate should be a noticeable change, with decent traffic and an expectation of a double-digit improvement to a metric close to the area you are affecting (i.e. don’t expect a change on your category page to have a measurable effect on overall conversion through your checkout).

We usually calculate the test plan using experimentcalculator.com.
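
If you want to sanity-check the calculator’s output, the underlying power calculation is straightforward. This sketch uses statsmodels, with a hypothetical baseline rate and uplift:

```python
# A rough sketch of the sample-size arithmetic behind tools like
# experimentcalculator.com. The baseline rate and expected uplift are
# hypothetical.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.04   # current conversion rate of the affected step
uplift = 0.10     # the double-digit relative improvement we expect

effect = proportion_effectsize(baseline * (1 + uplift), baseline)
n = NormalIndPower().solve_power(effect_size=effect, alpha=0.05,
                                 power=0.8, alternative="two-sided")
print(f"~{n:.0f} visitors per variation")
```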

Agree the success metric upfront. Don’t leave it open to interpretation, don’t let ego get in the way, and be willing to lose. The truth can be humbling.

Use more than one source of data. No matter how well you plan, data can get ambiguous very quickly, so you may want to use two or more sources of data (for instance GA segments and Mixpanel data) for sanity checking.
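
Even a crude comparison helps. Something like the following, with hypothetical totals, is enough to flag when two sources have drifted apart:

```python
# A crude cross-source sanity check with hypothetical totals. If GA and
# Mixpanel disagree by more than a few percent, investigate the tracking
# before trusting the test result.
ga_conversions = 1184
mixpanel_conversions = 1226

divergence = abs(ga_conversions - mixpanel_conversions) / ga_conversions
if divergence > 0.05:
    print(f"Sources diverge by {divergence:.1%} - check your tracking")
else:
    print(f"Sources agree within {divergence:.1%}")
```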

Sometimes you have to go with your gut

We don’t make every decision using data. Sometimes it just isn’t possible, particularly with creative decisions, and at other times there isn’t much that can be learned from analysis.

There is never absolute truth in numbers, and sometimes it’s best to go with your instincts and have the confidence to push forward.

--

Nick Boyce
Product management and technical leadership

Founder of Pollenary. Acquisition, analytics, research and optimisation. Maker of web things and collector of prints.