Make better decisions with data

You’re dealing with data overload. Some measures are going up, some are going down, and new things to analyse are constantly cropping up. In between the numbers, heat maps and graphs are people using your services. Minute by minute, hour by hour, they are trying to get things done, things that matter to them. The question is, how do you use your data to figure out what’s important to them? How do you know when you have made things better?

Analysing web data is complex and hard to get right, but with a little preparation a world of insights awaits. Hopefully these three tips will help get you on the right track.

1. Choosing what to measure

Deciding on the metrics that matter to your service can be difficult. Should you go with sessions, page views, average time on page, exit rates, search rates or conversions?

Getting the most from a metric is all about context: it should be tied to a problem you’ve observed and a solution you are proposing. A metric can be a KPI, but it isn’t always one. KPIs are linked to your core business objectives; metrics don’t have to be, which means they let you make improvements that aren’t necessarily tied to those objectives. To give an example:

  • Feedback from a user research session shows that people find a landing page confusing. Here your metric might be the percentage of people using your site search from the landing page (people use site search when they can’t find a route to what they’re looking for).
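
Here’s a minimal Python sketch of that landing-page search-rate metric. The variable names and counts are invented; in practice the numbers would come from your analytics export (sessions that landed on the page, and the subset of those that went on to use site search).

  # Hypothetical counts pulled from an analytics export
  landing_sessions = 12_480      # sessions that started on the landing page
  sessions_with_search = 2_371   # of those, sessions that used site search

  # The metric: share of landing-page sessions that fell back on site search
  search_rate = sessions_with_search / landing_sessions
  print(f"Landing-page search rate: {search_rate:.1%}")  # -> 19.0%

If that percentage drops after you redesign the landing page, fewer people are falling back on search, which is exactly the impact you were hoping for.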

Another way to recognise a good metric is to know what a bad one looks like. Here are four things that make a bad metric:

  • A vague metric. For example, a metric so broad that your changes can’t register an impact, or so broad that it would be impossible to prove your change was the one that made an impact, rather than some other factor.
  • An irrelevant metric. For example, you are looking at the conversion rate for a page when you should be measuring page value (an example of the hit- vs session-level problem).
  • An overly granular metric. Here we might pick a metric so small and specific that it doesn’t help our business goals, or we can’t gather enough data to make our findings statistically significant.
  • A vanity metric. One that is used to impress your manager rather than please your users.

If you’d like to take a closer look at this question, see Avinash Kaushik’s blog posts on metrics to die for and super awesome metrics.

2. Write your hypothesis

So you’ve found some great metrics and now you’re ready to optimise… Wait, not so fast! Before you start changing everything, you need a hypothesis in place.

People underestimate the importance of having a hypothesis. Without one, at best your data will be inconsistent; at worst, it will be meaningless. Without a hypothesis we can never be sure that our changes made the impact we wanted, because we never decided what impact we wanted before we made the change.

The great thing about having a hypothesis is that it pins you down to one metric and forces you to take a risk. You stake your success on something you feel will have an impact. If your change goes as expected, brilliant; if it doesn’t, never mind. You’ve learnt something new.

You can write a good hypothesis by using the following formula (found on WiderFunnel):

Changing w:
To x:
Will lead to y:
Because z:

For example: changing the PDF link into a call-to-action button will lead to more downloads, because more people will notice the link.

Two things about hypotheses:

  1. Pick one metric. Hypotheses should be tight; the more metrics you choose, the harder it is to get a meaningful result.
  2. Base your hypothesis on data: something you’ve observed.

3. The big reveal

You’ve made a change and gathered the results, hooray. But hold on a minute: you need to make sure that your results are statistically significant. Statistical significance is one of the most important elements of our role as analysts and digital people.

Without testing for significance we can make the wrong decisions, resulting in lost time, money and morale.

Two basic statistical significance tests to get you started are:

Chi-squared test

Chi-squared (or goodness of fit) tests are easy to implement. They work well if you are analysing independent, discrete categories and have a medium-sized sample, say 100 to 50,000 observations. Chi-squared tests do not reveal much about the strength of a relationship between variables: they couldn’t tell us how strongly copy length correlates with task completion, say, but they could show that a relationship exists. They are also sensitive to sample size.
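
To make that concrete, here’s a minimal Python sketch using scipy.stats.chi2_contingency. The counts are invented: conversions on an old versus a new version of a page.

  from scipy.stats import chi2_contingency

  # Invented counts:    converted  not converted
  observed = [[130, 870],   # old page (1,000 sessions)
              [165, 835]]   # new page (1,000 sessions)

  # Tests whether conversion is independent of page version
  chi2, p_value, dof, expected = chi2_contingency(observed)
  print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")

  if p_value < 0.05:
      print("Unlikely to be chance alone: the pages really differ.")
  else:
      print("No significant difference detected.")

Note that the 0.05 threshold is a convention, not a law; decide on your significance level before you run the test.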

T-test / Z-test

T-tests and Z-tests are used to calculate the likelihood that a variable belongs to one group of data (distribution) versus another. Typically we would use this test to see whether an average has moved after a change has been implemented, i.e. what is the likelihood that the new average (after a test) is actually an improvement, as opposed to being an expression of the previous distribution (meaning nothing has changed).

T/Z-tests work well if you are analysing two groups of continuous data and know their mean and variance, e.g. average spend for people in London vs Manchester. Things to be aware of are false positives and negatives, and the impact of multiple tests on the validity of your experiments. T-tests are used when you have small samples (or an unknown population variance) and z-tests when you have large ones.
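
As an illustration, here’s a minimal Python sketch using scipy.stats.ttest_ind; the spend figures are invented, and Welch’s variant is used so the two groups don’t need equal variances.

  from scipy.stats import ttest_ind

  # Invented average-spend samples (in pounds) for two cities
  london = [24.0, 31.5, 18.2, 27.9, 22.4, 35.1, 29.8, 20.6]
  manchester = [19.5, 22.1, 17.8, 25.0, 16.3, 21.7, 18.9, 23.4]

  # Welch's t-test: does mean spend differ between the two groups?
  t_stat, p_value = ttest_ind(london, manchester, equal_var=False)
  print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

With large samples the t-distribution converges on the normal distribution, so a t-test and a z-test give effectively the same answer; that’s why the t-test is the safer default when in doubt.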

Right, so those are my three tips for making better decisions with data. If you have any questions on metrics, analytics or anything discussed, feel free to give me a tweet @ifranco_29
