Deciding on a metric — measuring what matters
A few points to keep in mind when deciding on what to measure and track.
We have an abundance of data. Events travel through pipelines from websites and mobile apps every second, and rows upon rows of transactional data fill tables in our databases. According to the newest Dice Tech Job Report, SQL was the most desired skill in tech in 2019 and the year prior. Data Engineering is the fastest-growing role in the tech industry, with Data Science two places behind, and neither trend looks likely to change any time soon.
All of this gives us near-infinite possibilities for creating new metrics to measure. This is not a new phenomenon, however. A 1994 BusinessWeek article titled “Database Marketing” described “Many companies […] too overwhelmed by the sheer quantity of data to do anything useful with the information”. Twenty-seven years later, transforming data into useful information is an occupation in itself, with whole teams created for the sole purpose of making data useful.
To cut through the noise in the most efficient way, we need to decide on what is important to measure and how to measure it.
What makes a good metric
A few days ago I was part of a workshop whose aim was to come up with a way to measure the success of our campaigns here at Gousto. I thought it would be a good idea to start the workshop with a primer on what makes a good metric. These are some of the thoughts I put together.
Metrics should be actionable
In an ideal scenario, when a metric changes we would know whether the change is good or bad for the business, and when something goes wrong, we would know where to look for the cause. For example, we can (and should) track our number of orders. When it suddenly drops, we can look at funnel metrics such as cart conversion, which give a more precise view of what went wrong and speed up the potential fix.
The opposite of an actionable metric is a vanity metric, examples of which are:
- Running total of customers
- Social media followers
- Page views
Although these metrics are nice to have, they are not actionable. A running total of customers can only go up, while a given number of page views can come from a thousand users or a hundred, leading to very different results.
A better version of those metrics (i.e. actionable) would be:
- Active subscriptions
- Engagement per post
- Click-through rates or Pages per session
Questions to ask when considering if a metric is actionable:
- Can this metric lead to a course of action or inform a decision?
- Does it help to achieve a business goal?
Metrics should be comparable
Metrics should be comparable across time. We should be able to say in a few months whether the metric has improved. We should also be able to identify customer segments that under- or over-perform on a certain metric.
Most of the time we are interested in ratios and rates when we talk about comparable metrics. This allows us to normalise for the number of people performing certain actions.
The timeframe under consideration should also be stated explicitly. For example, you would not compare 7-day retention to 30-day retention just because both are called ‘retention’. You have to compare like for like (more on that in the next section).
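To make the timeframe point concrete, here is a minimal sketch (the data and function names are hypothetical, not a Gousto implementation) where the retention window is an explicit parameter, so a 7-day figure can never be silently compared with a 30-day one:

```python
from datetime import date, timedelta

# Hypothetical data: user_id -> (signup date, set of dates the user was active).
users = {
    1: (date(2021, 5, 1), {date(2021, 5, 5)}),
    2: (date(2021, 5, 1), {date(2021, 5, 20)}),
}

def retention(users, window_days: int) -> float:
    """Share of users active again within `window_days` of signing up."""
    retained = sum(
        1
        for signup, active_dates in users.values()
        if any(signup < d <= signup + timedelta(days=window_days) for d in active_dates)
    )
    return retained / len(users)

print(f"{retention(users, 7):.0%}")   # 50% — only user 1 returned within 7 days
print(f"{retention(users, 30):.0%}")  # 100% — both users returned within 30 days
```

The same dataset yields 50% or 100% depending on the window, which is exactly why the timeframe has to be part of the metric's name and definition.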
Examples of good vs bad metrics:
- CTR (click-through rate) vs Number of clicks
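A short sketch of why the ratio beats the raw count, using made-up cohort numbers: ranked by clicks alone, cohort A looks better, but normalising by impressions shows cohort B converts at twice the rate.

```python
# Hypothetical per-cohort data — illustrative numbers only.
cohorts = {
    "A": {"clicks": 500, "impressions": 50_000},
    "B": {"clicks": 200, "impressions": 10_000},
}

def ctr(clicks: int, impressions: int) -> float:
    """Click-through rate: clicks normalised by impressions."""
    return clicks / impressions

for name, c in cohorts.items():
    print(f"Cohort {name}: {c['clicks']} clicks, CTR = {ctr(**c):.1%}")
# Cohort A: 500 clicks, CTR = 1.0%
# Cohort B: 200 clicks, CTR = 2.0%
```

Because CTR is a rate, the two cohorts are directly comparable despite their very different sizes, which the raw click count hides.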
Questions to ask when considering if a metric is comparable:
- In two months, are we going to be able to say if the metric has improved?
- Can we compare different cohorts of users?
Metrics should be understandable
Metrics shouldn’t be overly complicated; otherwise people won’t remember them or even understand what they mean, and it will be much harder to build an intuition for how they should behave.
Metrics should also be precisely defined so that any ambiguity is eliminated, i.e. it should be possible to represent a metric as a mathematical formula where each component has a definition. As mentioned in the previous section, defining the timeframe over which a metric is calculated is crucial for those that can be calculated over different timeframes, such as active users (DAUs/WAUs/MAUs).
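As an illustration of pinning a metric down to a formula, here is a toy sketch (event data and names are hypothetical) of daily active users, defined unambiguously as DAU(d) = |{distinct users with at least one event on day d}|:

```python
from datetime import date

# Hypothetical event log: (user_id, event_date).
events = [
    (1, date(2021, 5, 1)),
    (1, date(2021, 5, 1)),  # duplicate events must not double-count a user
    (2, date(2021, 5, 1)),
    (2, date(2021, 5, 2)),
]

def dau(events, day: date) -> int:
    """Daily active users: distinct users with at least one event on `day`."""
    return len({user for user, d in events if d == day})

print(dau(events, date(2021, 5, 1)))  # 2 — users 1 and 2
print(dau(events, date(2021, 5, 2)))  # 1 — only user 2
```

Every component (what counts as an event, what counts as a user, which day) is explicit, so two people computing the metric will get the same number.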
Questions to ask when considering if a metric is understandable:
- What is the definition of the metric?
- What behaviour would impact the metric?
Metrics should be relevant
Metrics should be aligned with company or project goals — think OKRs, long term company goals, success definitions of a project.
If a metric changes in the desired direction, that should mean we are closer to achieving our goal.
We decide on the success criteria of the projects we run. A good campaign might be one that brings new customers to the product or makes existing customers come back to the product. Whatever it is, the metric we decide on must be tied to that goal so that we can track the success of our actions precisely.
Questions to ask when considering if a metric is relevant:
- Can we decide on a value of the metric that would mean we succeeded on a project?
- If it changes, does it mean we are closer to achieving our goal?
Metrics should be measurable
We need to have data and a way to measure the metric in question.
If we need additional engineering work to gather the data, then maybe it’s better to find a proxy using existing information.
For example, take the happiness of our customers: it would most definitely be worth tracking, but we run into the problem of defining such a value in a succinct way that everyone agrees with.
Questions to ask when considering if a metric is measurable:
- Do we have the data to measure it?
The points above may be obvious to some and less obvious to others, and some of us (like me) tend to forget them in day-to-day work while juggling multiple tasks. They should serve as a reminder to keep in the back of our heads rather than a rigid set of rules, especially in conversations about measuring performance and as a way to challenge some of the metrics used in the company.
Thinking about this beforehand saves us time down the line, when we would otherwise realise that the thing we decided on is not necessarily the thing we want to measure, or that it is not a fair representation of our goals.