Why being data-led could be doing you more harm than good.
Are you wasting time reporting on vanity metrics? Stop! A short guide to spotting them in your reporting.
Design projects have a problem. The problem is that everyone is too data-led these days.
This might seem surprising — surely being data-led is a good thing?
Sure, it is, if you do it right. But tracking meaningless data, data that exists purely to make your team look good, or data picked because it reinforces decisions you were going to make anyway, most certainly isn’t a good thing.
These are ‘vanity metrics’, and they are just noise. A waste of everyone’s time. A waste of time for the poor analyst spending hours producing reports each week that nobody reads. A waste of time for the decision-maker who gets no real insight, only the misguided comfort that even if it is all meaningless, at least they’re data-led.
We’re on a mission to end this culture of accruing vast reservoirs of meaningless data, cut through the noise and focus on what really matters.
So how do you spot a vanity metric? In my experience, there are six distinct types. But I’d love to hear if you’ve found more.
If you’re currently reporting on a number that fits any of these categories, why not stop and see what happens? My bet is very little. Use the time to do something more fruitful, like thinking or resting. If someone notices and complains, challenge them on why. This is a healthy first step to a positive reporting culture.
Type 1: Selective measurement
A perfectly good metric can be ruined through selective measurement. This means applying bias to when or who you measure, rather than what.
A typical example: say you are an insurance company that uses Net Promoter Score (NPS) as a key indicator of customer satisfaction. However, you send your NPS surveys only to customers who choose to renew their insurance at the end of the year. Surprise, surprise, your customers appear to all be really happy.
But in doing this you deliberately exclude everyone who cancelled their policy after a bad experience, balked at the price of their renewal, or was unlucky enough to have to make a claim and endure the stress and frustration that process inevitably entails.
So what have you learned? People who like you, like you? Vanity metric.
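To make the bias concrete, here is a minimal sketch with made-up survey scores. NPS is the standard calculation (percentage of promoters scoring 9–10, minus percentage of detractors scoring 0–6); the customer groups and numbers are purely illustrative.

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical scores out of 10, split by customer group.
renewers = [9, 10, 9, 8, 10, 9]  # customers who chose to renew
churned = [2, 4, 3, 6, 1]        # cancelled, claimed, or balked at the price

print(nps(renewers))             # survey only renewers: NPS of 83
print(nps(renewers + churned))   # survey everyone: NPS of 0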
Type 2: Nice Round Numbers (NRNs)
NRNs are so common it’s really quite surprising. Usually in the form of a target, an NRN is any number picked because it sounds good or looks nice in a report.
Have you ever been given a target like “20% more sales” or “£1m in extra revenue”?
It would be entirely reasonable to ask “Why 20%? Is this feasible? Is it based on a sensible movement from your current baseline?”
Probably not. It just sounds good. Vanity metric.
Type 3: Numbers that only go up
Type 3 metrics are more insidious: they can initially appear valuable, and only reveal themselves to be useless as they change over time.
If a metric you’re reporting on only ever goes up, it’s almost certainly pointless.
For example, say you’re tracking the total number of visitors to your website, or total impressions on a search engine. Assuming your business is viable, those numbers only ever climb.
A measure is far more useful as a contextual metric: a rate or a ratio. The number of visitors per month, or excess share of voice (ESOV), are both more sensible metrics for properly tracking business health.
If the numbers just go up, you get a warm feeling but nothing else. Vanity metric.
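A quick sketch of the difference, using made-up monthly visitor counts:

```python
# Hypothetical monthly visitor counts for a struggling site.
monthly_visitors = [1200, 1350, 1100, 900, 850]

# The cumulative total only ever goes up, whatever happens.
cumulative = []
running_total = 0
for v in monthly_visitors:
    running_total += v
    cumulative.append(running_total)

print(cumulative)        # always climbing: [1200, 2550, 3650, 4550, 5400]
print(monthly_visitors)  # the monthly rate reveals a decline since month two
```

The cumulative line would look healthy on any dashboard; the rate shows the business shrinking.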
Type 4: Desperate measures
Have you ever been asked to look through the data and find some good news? If so, you’ve hit on Type 4: Desperate measures.
In this case, rather than deciding in advance on what metrics to report on (and then reporting on them regardless of how it looks), the metrics of interest are chosen at each reporting stage depending on what looks good and/or interesting.
Besides being unscientific on the part of the reporter, changing the reported metrics confuses the reportee and makes it unclear which numbers (if any) are actually important.
Setting out a lean measurement framework before starting reporting doesn’t take long, but goes a long way to ensuring your reporting is honest.
Don’t cherry pick after the fact. Vanity metric.
Type 5: Goodhart’s law
I’ve referred to Goodhart’s law before in a previous post, but here it is again:
“When a measure becomes a target, it stops being a good measure.”
— Economist Charles Goodhart
In this scenario, a perfectly good metric is corrupted by those attempting to manipulate numbers to make the outcome look better.
There are many ways a metric can be corrupted in this way, but take the semi-hypothetical example of a country reaching a net zero carbon target. The aim of this target is clearly to reduce emissions in or from that country.
However, a cynical political leader may choose to continue producing as much carbon as before and simply offset those emissions by ‘choosing not to fell’ a forest. Whether that forest was ever really at risk of being felled is unclear. On paper, the numbers suggest the target has been hit, but they do not reflect the true effort (or lack thereof) made to meet the goal.
Gaming the system to make your numbers add up? Vanity metric.
Type 6: Selective framing
Say a new medicine has hit pharmacy shelves. If I were to tell you that taking this new medicine doubled your risk of a nasty side effect compared to a competitor brand, would that put you off?
What if I were to tell you that in absolute terms, the risk had increased from 1 in 100,000 people to 2 in 100,000? Would that change your mind? Yes, it’s doubled, but the absolute risk really hasn’t changed much.
If a newspaper reported that 40% of people in the UK supported a particular political party, does that sound like a lot? What if it was reported as 4 in 10? What if it was described as 26 million people? These are of course all ways of presenting exactly the same information, but framed differently.
It’s possible to make any statistic appear better or worse depending on how you frame it.
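The medicine example above is just arithmetic, which makes the two framings easy to see side by side. The risk figures are the hypothetical ones from the example:

```python
# Two framings of the same hypothetical side-effect risk.
old_risk = 1 / 100_000   # competitor brand
new_risk = 2 / 100_000   # new medicine

relative = new_risk / old_risk   # 2.0 -> "the risk has DOUBLED!"
absolute = new_risk - old_risk   # one extra case per 100,000 people

print(f"Relative framing: {relative:.0f}x the risk")
print(f"Absolute framing: {absolute:.3%} more people affected")
```

Both numbers are true; only one would make a headline.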
As with Type 4, choose how you will report on a metric ahead of time. Don’t change the framing to suit any particular narrative, or it’s a vanity metric.
Honourable mention: Bad dashboards
Reporting is time-consuming and therefore expensive (and as discussed above, often of dubious merit). Dashboards are often pushed as the remedy to this cost: provide all the numbers in a live or semi-live format and let stakeholders read at their leisure.
And yet, dashboards don’t solve the problem of vanity metrics. If anything, they add yet more noise. A metric being on a dashboard doesn’t make it ‘better’ or any less susceptible to bias. In fact, metrics are often chosen for dashboards in part based on their availability rather than their utility or insight.
How often have you seen a dashboard graph that looks like this?
Or like this?
Do you feel informed?
For the record though, I don’t agree with Jared Spool here that dashboards are inherently junk. The medium is less important than the insights on offer.
If your dashboards aren’t useful, and you really want one, you could always get a new dashboard. One prioritising trends and context over bald numbers and inscrutable graphs.
Conclusion: Focus on what matters
Finding the signal amidst all this noise is getting harder and harder. We need to stop feeding the fire.
When it comes to building confidence, clarity is worth a thousand datapoints. Especially datapoints like these.