HEARRRT — Trade Me’s metrics framework

Trent Mankelow
Published in Trade Me Blog
Jun 7, 2017 · 4 min read

In my first month at Trade Me, one of my first tasks was to come up with a high-level, easy-to-understand, easy-to-apply metrics framework that could be used to gauge the success of product initiatives for any Trade Me business at any level (from massive project to individual feature).

This felt like a daunting task, until I realised that lots of others had been here before, especially in the start-up space. So, stealing the best from the metrics for pirates and Google’s HEART model [PDF], I created a shameless mash-up called HEARRRT. It looks like this:

The HEARRRT model is essentially designed to tease out and stimulate ideas for measuring product success. Each letter stands for a different category of metric:

  • Happiness: A measure of user satisfaction and/or success. This can be attitudinal or behavioural.
  • Engagement: The level of user involvement, typically measured via behavioural proxies such as frequency, intensity, or depth of interaction over some time period.
  • Audience: The number of people coming to the site or app from various channels.
  • Retention: The rate at which existing users are returning.
  • Referral: Whether users tell others about their Trade Me experience.
  • Revenue: Goals that relate to users paying us somehow.
  • Tech: Improvements to the way we do things ‘under the hood’.

Here’s how you use it:

  1. It starts with choosing which of the HEARRRT categories to focus on. Usually this is pretty easy — is the feature or product designed to improve customer satisfaction (H)? Or make more money (R)? Or reduce tech debt (T)? Sometimes you might choose more than one of the letters.
  2. Next, you move from left to right and select a “higher-level goal” we are trying to achieve. After all, we should be choosing metrics that help us measure progress towards our overall business priorities.
  3. Then, you brainstorm all the potential ‘signals’ that indicate whether we are making progress towards the goal. This might be things like NPS or referrals or traffic or load time.
  4. Finally, we choose one or two of the signals and set a target value we want to hit.
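
To make the four steps a bit more concrete, here’s a purely illustrative sketch (in Python) of what the output might look like for a made-up feature. The category, goal, signal, and target below are invented for the example — they aren’t real Trade Me goals or numbers.

```python
from dataclasses import dataclass

# A made-up example of the end result of the four steps above:
# one HEARRRT category, a higher-level goal, a chosen signal, and a target.
# None of these values are real Trade Me goals or targets.

@dataclass
class SuccessMetric:
    category: str   # the HEARRRT letter(s) the metric sits under
    goal: str       # the higher-level goal it supports
    signal: str     # the signal chosen from the brainstorm
    target: str     # the specific value we want to hit, including a timeframe

metric = SuccessMetric(
    category="Engagement",
    goal="Buyers come back to the site more often",
    signal="Average sessions per buyer per week",
    target="Lift from 2.0 to 2.5 within 3 months of launch",
)

print(metric)
```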

This last step, choosing the signals and setting targets, is the trickiest bit. The final product success metrics should be:

  • Small in number. You can’t expect every number to go up-and-to-the-right, so it’s important to focus on the two or three things that you care about above all the others.
  • Realistic. This is one of the hardest ones to get right. How do you know how many report views to aim for if you’ve never built an insights product before? One idea is to use what data you do have. For example, you might be able to estimate the proportion of visitors who will be interested in downloading a report. Or you could set a target based on the cost of the project: if the cost of the project divided by the number of reports downloaded means that each report costs $100, then maybe the target is too soft (or you shouldn’t build it at all!). There’s a rough sketch of this arithmetic after this list.
  • Specific. There should be no wiggle room, no room for interpretation. This includes the timeframe for hitting the metric. Increase sessions by 20% — great. But by when? On launch day? Within 3 months?
  • Easy to measure. I’ve often seen things like Net Promoter Score as a success metric, only to find out afterwards that NPS is something we only ask about every six months. Your success metric is ideally something that’s at your fingertips.
  • Sensitive to product changes (in other words, easy to prove). If we say a project’s success measure is to increase sold items by 2%, it’s important to have clear line-of-sight between the product change and the increase in sold items. If we can’t hand-on-heart say that a number moved up or down because of a product change we made, then it’s not a good metric.
  • Related to a business goal. Finally, it always pays to do a final sense check to make sure there is a clear link back to the overall business goal too.
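
As an aside on the “Realistic” point above, here’s a minimal sketch of the cost-per-report check, again in Python and with entirely hypothetical figures.

```python
# A back-of-the-envelope version of the cost-per-report check described
# under "Realistic". All of the figures here are hypothetical.

project_cost = 50_000            # what the insights product cost to build
reports_downloaded = 500         # how many reports were actually downloaded
too_expensive_per_report = 100   # the point at which each report looks too dear

cost_per_report = project_cost / reports_downloaded
print(f"Each downloaded report effectively cost ${cost_per_report:.2f}")

if cost_per_report >= too_expensive_per_report:
    print("The target was probably too soft (or the project may not have been worth building).")
```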

One final point — getting better at writing success criteria requires a feedback loop. That’s why we insist on the Product Performance Report, because it’s in reporting back on actual product performance that you realise that your metrics were unrealistic, or didn’t specify a date, or that you can’t accurately measure a change.
