Product development and the potential pitfalls of narrow KPIs

Tom Barbour
5 min read · Sep 5, 2018

Data drives the majority of our decision-making these days. It drives our consumption patterns too, surfacing content that's relevant to us based on our unique data fingerprint. However, in our love of and obsession with data, at what point do we lose sight of the fact that for every data event recorded there is a human behind the interaction?

As product managers, we constantly work with teams across the business and speak to customers to identify problems and discover solutions to them. It's popular for tech companies to organise themselves around small, autonomous teams to maintain speed and operate with the mentality of a startup. These teams typically have KPIs they're responsible for, and they set targets for moving those KPIs forward.

However, as companies grow and more teams get added to the business, there is a danger that teams become too narrowly focused on a KPI.

You start to look at data, both quantitative and qualitative, that narrowly relates to the KPI and the micro-problem you are trying to solve. What you may lose sight of is the fact that your customer does not interact with your product in such a narrow way. Let me give an example.

I've spent the last few years working in the fintech industry. You can start to have teams organise around a KPI such as customer contacts per transaction, the number of times customers interact with the transaction list, or the conversion rate for ordering and activating a debit card. Each of these teams is driving its own KPI forward, and each has a deep understanding of, say, customers ordering a card. What's missing is the fact that customers don't experience a product at this micro-transaction level. Customers have a job to be done, a task they need to complete, and that job might span all three of the KPIs above.

So why is it a problem to have your product teams structured in this way? Well, if you don't have anyone looking at the bigger picture, you end up with a disjointed user experience, one that fails to recognise that the customer is not using your product to perform three separate tasks; they are interacting with your product to perform one job. The job is only "done" for the customer when they have their card and have been able to transact with it, not at the point of ordering or activation.

And therein lies the problem. You have cracks or gaps in the experience where customers transition from activating their card to spending and checking their transaction list.

There's no way to measure that transition if you're looking solely at narrow KPIs, yet maybe the user experience feels bad, maybe it's a little clunky. Nobody in your business is looking at the transition because it's not a KPI they are responsible for or know how to measure. So you end up with gaps in your product: the overall experience feels off, but taken as a sum of micro-transactions it works fine, and the data you're looking at says it's working fine.

But you're not sure why customers are churning after 90 days, or why you have to acquire new customers just to replace the ones you lost. Maybe customers can't articulate it either; the overall experience just feels incomplete and inconvenient, but it's hard to put a number on.

As product and engineering scale in an organisation, I believe it's vitally important to maintain speed and agility, so I believe in empowering small teams to ship fast. There comes a tipping point, however, when you need systems in place to provide oversight across multiple teams.

The teams in the example above could sit within a wider product group, where big-picture visibility of the end-to-end customer experience is maintained. Even if sub-teams own narrower KPIs, they're part of a bigger group and aware of the end-to-end interaction, rather than getting lost in the weeds of their micro-KPI. Many companies are structured this way, Spotify being a good example as its squads model evolved.

However, many companies misinterpret the squads model, and you end up with a fragmented customer experience where well-made parts don't fit together as a whole.

It's about striking the balance between small, autonomous, fast-moving teams and big-picture oversight and leadership to steer those teams and ensure they aren't pulling in different directions. I also believe our obsession with data and metrics is hurting product development. Not because I don't value data or measuring what you build; it's that many teams define KPIs poorly and look at a select few events that don't represent the customer journey as a whole. I'm guilty of this, and I'm sure many other PMs out there are too.

The best teams I've worked with focus their KPIs around retention-based metrics and combine quantitative and qualitative insights to better understand their customers. For example: of the customers who activated their card and made a transaction, how many are still transacting 30 days later? This is a much better indicator of the health of your product than measuring the activation rate alone. Activation on its own tells you nothing about the long-term stickiness of your product; it merely tells you it was pretty easy to get going. It doesn't tell you whether your product is solving a problem in the long term.
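To make that concrete, here's a minimal sketch of how such a 30-day retention figure might be computed. The data shape, field names and cut-off are my own assumptions for illustration, not a reference to any particular product's analytics.

```python
from datetime import date, timedelta

# Hypothetical event data: when each customer activated their card,
# and the dates on which they transacted.
customers = {
    "alice": {"activated": date(2018, 6, 1),
              "transactions": [date(2018, 6, 2), date(2018, 7, 5)]},
    "bob":   {"activated": date(2018, 6, 3),
              "transactions": [date(2018, 6, 4)]},
    "carol": {"activated": date(2018, 6, 10),
              "transactions": []},
}

def day30_retention(customers, window_days=30):
    """Share of activated-and-transacting customers who transact again
    at least `window_days` after activation."""
    # Cohort: customers who completed the whole job at least once
    # (activated and made a transaction), not merely those who activated.
    cohort = [c for c in customers.values() if c["transactions"]]
    if not cohort:
        return 0.0
    retained = sum(
        1 for c in cohort
        if any(t >= c["activated"] + timedelta(days=window_days)
               for t in c["transactions"])
    )
    return retained / len(cohort)

print(f"30-day retention: {day30_retention(customers):.0%}")  # 50% for the sample data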

Ultimately, picking the right KPIs for your product and finding the correct blend of insights is more art than science, and I believe it shifts as your organisation evolves. What is relevant for a seed-stage company with 10 people may not be true when you scale to 100 people and have 100,000 customers. You may then need to evolve again when you reach 1,000 employees and 5 million customers. The key is to stay humble and be aware that what worked yesterday may not work today.

Tom Barbour

Senior Product Manager @TripAdvisor. Previously: Head of Product @Monese, Technical PM @Shazam. Writing about Product Management. All views my own.