The Magic Question That Unsucks* NPS

Josh Seiden
Sense & Respond Press
Dec 10, 2017 · 6 min read

What do you do when leadership asks you to improve the NPS (Net Promoter Score) of your product or service?

NPS, if you’re not familiar with it, is a standardized questionnaire used to measure customer satisfaction. It asks customers, “On a scale of zero to 10, how likely are you to recommend our product or service to a friend, partner, or colleague?” (For a more thorough explanation of NPS, start here.) And it spits out a simple numerical score that you can track over time.
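The scoring rule itself is simple enough to state in a few lines. Respondents who answer 9 or 10 count as promoters, 0 through 6 as detractors, and 7 or 8 as passives; NPS is the percentage of promoters minus the percentage of detractors, so it ranges from −100 to +100. A minimal sketch in Python:

```python
def nps(scores):
    """Compute a Net Promoter Score from 0-10 survey responses.

    Promoters score 9-10, detractors 0-6; NPS is the percentage of
    promoters minus the percentage of detractors (range -100 to +100).
    """
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Six promoters, two passives, two detractors -> NPS of 40
print(nps([10, 9, 9, 10, 9, 9, 8, 7, 3, 6]))  # prints 40
```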

Last week, I worked with a team facing just this challenge, and honestly, they were stuck. One of the things that makes NPS appealing is that simple numerical score you can track over time. But that simplicity is also a liability. What does a score of “50” mean? What about “70”? How do I get that number up? That’s where the team was stuck — what were they going to do that would influence NPS?

Doesn’t NPS Suck?

Indeed, NPS is one of those things that people either love or hate. Consider this Tweet from Jared M. Spool:

Hard to tell from this Tweet what Jared thinks of NPS.

Now I’m not here to argue that NPS is either good or evil, because for many teams, that’s a moot point. If you’re working in a company and you’ve been handed a mandate to improve NPS, you’ve got to figure out what to do with that mandate. You can’t just say, “NPS sucks!” drop the mic, and walk away. So here’s a way you can turn that vague mandate into an actionable plan.

Leading Indicators vs. Trailing Indicators

First, you need to understand that NPS is a trailing indicator. It tells you how well you’ve done after you’ve done it. It has no predictive power: it can’t tell you what you should do in order to increase customer satisfaction. For that, you need to identify your leading indicators.

Here’s an example. Let’s say you’re an eCommerce company. At the end of the day, you measure sales, revenue, profitability, and customer satisfaction. These are all trailing indicators. (What we call “Impact Metrics” in Lean UX.) They tell you if things went well or not. But they don’t predict future sales, or future satisfaction. So what can you measure that predicts increased sales?

Perhaps at this eCommerce company, you observe that when people read reviews on your site, they are more likely to buy things. So the rate at which people read reviews is a leading indicator. Knowing this, you’d want to do everything in your power to increase the rate of review reading. Or maybe you discover that when people save an item into their shopping cart, they are more likely to proceed to checkout. In this case, moving things into the cart becomes a leading indicator. You can measure the rate at which people do this, and then you can start changing things in order to encourage this behavior.

The critical word here is behavior. Your leading indicators are customer behaviors that you can measure, and that you can influence through design, copy, promotion, etc.
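To make this concrete, here’s a toy sketch of measuring one of those behavioral rates. The session records are invented for illustration; in practice you’d pull them from your analytics. It compares the purchase rate of visitors who read reviews against those who didn’t:

```python
# Hypothetical session records: did the visitor read a review, and did they buy?
sessions = [
    {"read_review": True,  "purchased": True},
    {"read_review": True,  "purchased": True},
    {"read_review": True,  "purchased": False},
    {"read_review": False, "purchased": True},
    {"read_review": False, "purchased": False},
    {"read_review": False, "purchased": False},
]

def rate(rows, key):
    """Fraction of rows where `key` is True."""
    return sum(r[key] for r in rows) / len(rows) if rows else 0.0

readers     = [s for s in sessions if s["read_review"]]
non_readers = [s for s in sessions if not s["read_review"]]

print(f"review-read rate:            {rate(sessions, 'read_review'):.0%}")
print(f"purchase rate (readers):     {rate(readers, 'purchased'):.0%}")
print(f"purchase rate (non-readers): {rate(non_readers, 'purchased'):.0%}")
```

If readers consistently buy at a higher rate, the review-read rate is a candidate leading indicator: a behavior you can measure today and try to influence.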

From Trailing to Leading Indicators with The Magic Question

So when you’re trying to improve NPS, the magic question you need to ask is, what are the customer behaviors that predict satisfaction and thus lead to high NPS? This is a question that you can address in concrete terms.

The magic question: what are the customer behaviors that predict satisfaction and thus lead to a higher NPS?

With my client recently, we used this question as our starting point.

First, identify baseline behaviors

The first step was simply to understand current customer behavior. To do that, we created a customer journey map: an end-to-end, step-by-step picture of the steps customers take as they interact with my client’s service.

Customer Journey map: identifying the behaviors in the system

My client runs a two-sided marketplace, so we created the map with three swim lanes: seller behaviors, buyer behaviors, and organization/system behaviors. This map is important because it establishes the facts and the raw material we have to work with: the behaviors of the people and systems that make up the platform.

Then, identify boosters and blockers

With that mapped out, we then went back through the map and asked two questions. What behaviors at each step predict success and satisfaction? And what behaviors at each step predict failure and dissatisfaction? We wrote the success factors on green sticky notes and the failure factors on red ones.

Annotating the map with successful behaviors we want to encourage and obstacles we want to remove.

For example, when buyers and sellers meet in person early in the deal, things go better throughout the process. We also noticed that buyers and sellers get stuck at one particular step in the process. (Let’s call this “obstacle x” for the sake of this article.)

With these insights, we were able to ask actionable strategic questions: how might we encourage buyers and sellers to meet in person earlier in the process? And how might we eliminate obstacle x, which is causing buyers and sellers to get stuck?

Goals Transformed

Now we have questions that inform concrete work. Instead of the vague instruction to increase NPS, we have a much more actionable set of prompts: we want to increase the rate at which buyers and sellers meet early in the process, and we want to decrease the rate at which buyers and sellers encounter obstacle x.

Both of these goals are framed around leading indicators, and both of them are expressed in terms of very specific and measurable rates of behavior.

Finally, because we’re not yet certain that any of this work will actually impact NPS, we want to express all of this as a testable hypothesis, as follows:

We believe that if
we increase the rate at which buyers and sellers meet early in the process,
it will lead to more successful transactions (as measured by X) and
higher user satisfaction (as measured by NPS).

And now we’re all set up to start designing and testing solutions to see if our hypothesis is right or wrong.
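When the results come in, you need a way to judge whether the hypothesis held. One common approach (not specific to this client; the cohort sizes and counts below are invented) is a two-proportion z-test comparing transaction success rates between the group that got the change and a control group:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z-statistic for comparing two success rates, using a pooled standard error."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical numbers: 120 of 200 "early meeting" pairs transact successfully,
# versus 90 of 200 control pairs.
z = two_proportion_z(120, 200, 90, 200)
print(f"z = {z:.2f}")  # |z| > 1.96 ~ significant at the 5% level; here z ≈ 3.00
```

A clearly significant lift supports the hypothesis; a flat result tells you this particular behavior wasn’t the lever you hoped, and you go back to the map.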

This Works for Any Trailing Indicator

One final point: you can use this method any time you’re handed a trailing indicator. Asked to increase sales? Use the magic question: what are the behaviors in the system that predict higher sales, and how can we encourage those behaviors? Asked to increase revenue? Same question: what are the behaviors that lead to higher revenue, and how can we encourage those?

* Oh Yeah, That Asterisk

Does NPS actually suck? I’m just going to step aside and let others have that fight. But maybe start here with Jared’s thread (and the responses) on Twitter.

What Do You Think?

How does your team do this? How do you translate NPS into actionable plans? Do you think the method might help? Why or why not? I’d love to hear your thoughts in the comments below.

Want to read more articles like this? Follow Sense & Respond Press here on Medium, subscribe to our mailing list, or check out our short, practical books on product management, innovation, and digital transformation. You can also hire me to help your team work through problems like this, create strategy, and translate strategy into action.
