Managing Churn with NPS

This afternoon I spoke with my brother-in-law about managing customer churn and the degree to which NPS is a worthwhile tool for the job. He works for a Fortune Fifty company that is solving the exact same problems I’m solving.

NPS Refresher

If you’ve not heard of Net Promoter Score (NPS), here’s your brush-up: it is the one customer-relations measure used across industries to assess the degree to which folks love you (the vendor). Customers are asked to respond on a scale of 0–10 to the question, “How likely is it that you would recommend our company/product/service to a friend or colleague?”.

Designed by (and trademarked to) Bain and Company, it took the world by storm in the early 2000s when the Harvard Business Review picked up the scoring framework.

In summary: respondents who score 6 or below are detractors, those who score 9 or 10 are promoters, and those in the middle (a 7 or 8) are passives. Subtract the percentage of respondents who are detractors from the percentage who are promoters – ignoring the passives entirely – and you end up with a score between -100 and +100. Industry standards exist, but +16 is an assumed benchmark and +50 is excellent.
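
To make the arithmetic concrete, here’s a minimal sketch of the calculation in Python (the function name and the sample list of 0–10 responses are just for illustration):

```python
def nps(scores):
    """Compute Net Promoter Score from a list of 0-10 survey responses."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    # Passives (7s and 8s) count toward the total but toward neither bucket.
    return 100 * (promoters - detractors) / len(scores)

# 4 promoters, 3 passives, 3 detractors out of 10 responses -> NPS of +10
print(nps([10, 9, 9, 9, 8, 8, 7, 6, 4, 2]))
```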

Many companies then use this score to reach out to promoters to drive referral volume and work through the detractor list with the goal of deferring churn or preventing it entirely.

The Heartbeat

First, the NPS survey comes with some serious baggage we’ve got to get out of the way. Getting a statistically significant sample is hard, and there are significant biases baked into how the tool is implemented: how is the question presented, who sees it (e.g. logged-in users only, all paying users, or some other set), is an incentive implied, and so on.

When I assess NPS scores, I review them on a rolling periodic window. Scores older than that period – for example, one month – don’t factor into my review for the current period. I’m effectively building NPS cohorts and asking, “how did the NPS cohort of June-2016 perform relative to previous months?”. This is distinct from another important question, “what is the NPS for customers added in June-2016?”.

By rigidly defining the time-buckets for the NPS score, I’m able to assess our progress over time, and get an NPS heartbeat.
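
As a sketch of what that bucketing might look like – assuming each response is stored with a date and a 0–10 score (the field layout and sample data here are hypothetical):

```python
from collections import defaultdict
from datetime import date

def nps(scores):
    # Same helper as the earlier sketch.
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

def monthly_heartbeat(responses):
    """Group (response_date, score) pairs into month buckets and compute NPS per bucket."""
    buckets = defaultdict(list)
    for response_date, score in responses:
        buckets[(response_date.year, response_date.month)].append(score)
    # Scores from outside a month never bleed into that month's number.
    return {month: nps(scores) for month, scores in sorted(buckets.items())}

responses = [(date(2016, 5, 12), 9), (date(2016, 5, 20), 6),
             (date(2016, 6, 2), 10), (date(2016, 6, 18), 8), (date(2016, 6, 25), 9)]
print(monthly_heartbeat(responses))  # {(2016, 5): 0.0, (2016, 6): 66.66...}
```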

Intra-period Comparative Performance

Besides comparing performance between periods, we can A/B test by carving out a control group and an experiment group. This is incredibly useful if you’re treating the experiment group differently – perhaps to assess the impact of certain initiatives you’re running.

The challenge we run into here is whether we can drive enough responses from the experimental group for its NPS to be statistically meaningful. For a small org, just shoot for more than 30 responses and call it good. Of course, this means you probably need to present the question to roughly 5x that many customers, understanding that many will dismiss it or, for one of the reasons above, never see it at all.
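
As a back-of-the-envelope sketch of that sizing – assuming roughly a 20% response rate, which is purely an illustrative number you’d replace with your own data:

```python
import math

def invites_needed(target_responses, response_rate=0.2):
    """Estimate how many customers must see the survey to hit a response target."""
    return math.ceil(target_responses / response_rate)

# Aiming for 30+ responses at an assumed 20% response rate:
# show the question to roughly 150 customers (the 5x figure above).
print(invites_needed(30))  # 150
```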

At the end of the first period, through the life of the experiment, and beyond, you can compare NPS scores to see whether your efforts have paid off. Do your customers love you more now? If the number is ticking up, that’s great.

Relating This to Churn

I agree that it’s kind of cheap to equate NPS with a churn metric, but the measurement and rigor of experimentation still stand. The best way to understand whether your initiative decreases churn is to just measure churn. But business is all about touchy-feely, forward-leading indicators. And I think that’s a good thing.

NPS is simply a forward-leading indicator with the premise that customers who promote you are less likely to churn. I’m not vouching for the truth or validity of that statement. Though we’d certainly expect promoters – barring churn due to “death or marriage” (going out of business or being acquired) – to stick around for a while.

The beautiful thing about this experiment is that we can test a number of initiatives at once; we’re not limited to one experimental group. With good data and a commitment to the initiatives, we can even assess the complementary effects programs might have on each other.
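
A minimal sketch of that multi-group comparison, assuming each response is already tagged with the group it belongs to (the group labels and scores are hypothetical):

```python
from collections import defaultdict

def nps(scores):
    # Same helper as the earlier sketches.
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

def nps_by_group(tagged_responses):
    """Compute NPS per group from (group, score) pairs, control included."""
    groups = defaultdict(list)
    for group, score in tagged_responses:
        groups[group].append(score)
    return {group: nps(scores) for group, scores in groups.items()}

tagged = [("control", 7), ("control", 9), ("control", 5),
          ("onboarding-emails", 9), ("onboarding-emails", 10), ("onboarding-emails", 8),
          ("quarterly-check-ins", 9), ("quarterly-check-ins", 6), ("quarterly-check-ins", 10)]
print(nps_by_group(tagged))
# {'control': 0.0, 'onboarding-emails': 66.66..., 'quarterly-check-ins': 33.33...}
```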

Ultimately, NPS answers a tangential question, a proxy for “how likely are you to churn”. In a relatively short period of time, with limited budget and with a standard tool, you can quickly gauge the productivity of your initiatives and prioritize the winners.