There are many variations on the Golem legend, but it usually goes something like this: a village is under frequent attack, so they make a superhuman creature out of clay and put it to work defending them. But because the powerful Golem can only follow orders unthinkingly, it eventually becomes a threat to the very people who created it. The Golem is the folkloric tradition upon which Frankenstein’s monster was built, and it carries a powerful warning: the things we create to protect us can become our destroyers.
In today’s data-driven business world, we face an increasing demand for hard numbers and quantitative scores. And in our efforts to meet this demand, we have created a few monsters. Net Promoter is one of them.
We’ve Created a Monster
Once upon a time, there was a village of marketers under attack from marauding executives. These executives demanded hard numbers, quantitative scores, and “real data.” “Where’s the ROI?” they shouted, wielding org charts and business cards from outside vendors.
The marketers tried to defend their village with insights drawn directly from customers, but this did little to assuage the marauding executives. So the marketers banded together and created something out of impressive-sounding correlations and business speak — something stronger and more powerful than any mere human customer. They wrote on its forehead words from their sacred text, The Harvard Business Review, and it came to life in service of defending the legitimacy of their work.
The marketers’ creation did exactly as instructed. It always produced a hard number, a quantitative score that would keep the marauding executives at bay. But the marketers’ creation had no way of knowing whether its power was being used for good or evil; it simply followed instructions, score after score, company after company. And soon, this creation became so powerful that the marketers themselves could no longer control it.
As monsters born of hubris go, Net Promoter has an on-point origin story: it was introduced to the world at large in a 2003 Harvard Business Review article that promised to reveal “The One Number You Need to Grow.” Net Promoter is a simple and highly prescriptive process, steadfast and unflinching in its repeatability and monolithic certainty. Here’s how it goes: you ask your customers “how likely are you to recommend [product or service] to your friends or colleagues?” You then provide an eleven-point scale ranging from 0 to 10. To calculate your NPS or “Net Promoter Score,” you count all the 9s and 10s (“promoters”), subtract the count of 0s through 6s (“detractors”), and express the difference as a percentage of the total number of people who answered the survey, including the 7s and 8s (“passives”).
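For the arithmetically inclined, the whole system reduces to a few lines of code. Here is a minimal sketch of the standard NPS calculation as described above (the function name and example scores are illustrative, not from any official implementation):

```python
def nps(scores):
    """Compute a Net Promoter Score from a list of 0-10 survey responses.

    Promoters are 9s and 10s; detractors are 0s through 6s.
    Passives (7s and 8s) count toward the total but cancel out
    of the numerator.
    """
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Ten hypothetical responses: 5 promoters, 3 passives, 2 detractors.
# (5 - 2) / 10 = 30% — a "score" of 30.
print(nps([10, 9, 9, 10, 9, 8, 7, 7, 6, 3]))  # 30.0
```

Note how much the formula throws away: a survey full of 8s scores exactly zero, the same as a survey split evenly between 10s and 0s.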
If you’ve used any product or service in the last ten years, you’ve probably found yourself presented with a survey like this. And if you’ve worked in or near a marketing department in the last ten years, you’ve probably fielded at least one question about Net Promoter. Net Promoter has become ubiquitous: the de facto standard for measuring customer loyalty.
This very ubiquity is a huge part of what makes Net Promoter so attractive. It’s a system with an official-sounding name that consistently produces a measurable quantitative output. The score it produces can be easily benchmarked against that of any other company. And this is why, no matter how many times it is critiqued and debunked, Net Promoter only seems to grow in power and pervasiveness. The primary value of Net Promoter is not how effectively it predicts customer loyalty, but rather how effectively it covers your ass. If you choose to use Net Promoter, it is unlikely that you will be held personally accountable for whether or not the resulting score accurately tracks growth. Net Promoter helps you fight off difficult conversations about the relative strengths and limitations of quantitative modeling — and that’s exactly the problem.
When Data Models Don’t Match Social Models
Think about the last time you recommended a product to somebody. Was it a qualified recommendation (“If you’re looking for an Android phone and don’t care too much about battery life, check out the Nexus 5X”)? Would you necessarily recommend the same product or service to a friend and a person from work? What would be the difference between a product that warrants an “8” response (classified as “passive”) and a “9” response (classified as “promoter”)?
These are all important, messy, qualitative questions which Net Promoter seeks to obviate by asking a single high-level, broadly applicable question. But by oversimplifying the multifaceted and highly variable human context around recommendation, Net Promoter falls into one of the biggest pitfalls of the “data-driven” age: it puts forth a data model that does not accurately reflect the underlying social model. When’s the last time you thought to yourself “I am likely to recommend this product to my friends or colleagues” as opposed to something like, “I can’t wait to tell my friend Tricia about this new slow cooker because I know that she doesn’t like to cook things on the stove”?
These little details may seem inconsequential, but they are often the core qualitative differentiators that lead one product to success and another to failure. In its attempt to consistently quantify a complex social dynamic across a divergent and far-reaching set of products, services, and audiences, Net Promoter presents a dangerous but seductive fantasy: that a single question or system can solve messy human problems and give us all the information we need. And the more individuals and companies accept a system that fails to answer mission-critical questions about how and why customers recommend things, the more these questions are collectively dismissed and devalued.
What’s In a Number?
A few months ago, I started doing a bit of very informal qualitative research around Net Promoter, having short chats with a few folks who had just filled out NPS surveys. In just a few conversations, I found that in many cases the 7s and 8s — classified as “passive” by the NPS system itself — might actually be the most actively engaged recommenders.
While many of the 9s and 10s I spoke to were simply big fans of the kind of product or service being offered, it was the 7s and 8s who often knew the most about the product or service at hand. Sometimes, they could think of a specific set of people for whom they would recommend the product or service, leaving them hesitant to answer with a 9 or 10. Sometimes, there were specific things they preferred about competing products or services, leaving them hesitant to answer with a 9 or 10.
These are the people who Malcolm Gladwell might call “mavens” — people who are highly motivated to offer qualified, contextual recommendations. And these are, often, the people whose recommendations we trust the most.
To put this in a more personal context: I am a fanatical recommender of New York restaurants. But there is almost no New York restaurant that I would recommend to everybody. There are some places I recommend for romantic dinners, some for low-key family meals. There are some places I would never recommend to a vegetarian, and some places I would never recommend to somebody without a tolerance for spicy food. Friends take my restaurant recommendations seriously, in large part because these recommendations are based in specific contextual needs. And there is practically no restaurant to which I would give a 9 or 10 in a Net Promoter survey.
Is this peculiar to me? Is this peculiar to restaurants as a type of product? Were the short conversations I had anomalous or biased? Is a qualified but highly informed recommendation ultimately worth less than a broad but uninformed recommendation? I have no idea — but these are the exact kinds of questions I would want to dig into before reducing something as important as customer loyalty to a single number.
A New Approach to NPS
None of this is to suggest that Net Promoter is entirely without merit. As a set of rough guidelines, a high-level benchmarking tool, or even a way to identify subjects for qualitative research, it can be very powerful. Here are some tips for making sure that Net Promoter (and other data golems) work for you, rather than crushing you under their mighty golem feet:
- Track Your Results
Rather than taking NPS at face value, Airbnb actually measured the predictive power of Net Promoter against their own proprietary data sets. They found that, while NPS alone was a decent predictor of future booking, it was not as strong a predictor as the model they built using their own comprehensive review data. They summarized: “Given the extremely low number of detractors and passives and the marginal power post trip LTR [Likelihood to Recommend] has in predicting rebooking, we should be cautious putting excessive weight on guest NPS.” As Colette Alexander pointed out on Twitter, Net Promoter may be a particularly ineffectual approach for companies that broker experiences and transactions between end users.
- Change the Question
If you don’t think the extremely broad question used in Net Promoter is right for your audience, just change it. If, for example, you’re a corporate training company using NPS to assess a class or workshop, you could try asking “how likely are you to recommend that a colleague in your department attend this training?” Or, you could ask “how likely are you to recommend that a manager from another department offer this training to their staff?” Formulate your question based on what you actually want to know. As Erika Hall points out in her fantastic piece “On Surveys,” asking the right question(s) is a huge challenge and takes a lot of time and effort.
- Talk To Your Customers
Net Promoter, like all quantitative models drawing on qualitative data, works best when you actually understand how and why your customers might recommend your product. Focused qualitative research may help you realize that some of your “passives” are actually your biggest advocates, or that many of your “promoters” just wanted to make a survey on a screen go away quickly. You may in fact discover that Net Promoter is a perfect system for your product or service, that it needs a few tweaks to be effective for you, or that it is broadly inapplicable to your work. But in any case, you will have committed to understanding your customers’ behavior before committing to a data model for quantifying that behavior.
Of course, taking any and all of these steps would erode the infallibility of Net Promoter as an objective and impartial set of instructions. Just like their legendary predecessors, data golems can only be destroyed when people are willing to forgo both their destructiveness and their usefulness. To defeat them, we must be willing to push back against their certitude, to ask messy qualitative questions, to insist that not everything can be captured by a number or a score.