Why I Wouldn’t Recommend NPS (aka The Four Lies)
If you’re not familiar, NPS is a “Score” derived by asking users how likely they are to recommend your product or service, on a scale of 0–10.


Executives love it; good researchers hate it. Researchers who advocate for it, I do not understand.
I’ve been dealing with NPS for almost a decade, across companies and industries, including sitting in a room with the team that invented it, asking my questions (and getting my answers). Let me be clear… I am a Detractor.
So let me explain why, by debunking the four lies you have to accept in order to believe in NPS.
Lie #1 — You only need to ask one question.
Imagine you’re a product manager, and you’re trying to understand the current value of your product and have some sort of growth indicator. Your researcher tells you that you have a choice:
A) Should we ask only one question?
B) Should we ask a few questions?
Seems like a simple choice, right? More questions = more information. And six (or five, or even four) questions isn’t really that cumbersome. You can fit it on a single screen, and you can report it on a single slide.
Mathematically speaking, more questions also give you a better predictive model. A regression analysis can determine the strongest relationships between the answers you get and actual growth, and produce a weighted formula that’s optimized for your product.
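To make that concrete, here’s a minimal sketch in Python of what that looks like. The question names, the answers, and the growth figures are all synthetic and made up for illustration; the point is only the mechanics of fitting a weighted formula across several questions instead of leaning on one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical survey: each row is one account, each column the 1-5 answer
# to one question. These question names are illustrative, not prescriptive.
questions = ["overall_satisfaction", "likelihood_to_continue", "value_for_money",
             "ease_of_use", "support_quality", "likelihood_to_recommend"]
answers = rng.integers(1, 6, size=(200, len(questions))).astype(float)

# Hypothetical outcome: observed revenue growth (%) per account, simulated
# here so the example runs end to end.
true_weights = np.array([3.0, 2.5, 1.0, 0.5, 0.8, 0.2])
growth = answers @ true_weights + rng.normal(0, 2, size=200)

# Ordinary least squares: find the weights that best map answers -> growth.
X = np.column_stack([np.ones(len(answers)), answers])  # prepend an intercept
weights, *_ = np.linalg.lstsq(X, growth, rcond=None)

for name, w in zip(["intercept"] + questions, weights):
    print(f"{name:>24}: {w:+.2f}")
```

The fitted weights tell you which questions actually track growth for your product, rather than assuming up front that a single question does.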
Consider that even the people who invented NPS have stopped telling this particular lie. It started with them saying “Ask the NPS question, but also ask them why.” Nowadays, with something called “NPS2,” they’ve started bringing in other metrics as well.
Lie #2 — This is the only question you need to ask.
OK, mythical Product Manager, maybe you bought the first lie. Maybe you believe that only one question is enough. I don’t understand why, but for argument’s sake… let’s move on. Now you have to ask yourself an important question:
Is “How likely is it that you would recommend our company/product/service to a friend or colleague?” the best single question we can ask to predict growth?
On the surface, it makes sense. If users recommend you, you’ll get more customers, and more customers equate to more growth… right?
But let’s go below the surface a bit and see what we find…
Some users will never recommend any product or service because they lack self-esteem or self-confidence.
“I don’t feel comfortable making recommendations, my opinion isn’t that valuable.”
Some users will never recommend certain products or services because of what it says about them (I personally saw a lot of this with financial products).
“I don’t feel comfortable admitting I use your product.”
Some users will never recommend certain products or services because they don’t want to take responsibility for someone else’s outcomes (I also saw this a lot with financial products).
“I don’t feel comfortable giving other people that type of advice.”
When you get down to it, NPS isn’t just asking one question; it’s really asking three:
- How do you feel about the product?
- How do you feel about yourself?
- How do you feel about other people?
It’s these issues and others that have caused market researchers to prioritize other questions, ones that rate things like “overall satisfaction” or “likelihood to continue using,” which not only sidestep this problem but have also been proven time and time again to have higher predictive value for growth.
Personally speaking, I ran this regression analysis on a product of mine several years ago, and of the six questions we asked, “likelihood to recommend” had the lowest accuracy in predicting growth.
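If you want to run the same kind of comparison on your own data, here’s a rough sketch of the per-question version: regress growth on each question individually and compare the R² values. The data and question set below are synthetic stand-ins, not the actual study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in data: 1-5 answers to six questions and a growth figure
# per account. In this fake data only the first two questions drive growth.
questions = ["overall_satisfaction", "likelihood_to_continue", "value_for_money",
             "ease_of_use", "support_quality", "likelihood_to_recommend"]
answers = rng.integers(1, 6, size=(200, len(questions))).astype(float)
growth = 3.0 * answers[:, 0] + 2.0 * answers[:, 1] + rng.normal(0, 3, size=200)

def single_question_r2(x, y):
    """R^2 of a one-predictor least-squares fit of y on x."""
    X = np.column_stack([np.ones(len(x)), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ coef
    return 1 - residuals.var() / y.var()

for name, column in zip(questions, answers.T):
    print(f"{name:>24}: R^2 = {single_question_r2(column, growth):.2f}")
```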
Lie #3 — Ask your one question with an eleven-point scale.
So you believed lies #1 and #2. You’re now asking one question, and it’s “likelihood to recommend.” Now, how should you ask it? I think we can all agree that a Likert scale is the right way to go… but how many points should be on your scale?
Should we use an eleven-point Likert scale?
Do a simple Google search and you’ll learn some basic best practices for Likert scales:
- Always have a neutral midpoint (i.e., an odd number of possible responses).
- For unipolar ratings, five points is best (i.e., scales that go from “Not at all likely” to “Extremely likely”).
- For bipolar ratings, seven points is best (i.e., scales that go from “Extremely unlikely” to “Extremely likely”).
When these best practices aren’t followed, the chance that users start picking responses at random increases.
While the NPS scale follows the first rule, it completely breaks the second. The recommendation question is unipolar, so if we were following Likert-scale best practices we’d be asking it on a scale of 1–5, not 0–10.
Lie #4 — Calculate your final score this way.
So, you’ve bought into the first lie and decided to ask only one question. You’ve believed the second lie and picked the question that has been proven to have less predictive value than others. You then accepted the third lie and asked your question with a methodology that breaks best practices for Likert scales.
Now you’ve got one big final lie to swallow. Ask yourself this question:
Is the best way to calculate my final result to subtract the percentage of respondents who answered 0–6 from the percentage who answered 9–10?
Let’s apply a simple litmus test. In the world of statistics, have you ever heard of anyone using this calculation for any question other than NPS? Like when you’ve asked people to rate their agreement with a value-proposition statement, or how important a certain feature is?
In other words, if this way of calculating results was valid… wouldn’t it be used to calculate results on other likert scales? Wouldn’t it be taught in collegiate level statistics courses as a valid way of summarizing data?
Because it isn’t.
There’s also the weird anomaly that, statistically speaking, an even distribution of answers across all eleven options produces an NPS of roughly -45 (two options are Promoters and seven are Detractors, so 2/11 - 7/11 = -5/11 ≈ -45%), yet according to the people behind NPS, most industries average scores 50–100 points higher.
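For reference, the calculation itself is trivial. Here’s a minimal sketch in plain Python using the published 0–6 / 7–8 / 9–10 buckets, which also demonstrates the even-distribution score and the all-eights case I mention below.

```python
from collections import Counter

def nps(responses):
    """Net Promoter Score: % Promoters (9-10) minus % Detractors (0-6)."""
    counts = Counter(responses)
    total = sum(counts.values())
    promoters = counts[9] + counts[10]
    detractors = sum(counts[score] for score in range(7))
    return 100 * (promoters - detractors) / total

# A perfectly even spread across the eleven options lands around -45.
print(nps(list(range(11))))   # -45.45...

# A product that every single user rates an 8 scores exactly 0.
print(nps([8] * 100))         # 0.0
```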
[Chart: average NPS benchmarks by industry, as published by the NPS creators]
Lastly, this calculation assumes that anyone who responds with a 7 or an 8 has no positive value to the company. Personally speaking, I would kill to have a product that all of my users rated as an 8… even if that gave me an NPS of 0, which according to the chart above would put me right around Time Warner Cable.
In summary…
To believe that NPS provides you any value you have to believe that:
- It’s better to ask one question than many.
- It’s better to ask a question that has been proven to have lower predictive value than others.
- It’s better to ask the question using a methodology that breaks best practices.
- It’s better to calculate the score in a way that breaks best practices.
I personally can’t accept this many lies, and I’d encourage you to reject them as well. Oh, and you don’t believe me?
Reichheld (the inventor) has openly admitted that his findings “[do] not provide proof of a causal connection between NPS and growth.”
Yup, that’s right. While they still advocate its usage, the very people who invented NPS admit that its value is limited to being something rather than nothing.
I would hope for something better.