# Microsoft’s metric: Net User Satisfaction

Net User Satisfaction (NSAT) is one metric Microsoft IT uses to evaluate a product. Learn how to calculate it and why it might be better than Net Promoter Score (NPS).

# Learn by example

At its core, NSAT is a forced-choice survey system that yields a score from 0 to 200; the higher, the better. We’ll use the same sample question for the examples below: “**Thinking about your experience in the last 3 months, rate your overall satisfaction with [product]**.”

The user is given 4 choices (with 2 optional for a **total of 6 choices**).

*Very Satisfied*, *Somewhat Satisfied*, *Somewhat Dissatisfied*, and *Very Dissatisfied* are mandatory. *Don’t Know* and *Not Applicable* are optional, but you should probably include them.

We compare the percentage of *Very Satisfied* respondents to the percentage who are either *Somewhat Dissatisfied* or *Very Dissatisfied*. Let’s find out how.

## Example 1: Everyone loves your product

Sample results — Very Satisfied: 85; Somewhat Satisfied: 0; Somewhat Dissatisfied: 0; Very Dissatisfied: 0; Don’t Know: 15; Not Applicable: 37.

Rule 1: Throw out *Don’t Know* and *Not Applicable*.

That leaves a total of **85 valid responses** (85 + 0 + 0 + 0).

Rule 2: Calculate the *Percentage of Very Satisfied* (%VSAT) and the *Percentage of Dissatisfied* (%DSAT).

%VSAT = 85 / 85 × 100 = **100**

%DSAT = (0 + 0) / 85 × 100 = **0**

Rule 3: Find the difference, add 100, and round.

(100 − 0) + 100 = **200, a perfect score**!
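The three rules can be sketched as a small Python function (the function and parameter names are my own, not from any official Microsoft tooling):

```python
def nsat(vsat, ssat, sdis, vdis):
    """Compute an NSAT score (0-200) from the four mandatory response counts.

    Don't Know and Not Applicable responses are simply not passed in,
    which is Rule 1 (throw them out).
    """
    valid = vsat + ssat + sdis + vdis           # Rule 1: valid responses only
    pct_vsat = vsat / valid * 100               # Rule 2: %VSAT
    pct_dsat = (sdis + vdis) / valid * 100      # Rule 2: %DSAT
    return round(pct_vsat - pct_dsat + 100)     # Rule 3: difference + 100, rounded

print(nsat(85, 0, 0, 0))  # Example 1 → 200
```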

Adding 100 prevents a negative score, as you’ll see in the next example.

## Example 2: Everyone hates your product

Sample results — Very Satisfied: 0; Somewhat Satisfied: 0; Somewhat Dissatisfied: 57; Very Dissatisfied: 23; Don’t Know: 9; Not Applicable: 12.

Rule 1 leaves us with a total of **80 valid responses** (0 + 0 + 57 + 23).

Rule 2 leaves us with **0 = %VSAT** and **100 = %DSAT**.

Rule 3 leaves us with a score of **0, total failure!** ((0 − 100) + 100 = 0)

## Example 3: A realistic example

Sample results — Very Satisfied: 54; Somewhat Satisfied: 28; Somewhat Dissatisfied: 15; Very Dissatisfied: 30; Don’t Know: 22; Not Applicable: 0.

**Rule 1** applied: 54 + 28 + 15 + 30 = **127 total valid responses**.

**Rule 2**:

%VSAT = 54 / 127 × 100 = **42.52%**

%DSAT = (15 + 30) / 127 × 100 = **35.43%**

Rule 3: 42.52 − 35.43 = 7.09, and 7.09 + 100 = 107.09, which rounds to **107**.
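Checking Example 3’s arithmetic directly, in plain Python:

```python
valid = 54 + 28 + 15 + 30                 # Rule 1: 127 valid responses
pct_vsat = 54 / valid * 100               # Rule 2: ≈ 42.52%
pct_dsat = (15 + 30) / valid * 100        # Rule 2: ≈ 35.43%
score = round(pct_vsat - pct_dsat + 100)  # Rule 3: rounds to 107
print(score)
```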

# Bringing it together

There is a caveat when publishing NSAT results: you should also include the exact question, the response rate, the number of completed responses, the sample size, and (optionally) the margin of error.

Using our 3rd example:

NSAT Score: 107 using question “Thinking about your experience in the last 3 months, rate your overall satisfaction with [product]”.

Response rate of 41% with 149 completed responses out of 360 invited.
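The response rate in that summary is just completed responses divided by invitations, using the figures from the example:

```python
invited, completed = 360, 149
response_rate = completed / invited * 100   # ≈ 41.4%, reported as 41%
print(f"Response rate of {response_rate:.0f}% "
      f"with {completed} completed responses out of {invited} invited.")
```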

So why would you use this?

## Controversy around NPS

There are many arguments that NPS is neither predictive nor useful. I will set those aside and simply say that NSAT is another way to measure things. I think the value of NSAT lies in the simplicity of its survey: with NPS you choose from an 11-point scale (0 through 10), whereas with NSAT you choose from 4 options. Humans aren’t very good with numbers, and I think you’ll see more accurate results with a system like NSAT.

Every metric has its downfalls, whether it’s respondents gaming the system, the results creating tunnel vision, or something else. I think both of these systems make the same point: it’s dangerous to ignore your unhappy customers. NSAT is another way to find out exactly who those people are.
