UI uplift of the in-app NPS survey design for Moneyview

The best way to acquire new customers is through recommendations. But how can you check your company’s ability to win recommendations?

Elizabeth Thomas
7 min read · Jan 19, 2024

Net Promoter Score (NPS) is a management tool that gauges the loyalty of a firm’s customer relationships. It is an alternative to traditional customer satisfaction research and is claimed to correlate with revenue growth.

NPS is calculated from the answer to a single question on a 0–10 scale: 0–6 are detractors, 7–8 are passives, and 9–10 are promoters. The score itself is the percentage of promoters minus the percentage of detractors.
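In code, the standard NPS calculation (percentage of promoters minus percentage of detractors) looks roughly like this; a minimal sketch, with an illustrative function name and sample data:

```python
def nps(ratings):
    """Net Promoter Score from a list of 0-10 ratings:
    % promoters (9-10) minus % detractors (0-6), rounded."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

# 6 promoters, 2 passives, 2 detractors out of 10 responses
print(nps([10, 9, 9, 10, 9, 9, 8, 7, 3, 5]))  # → 40
```

Note that passives count in the denominator but not in either percentage, which is why an all-passive sample scores 0.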

What’s the problem?

Upon reviewing our analytics as well as multiple other sources, we found that many users do not understand what the NPS survey rating means.

The following screens show the previous designs.

The current designs for NPS

Insights from the existing design for the NPS scale

  • Users are confused by the 0–10 scale. Some users rate 1, thinking it is the best rating for apps
  • The scale did not feel continuous. Breaking it across two lines made it hard to read as a single scale
  • The screen lacked any visual cue showing that it was a rating screen
  • There was no visual interaction (such as emojis) reflecting the user's selection
  • The CTA was always enabled, which caused users to submit blank responses (as they were not reading)

Defining business and design requirements

After getting a download from the product managers, my design manager clarified the goals of the company leadership, along with the requirements and metrics of the project.

Main Design Goals:

Enhancing Clarity: To help users understand this screen as a rating screen

Minimising Ambiguity: Address the confusion associated with 0/1 ratings, ensuring users accurately evaluate apps without unintentional mis-ratings

Reducing errors: Providing better clarity on what to write in the additional comment box

Asking for confirmation: if someone gives us a detractor score, ask via a pop-up whether they are sure, making clear that it is a low score. We will not force them to rate us as good, but we want them to be sure if they are rating us as bad.

Adding a callback option to understand their problem: When someone gives us a low score (0–2), we aim to understand their problems better so we can prevent similar issues in the future.
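The two follow-up goals above (confirmation for detractor scores, callback offer for very low scores) amount to a simple branching rule. A minimal sketch with hypothetical step names, not the actual app logic:

```python
def post_rating_follow_ups(score: int) -> list:
    """Follow-up steps to show after a user submits a 0-10 rating."""
    steps = []
    if score <= 6:
        # Detractor score: confirm they are sure, without forcing a change
        steps.append("confirm_low_score_popup")
    if score <= 2:
        # Very low score: also offer a callback to understand the problem
        steps.append("offer_callback")
    return steps

print(post_rating_follow_ups(1))  # → ['confirm_low_score_popup', 'offer_callback']
print(post_rating_follow_ups(9))  # → []
```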

Business goals or success metrics:

Improving our NPS score: the current NPS is 52–54; we need to take it above 60.

Improving NPS attempts: increase the number of people who give us an NPS rating every day.

Reducing the number of people who give good feedback but low ratings (currently around 25–30 per day).

Research

I started by researching the competitors, including direct and indirect ones such as Jupiter, Navi, Zomato, Swiggy, Amazon, Booking.com, etc. This was to learn more about the industry’s best practices.

Competitor analysis

Exploration and Ideation

From research and discussions with the team, I found the following points to keep in mind while designing:

  1. 0 = Least Likely | 10 = Most Likely
  2. No pre-selection, to avoid biasing the user
  3. Use slightly bigger UI elements to make selecting a score easier
  4. Should visually look good
  5. Make sure it looks like a rating/graded scale
  6. Should have a mechanism for visual feedback based on the rating given (positive/negative)
  7. The scale should feel tappable
  8. Previous user data showed many incorrect ratings due to text-heavy messages
  9. User data showed that visual elements are more helpful than written ones. A visual banner was needed to show that it was a rating screen, to make it more relatable to users (competitive analysis)

Potential solution

  1. Slider — with a slider, the user can drag forward and backward to give a rating. Along with the drag, emojis come up to show visual feedback
Different stages for slider

This solution was explored but concluded to be not very user-friendly, as dragging can compromise the accuracy of the score. Additionally, compared to the other options, tapping is quicker than dragging.

2. The three-point feedback — where 0–6 (detractor) ratings are clubbed in the first emoji, 7–8 (passives) are clubbed in the second emoji and 9–10 (promoters) are shown by the third emoji

3 point

3. The 11-point feedback — a scale of 0 to 10 is shown for the users to tap on, and the colour changes as the user taps on the scale to give visual feedback

11 point feedback

4. Vertical scale — as the user scrolls up and down the rating changes with colour

Horizontal scroll

This solution was not considered, as it does not align with market standards for NPS and there can be issues with the accuracy of the scores.

Solutions

The product managers were inclined towards the 11-point feedback solution, as it is the standardised NPS feedback survey, while I thought the 3-point feedback scale was more effective, as it is easily understood and impactful.

Hence we moved forward with making two versions of the NPS survey for A/B testing.

Final screens

I came up with 2 sets of screens and the team decided to get them tested with the actual users.

The solutions included

- A banner to indicate the screen was an NPS screen

- Emojis as the visual interaction for user feedback

- Reasons for users to select from, in the case of detractors (1–6) and passives (7–8)

- High tappable affordance for ratings

1. The 11-point scale

2. The 3-point scale

We ran an A/B test with the 11-point scale (80% of users) and the 3-point scale (20%).
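An 80/20 split like this is commonly implemented with deterministic per-user bucketing, so each user always sees the same variant across sessions. A minimal sketch, assuming a string user ID (not Moneyview’s actual assignment code):

```python
import hashlib

def assign_variant(user_id: str, split: float = 0.8) -> str:
    """Deterministically bucket a user: `split` fraction see the 11-point scale."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "11pt" if bucket < split * 100 else "3pt"
```

Because the assignment hashes the user ID rather than rolling a fresh random number, repeat views of the survey never flip a user between variants, which keeps the test results clean.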

Results

After the A/B test of the 3-point and 11-point NPS scales, we observed that

  • users give us better ratings on the 3-point scale (it is easily understood)
  • a few users still select “1” as a rating while giving us good feedback
  • users were confused by the text on the screen, as the responses contained names (presumably the people they would recommend the app to)

The last few months’ data gave us the results below, which show these issues in detail.

Results from A/B testing

Even though the 3-point scale had a better rating, the PM and the team were not inclined towards it, as it is not a standardised NPS survey.

Additionally, scores 1–6 (detractors) are clubbed into the first option, scores 7–8 into the second, and scores 9–10 into the third. This led to an incorrect way of calculating NPS: the exact scores could not be recovered, and the grouping itself nudged users towards inaccurate ratings (a dark pattern).

When we called ~500 people, 45% of them had given incorrect ratings due to a lack of understanding and, on re-rating, became promoters. Most of them had an education below the undergraduate level.

Iteration

I decided to make a few more changes to the 11-point survey screen:

I replaced the circles with stars to give better visual feedback. This way, the user can associate more stars with a better rating, alongside the scores.

Updated the title of the screen to give a clearer understanding, as the users were not relating to the question (due to most users’ educational background)

Added marking of scores (0/10) to provide better visual feedback on what the users are rating

Reduced the visual clutter by showing the additional comments box only when the “Others” option is selected

Added an option for users to receive a callback from Moneyview for assistance

Final UI

Learnings

Users hate completing surveys, as they are an extra step before closing the app. Some basics to follow while designing surveys:

  • Keep it short
  • Make it about their experience
  • Give them visual feedback to be user-friendly

Wrapping it UP!

I appreciate you sticking with me and reading this far. I hope it was helpful.

I’m super grateful to the wonderful team at Moneyview that helped me throughout this project!

Feel free to reach out to me on LinkedIn with any questions about this case study.
