Grab Feature: Call Ratings

Masturah M.
4 min read · Sep 20, 2023

Role: Lead Designer
Deliverable: In-app feature
Launched in: Singapore, Malaysia, Cambodia, Indonesia, Myanmar, the Philippines, Thailand and Vietnam
Impact: Understanding the issues users face with calls and saving manpower cost

We didn’t know what caused calls to fail. We had to narrow down the possibilities to make sure our engineers fixed the right issues.

Current experience of post-call survey

Previously, we had a binary Good/Bad choice. While this was not a bad experience, we couldn’t aggregate and track average call quality, so we could only react when there was a massive uptick in ‘bad’ ratings. We also hypothesized that many users dismissed this survey because the call was neither exceptionally good nor bad.

A 5-star rating would help us better understand, over time, what average call quality looks like.
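To make the difference concrete, here is a minimal sketch (purely illustrative, not Grab’s actual code) of how 1–5 star ratings could be rolled up into a weekly average that we can monitor, which a binary Good/Bad vote does not allow:

```kotlin
// Hypothetical sketch: aggregating 1–5 star call ratings into a per-week
// average. Names and structure are illustrative assumptions, not Grab's
// production implementation.
data class CallRating(val stars: Int, val weekOfYear: Int) {
    init { require(stars in 1..5) { "stars must be between 1 and 5" } }
}

fun averageByWeek(ratings: List<CallRating>): Map<Int, Double> =
    ratings.groupBy { it.weekOfYear }
        .mapValues { (_, weekRatings) -> weekRatings.map { it.stars }.average() }

fun main() {
    val ratings = listOf(
        CallRating(5, weekOfYear = 37),
        CallRating(3, weekOfYear = 37),
        CallRating(1, weekOfYear = 38),
        CallRating(4, weekOfYear = 38),
    )
    // Prints {37=4.0, 38=2.5} — a dip in the weekly average is easier to
    // spot and trend than a spike in "bad" votes from a binary survey.
    println(averageByWeek(ratings))
}
```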

Final design

By giving users options on how to improve the audio, we can understand where the problem lies, and how to fix it quickly.

1 to 5 star rating

The copy for each rating, from 1 star to 5 stars, was considered carefully. After several rounds of usability testing, and weighing how “bad” users perceive 1 star to be against how “good” 5 stars feels, we landed on “Terrible”, “Bad”, “Okay”, “Clear audio” and “Crystal clear”.

On our end, we decided that anything below “Clear audio”, or 4 stars, would mean that something had gone wrong. For that reason, 1–3 stars show “What went wrong with this call?” while 4 stars prompts “How can we improve the quality?” This nuance in the copy conveys to users that we understand the call quality was poor.
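As a rough sketch of that logic (assumed, not Grab’s production code; the behaviour for 5 stars is my assumption, since the write-up does not specify it), the labels and the conditional follow-up question could be wired up like this:

```kotlin
// Hypothetical sketch of the copy logic described above; names are
// illustrative, not Grab's actual implementation.
val starLabels = mapOf(
    1 to "Terrible",
    2 to "Bad",
    3 to "Okay",
    4 to "Clear audio",
    5 to "Crystal clear",
)

// 1–3 stars acknowledge that something went wrong; 4 stars asks how to
// improve. The 5-star branch (no follow-up) is an assumption.
fun followUpQuestion(stars: Int): String? = when (stars) {
    in 1..3 -> "What went wrong with this call?"
    4 -> "How can we improve the quality?"
    else -> null
}
```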

Changing ratings

Keeping the stars and the reasons on the same page has several benefits:

  1. Reminds users what they rated, so they know how to answer the next section
  2. Allows users to change their rating while keeping their selection

Explorations

Before getting to the final solution (MVP), here are some of the explorations along the way.

❌ Free text field: While it would be really valuable to understand exactly what happened during the call, sorting through free-text feedback would take more man-hours.
❌ Full-screen survey: From previous learnings shared by other teams, users are more likely to drop off from a full-screen task when they did not intend to start that task themselves. As this survey is an automatic popup (not a CTA the user tapped), we decided that keeping the screen they came from, and are going back to, visible behind it is a better experience. Seeing where they initially were gives them the space to decide whether it is urgent to return to the task at hand, or whether they can spare a few seconds for the survey.

I explored letting users select multiple reasons for their rating. However, as you can see on the right, if a reason is too long, the bubble either gets cut off or wraps onto a different line.
❌ Since the reasons will need to be edited or added to over time, this is not a scalable solution.
❌ Selecting multiple reasons would muddle the initial (MVP) results.

❌ I also explored a solution where the bubbles would sit on separate lines, but usability testing showed that it was not clear to users that these were multi-select options.
❌ Visually, these bubbles would work better centre-aligned, but this would go against our design system, and would look awkward if some were single words and others were phrases.

Explorations for Future iterations

We always design for both the MVP and the future.

If we need more detailed feedback on call quality, we can always add an ‘Others’ option with an accompanying text field.

Originally published at https://www.missmasturah.com.

