5 Ways to Capture User Sentiments and Issues to Improve Products

Thitapa Shinaprayoon
AirAsia MOVE Tech Blog
6 min read · Jun 17, 2022


by Thitapa Shinaprayoon, Thanesh Ravindran, Ayush Thakur, Shien Sim, Eric Kosasih

Image credit: Flickr

Flight booking hits a roadblock? Your lunch arrives late? So many issues to fix, so little time!

How do we prioritise our precious resources for the highest impact fixes?

We have built a mechanism to detect these user issues, quantify them, and prioritise them in product development.

The Product Design team has been capturing user feedback with the feedback widget shown below, and analysing it quarterly across flights, food delivery, and other products to identify opportunities to improve user experience and conversion.

Users can submit their feedback about their experience using the product

The Product Design team uses a combination of user journeys, data analytics, and quantitative data to show what, when, and how a user issue occurs. But this piecemeal approach requires a lot of effort and coordination, and it comes with various challenges.

Challenges

  1. Large amount of qualitative data to be cleaned and analysed
  2. Difficult to quantify user issues and identify the prevalent issues
  3. Difficult to quantify the impact of user issues on user experience
  4. Difficult to understand context and root cause of the user issues
  5. Difficult to combine all of these data points to prioritise the product roadmaps

With these challenges to tackle, we embarked on a major collaboration between data scientists, researchers, designers, writers, and product managers. Here are the five things we did!

1. Automate data processing to clean and analyse a large amount of qualitative data

One of the first challenges we tackled was processing the large amount of user feedback more efficiently.

Prior to collaborating with the Data team, the Product Design team manually cleaned and tagged the comments into categories (functionality, reliability, usability, convenience, and others) to know which issue we should attend to first.

Categories used to tag comments for prioritisation

This manual tagging was very laborious. There could be 7,000 to 30,000 comments per quarter. So, the team could only tag 1,000 comments per quarter.

To automate this process, the Data team uses these tagged comments as labelled data to build the categorisation model. Pre-processing of the text data is done for both labelled and unlabelled data. Unlabelled data here refers to the new comments users submit.

The model uses an embedding module called the Universal Sentence Encoder. At a high level, the encoder converts a given sentence (a comment, in this case) into a 512-dimensional sentence embedding, and we compute the cosine similarity between labelled and unlabelled embeddings to assign each new comment a category.

In layman's terms, we use past feedback categorisation to automatically tag whether incoming feedback is related to functionality, reliability, usability, or convenience.
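To make this concrete, here is a minimal sketch of the categorisation step in Python, assuming TensorFlow Hub's Universal Sentence Encoder and scikit-learn. The category labels and example comments are illustrative only, not our production data or pipeline.

```python
import tensorflow_hub as hub
from sklearn.metrics.pairwise import cosine_similarity

# Load the Universal Sentence Encoder (512-dimensional sentence embeddings)
encoder = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

# Labelled comments from past manual tagging (hypothetical examples)
labelled_comments = [
    "The payment button did not respond when I tapped it",  # functionality
    "The app crashed while I was confirming my booking",     # reliability
    "I could not find where to change my flight date",       # usability
    "Too many steps just to add a meal to my order",         # convenience
]
labels = ["functionality", "reliability", "usability", "convenience"]
labelled_emb = encoder(labelled_comments).numpy()  # shape: (4, 512)

def categorise(new_comments):
    """Tag each new comment with the category of its most similar labelled comment."""
    new_emb = encoder(new_comments).numpy()
    sims = cosine_similarity(new_emb, labelled_emb)
    return [labels[i] for i in sims.argmax(axis=1)]

print(categorise(["Checkout froze after I entered my card details"]))
```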

Overview of Feedback Categorisation Model

This automation cut 4 days of data cleaning, manual tagging, and issue categorisation from each quarterly usability report.

2. Use RAKE to quantify user issues and identify prevalent issues

While automatic data processing has been tremendously helpful, we still spent a week manually tagging common, specific issues as use cases to understand how and when they occurred before prioritising resources to improve the user experience and conversion.

So, the Data team uses the machine learning model to efficiently identify:

  • What kind of issues users have while booking flights or ordering food delivery
  • How big each issue is (how many users are facing it)

The Data team uses RAKE (Rapid Automatic Keyword Extraction) to extract keywords from both positive and negative feedback, so we can identify and quantify what users are talking about.
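As a rough illustration, keyword extraction with RAKE can be done with the open-source rake-nltk package. The comments below are made up; in practice this runs over the cleaned feedback from the previous step.

```python
from collections import Counter
from rake_nltk import Rake  # requires the NLTK stopwords corpus to be downloaded

comments = [
    "Payment page keeps loading forever during flight booking",
    "Food delivery arrived late and the tracking map was stuck",
    "Flight booking failed at the payment page again",
]

rake = Rake()  # uses English stopwords and punctuation as phrase delimiters
keyword_counts = Counter()
for comment in comments:
    rake.extract_keywords_from_text(comment)
    keyword_counts.update(rake.get_ranked_phrases())

# Most common keyword phrases across all feedback
print(keyword_counts.most_common(5))
```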

Overview of Keywords & Emoji extraction model

Then, we visualise these keywords in word clouds and charts. This approach helps us identify prevalent issues and narrows down specific use cases from negative and positive feedback more efficiently.
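The word cloud itself can be rendered with something like the wordcloud package; the keyword frequencies below are hypothetical stand-ins for the RAKE counts.

```python
import matplotlib.pyplot as plt
from wordcloud import WordCloud

# Hypothetical keyword frequencies (in practice, counts from the RAKE step)
keyword_counts = {"payment page": 120, "flight booking": 95, "late delivery": 60}

cloud = WordCloud(width=800, height=400, background_color="white")
cloud.generate_from_frequencies(keyword_counts)

plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.show()
```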

Word cloud from keywords
Chart to quantify the issues

Now, we're working on a more insightful approach that tags user feedback and app reviews into meaningful topics instead of extracted keywords, so the topics better reflect the context and different touchpoints of the user experience across travel, e-commerce, and operations.

3. Use emojis to understand how users really feel about products

Along the way, we also discovered that users like to use emojis to express their experiences and emotions when they write feedback, especially among mobile users. Sometimes there are no words at all, just 😡 or 👍.

We realised that we could be ignoring these user sentiments, which could be a great indicator of user experience. So, the Data team came up with an approach to translate emojis into words to analyse the sentiments, and to visualise the emojis in word clouds.
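A minimal sketch of that idea, assuming the open-source emoji package (the sample feedback is made up):

```python
from collections import Counter
import emoji

feedback = ["😡😡", "great flight 👍", "food arrived cold 😞"]

# Translate each emoji into its textual name, e.g. "👍" becomes "thumbs up",
# so it can be fed into the usual text sentiment analysis
translated = [
    emoji.demojize(text, delimiters=(" ", " ")).replace("_", " ")
    for text in feedback
]
print(translated)

# Count emojis so they can be visualised like a word cloud
emoji_counts = Counter(ch for text in feedback for ch in text if emoji.is_emoji(ch))
print(emoji_counts.most_common())
```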

Word cloud to capture user sentiments
Chart to quantify sentiments

4. Combine experience rating with common keywords to quantify the impact of user issues on user experience

To further understand the impact of user issues on user experience, we map the most frequently mentioned keywords against the average experience rating of the comments that mention them. This approach helps us identify prevalent user issues that are associated with poor user experience.
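As a sketch, this mapping is essentially a group-by over keywords and ratings; the column names and numbers below are hypothetical.

```python
import pandas as pd

# Each row: a keyword extracted from a comment and that user's experience rating (1-5)
df = pd.DataFrame({
    "keyword": ["payment", "payment", "seat selection", "refund", "refund", "refund"],
    "rating":  [2, 1, 4, 1, 2, 1],
})

impact = (
    df.groupby("keyword")["rating"]
      .agg(mentions="count", avg_rating="mean")
      .sort_values(["avg_rating", "mentions"], ascending=[True, False])
)

# Keywords that are mentioned often AND have a low average rating are the
# prevalent issues with the biggest impact on user experience
print(impact)
```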

Impact of issues on user experience

We will be able to apply this framework to gain even more insights once we improve the specific issue categorisation across product verticals at airasia. These user insights can tremendously help product managers, researchers, designers, engineers, customer service, and operations identify opportunities to improve our products and services based on the challenges airasia users face.

5. Contextualise user issues to improve user experience and conversion

Knowing what the issues are and how many users encounter them is not enough. We need to understand how the issues occur and what causes them in order to improve the user experience and conversion.

Contextualise issues with user stories

To contextualise the problem, we have manually looked up the user flow and watched recordings of how users encounter the issues. But this is time-consuming, and it is difficult to find the root causes of technical issues (API errors, booking system errors, etc.) because we need to use different data sources and tools to describe the context in which the issues occurred and identify the root causes.

The Data, Product, and Product Design teams work closely together to make sure we're capturing the right data to contextualise user feedback and app reviews. We're also improving how we connect user feedback with the user journey and other forms of tracking, to give more context to how issues occurred and to detect events that lead to poor user experience.
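One simple way to picture this is joining feedback with event tracking on a shared session identifier, so a comment can be traced back to the technical events around it. The column names, events, and data below are assumptions for illustration only.

```python
import pandas as pd

feedback = pd.DataFrame({
    "session_id": ["s1", "s2"],
    "comment": ["Payment failed twice", "Could not apply voucher"],
    "rating": [1, 2],
})

events = pd.DataFrame({
    "session_id": ["s1", "s1", "s2"],
    "event": ["payment_submitted", "payment_api_error", "voucher_rejected"],
    "timestamp": pd.to_datetime(["2022-06-01 10:00", "2022-06-01 10:01", "2022-06-02 09:30"]),
})

# Attach the events from the same session to each piece of feedback, so a
# comment like "Payment failed twice" can be linked to a specific API error
contextualised = feedback.merge(events, on="session_id", how="left")
print(contextualised)
```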

Expand the framework to other product verticals and keep improving the workflow

We’re applying this user feedback framework to other digital products such as food delivery, grocery delivery, and ride-hailing so other products can use this tool to help them identify opportunities to improve user experience.

Researchers and designers can reduce the time spent on manual categorisation. Instead, they can focus on contextualising user issues, prioritising them, and designing better product experiences for users.

airasia super app

We’re hiring!

Does solving problems like the above excite you? Join us to help build airasia products and the super app as data scientists, researchers, designers, writers, and more at Airasia Careers.
