Case Study: Our chatbot is forgetting our users!
Background: What is Metabuddy?
Metabuddy is an AI tutor for language learning. What differentiates the Metabuddy app from other language learning apps is that we provide a hyper-personalized experience for our users — Metabuddy remembers who you are, what you like/dislike, and more!
This translates into a magical learning experience that is tailored to each individual user, leading to faster learning rates regardless of language skill level.
The Problem: Our chatbot is forgetting our users 🚨
One of the problems our team ran into was that, sometimes, Metabuddy would lose some of the information a user had provided. For those users, the experience turned from hyper-personalized into a generic one.
This was something we had to fix quickly. Our original approach was to run keyword searches and broad Elasticsearch queries, which yielded partial results that we had to review one by one. Soon, however, we realized that users reacted very differently to conversations where Metabuddy forgot their information. Some would explicitly say something along the lines of “you asked me this question already.” Others were more aggressive and said things like “you are a stupid chatbot.”
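To give a sense of why this was so limited, here is a simplified sketch of the kind of keyword filter we started with. The log schema and phrase list are illustrative only, not our production code:

```python
# Hypothetical sketch of the keyword filtering we started with.
# The session/message schema and the phrase list are illustrative,
# not our production code.
REPEAT_COMPLAINT_PHRASES = [
    "you asked me this already",
    "you already asked",
    "i told you that",
    "stupid chatbot",
]

def find_candidate_sessions(sessions: list[dict]) -> list[str]:
    """Return IDs of sessions whose user messages contain a known complaint phrase."""
    matches = []
    for session in sessions:
        for message in session["messages"]:
            if message["role"] != "user":
                continue
            text = message["text"].lower()
            if any(phrase in text for phrase in REPEAT_COMPLAINT_PHRASES):
                matches.append(session["id"])
                break
    return matches
```

A filter like this only surfaces sessions where users complain in words we anticipated, which is exactly why the results were partial and every hit still had to be read by hand.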
We needed an easier way to find and analyze the sub-types of conversations in which Metabuddy was forgetting our users.
The Solution: Leveraging Align AI for analysis
As an early Align AI design partner, our team connected our data source to the Align AI dashboard by following the SDK documentation provided (the integration took less than 5 minutes).
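For readers curious what that looked like in practice, here is a rough sketch only: the class and method names below are placeholders we made up for illustration, not the actual Align AI SDK API, so refer to the official SDK documentation for the real calls.

```python
# Placeholder sketch of a conversation-ingestion integration.
# The class and method names are hypothetical and NOT the real Align AI SDK API;
# see the official SDK documentation for the actual interface.
import os

class AnalyticsClient:
    """Stand-in for an analytics SDK client that ships chat sessions to a dashboard."""

    def __init__(self, api_key: str):
        self.api_key = api_key

    def ingest_session(self, session_id: str, messages: list[dict]) -> None:
        # In a real integration this would send the session to the analytics backend.
        print(f"Uploading session {session_id} with {len(messages)} messages")

client = AnalyticsClient(api_key=os.environ.get("ANALYTICS_API_KEY", "demo-key"))

# Forward each finished chat session (user and assistant turns) to the dashboard.
client.ingest_session(
    session_id="session-123",
    messages=[
        {"role": "user", "content": "I'd like to practice Spanish greetings."},
        {"role": "assistant", "content": "¡Hola! Let's start with 'buenos días'."},
    ],
)
```

Our own session format mapped onto this shape almost directly, which is why the integration took only a few minutes.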
With our conversations ingested, we started off with a simple natural language search query: “Find me sessions where the chatbot forgot our users.”
The results provided us with a great place to start our analysis.
To understand what types of sub-cases existed under the umbrella of “Metabuddy forgetting users,” we leveraged Columbus Copilot to help with our analysis.
Columbus Copilot provided a quick breakdown of the sub-cases that existed. One sub-case Columbus found was conversations where Metabuddy asked the user the same question multiple times. This repetition was a clear sign that Metabuddy had forgotten information the user had already provided!
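To make this sub-case concrete, a naive heuristic for flagging it could look like the sketch below. This is our own illustration of the pattern, not how Columbus Copilot works internally:

```python
# Illustrative heuristic only: flag a session if the assistant asks two
# near-identical questions. This is NOT how Columbus Copilot works internally.
from difflib import SequenceMatcher

def has_repeated_question(messages: list[dict], threshold: float = 0.9) -> bool:
    """Return True if two assistant questions in a session are near-duplicates."""
    questions = [
        m["text"].strip().lower()
        for m in messages
        if m["role"] == "assistant" and m["text"].strip().endswith("?")
    ]
    for i, first in enumerate(questions):
        for second in questions[i + 1:]:
            if SequenceMatcher(None, first, second).ratio() >= threshold:
                return True
    return False

example = [
    {"role": "assistant", "text": "What languages do you want to learn?"},
    {"role": "user", "text": "Spanish!"},
    {"role": "assistant", "text": "What languages do you want to learn?"},
]
print(has_repeated_question(example))  # True
```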
With the breakdown of sub-cases from Columbus, we could now take a deeper look at sub-optimal user experiences to make product improvements.
Next steps & Key learnings
We delivered our findings to the development team, who made changes to the prompt and orchestration of our AI models, reducing similar cases by a whopping 93% (as of October 6th, 2023).
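Without going into the specifics of our prompts, the general shape of the fix is to carry the facts a user has already shared into every model call, so the tutor has no reason to re-ask them. The function names and prompt wording below are a hedged illustration of that pattern, not Metabuddy's actual implementation:

```python
# Hypothetical sketch of the general fix pattern: inject remembered user facts
# into the system prompt on every turn so the tutor does not re-ask known
# information. Names and wording are illustrative, not Metabuddy's real code.

def build_system_prompt(user_profile: dict) -> str:
    """Fold stored user facts into the system prompt for each turn."""
    known_facts = "\n".join(f"- {key}: {value}" for key, value in user_profile.items())
    return (
        "You are Metabuddy, a personalized language tutor.\n"
        "You already know the following about this learner; "
        "never ask for this information again:\n"
        f"{known_facts}"
    )

profile = {
    "name": "Dana",
    "target language": "Spanish",
    "level": "beginner",
    "dislikes": "grammar drills",
}
print(build_system_prompt(profile))
```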
A couple of key learnings using Align AI:
- Speed & ease of search: With natural language search, we were able to find the conversations we were looking for very quickly.
- Time saved with Columbus Copilot: Columbus Copilot saved us significant time breaking the conversations down into sub-cases.
- Analysis required zero developer knowledge: We analyzed our conversational data without any developers or data analysts present, so anyone on our team can do the analysis themselves.
Moving forward, we’re confident that leveraging Align AI will allow us to gain a deeper understanding of how our users are interacting with Metabuddy.
Looking for more resources?
Read one of the guides below:
📄 Align AI 101: Getting started with AI-native data analysis →
📄 Align AI vs. the traditional product analytics stack →
link: https://tryalign.ai/