We all love it when the quant and qual align, but what about those other times, when they seem at odds? For example: the surveys are in, the clickstream data has been analyzed, and you’re feeling confident. Then as you compare notes with your teammates, you realize that the recommended next steps based on UX research and data science are poised to send the business in two very different directions.
This was the challenge we found ourselves working to resolve as a user researcher and data scientist with Customer Insights Research supporting the same product team at Microsoft. What seemed like a conflict ended up leading us to deeper insights, a closer working relationship, and a better outcome for our customers—but getting there was a multi-step process.
Step 1. Confront your data discrepancy
Our product team was sunsetting the older version of an app in favor of one that provides accessibility for all users. To help our stakeholders understand what our customers needed in the new version, researchers had conducted user studies, interviews, and surveys, and analyzed in-app feedback. Caryn, a researcher, was listening to what our customers were saying: too many of the features they enjoyed in the older app were missing from the new app.
The user research recommendation, based on this analysis? Fill the feature gaps from the older app, or customers will not transition over.
Meanwhile, Sera, a data scientist, conducted a cohort analysis with clickstream data to understand what our customers were doing in the older version of the app and how that impacted their transition to the new version. Based on the qualitative feedback, she expected to see customers who used features only available in the older app abandoning the new app. But the analysis showed that they weren’t.
The data science recommendation at this stage? Since customer retention in the new app doesn’t correlate with feature use in the older app, focus on other vital parts of the user journey to help people transition.
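A cohort analysis like Sera's can be sketched in plain Python. This is a minimal, illustrative version: the event tuples, user IDs, and feature names are made up, and the real analysis ran over clickstream telemetry at much larger scale. It shows the core comparison, whether users of a given old-app feature were retained in the new app at a different rate than everyone else:

```python
# Hypothetical clickstream events as (user_id, app, feature) tuples.
# Real telemetry schemas are richer; this is just the shape of the idea.
events = [
    ("u1", "old", "tags"), ("u1", "new", "home"),
    ("u2", "old", "tags"),
    ("u3", "old", "home"), ("u3", "new", "home"),
    ("u4", "old", "home"), ("u4", "new", "home"),
]

def retention_by_feature(events, feature):
    """Compare new-app retention between old-app users who did and
    did not use `feature`. Returns retention rate per group."""
    old_users = {u for u, app, _ in events if app == "old"}
    new_users = {u for u, app, _ in events if app == "new"}
    feature_users = {u for u, app, f in events
                     if app == "old" and f == feature}
    groups = {
        "used_feature": feature_users,
        "did_not": old_users - feature_users,
    }
    # Retention = share of each group that shows up in the new app.
    return {name: len(g & new_users) / len(g)
            for name, g in groups.items() if g}

print(retention_by_feature(events, "tags"))
```

If the two rates come out similar across features, feature use in the old app doesn't predict new-app retention, which is what Sera's first pass suggested.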
Research and data science had arrived at opposing suggestions. Now what?
Step 2. Resist the urge to champion your own data
At this stage, it would have been easy for each of us to double down on our opposing viewpoints. If we had presented both sets of results and asked our general program manager to choose between recommendations, at least one of us would have had the satisfaction of knowing we influenced the product. But how could our stakeholders and leaders be confident they were making the best data-driven decision if we forced them to choose between quant and qual?
In a way, mixed-methods research is an exercise in getting comfortable with conflict and finding reconciliation instead of a “winner.” Happily, we each realized this and resisted the urge to champion our own perspective. We asked for the time we needed to investigate further, and our product team accommodated.
Step 3. Dive into your data dissonance
Next, we brainstormed different ways to collect and view our data that might resolve the seeming discrepancy.
Diving into the qual, Caryn investigated which feature gaps were causing people to say they wouldn’t use the new version of the app. Some of the feedback was about features that were actually built into the new app: customers perceived them to be missing because they were in different locations or had different user-interface flows.
Diving into the quant, Sera took a closer look at who was transitioning completely. She added a new category to the cohort analysis: new-app-only retention. A customer who used both the new app and the old app counted as retained in the new app, but not as retained in only the new app. She found that although customers did use the new app, many would only do so while simultaneously using the old one. Furthermore, usage of specific features in the older app did correlate with whether customers used the new app alone.
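The new category amounts to classifying each user by the set of apps they used in the measurement window. A minimal sketch, with hypothetical user IDs and a simplified per-user view of the data:

```python
# Hypothetical per-user app usage over the measurement window.
usage = {
    "u1": {"old", "new"},  # dual user: in the new app, but hasn't let go of the old
    "u2": {"old"},         # never moved
    "u3": {"new"},         # complete transition
}

def retention_category(apps):
    """Classify a user by which apps they used."""
    if apps == {"new"}:
        return "new_only"       # the new, stricter notion of retention
    if "new" in apps:
        return "new_and_old"    # retained in the new app, but not new-app-only
    return "old_only"

categories = {user: retention_category(apps) for user, apps in usage.items()}
print(categories)
```

Under the original definition, "u1" looks retained; under new-app-only retention, only "u3" does, which is the distinction that surfaced the hidden dual-use population.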
Step 4. Synthesize data to align on a new recommendation
As we synthesized these new insights, we found overlap between Caryn’s list of perceived and actual feature gaps and Sera’s list of features that correlated with an incomplete transition. The overlap revealed which features were making it hardest for customers to transition. Some of these were genuinely missing from the new app, while others were only perceived to be missing.
Putting the pieces together, it looked like our customers would need the following to make a complete transition:
1. In-app onboarding help to find features that had been moved
2. A larger set of features
Only by providing both could we offer our customers a seamless, complete transition to the new app. Interestingly, our final recommendations weren’t so different from the original “conflicting” ones; only now we understood the subtleties of how they fit together.
When we shared our new recommendations with the product team, it turned out that some of the actual feature gaps we’d prioritized were impossible to bridge in the new platform. Thanks in part to our data, our stakeholders decided to continue supporting the older version of the app.
Step 5. Seek further opportunities to collaborate
We know just how disconcerting it can be when data sets compete. But as data and customer stewards, it’s our job to use these moments as opportunities to play off each other and get at deeper insights. Facing our discomfort head-on helped us build a better partnership.
Mixed-methods research has become a necessity for our UX work. It’s the friendship between qualitative and quantitative research that ultimately gives us the best chance to do what’s right for our customers.