Mixed methods research for more effective feature prioritization

Sutong · Published in IBM Design · Jun 13, 2022 · 8 min read

How we combined in-depth user interviews with the Kano model survey to decide what is next to build or not to build

Photo by Jason Goodman on Unsplash

To build or not to build, that is the question

In the continuous product development life cycle, the product team faces the constant challenge of identifying and aligning on what features to build or not to build next.

Different stakeholders arrive with different views on which features are most important to change or build next. Design, development, and business each have their own considerations, so the decision has to balance a multitude of factors: user needs, technology resources, constraints, and business goals.

That leads us to the ultimate question: How do we deal with this challenge in product development and help our product team to prioritize features and determine what to build or not to build next?

That’s where you come in with your user experience researcher toolkit: bringing an evidence-based approach, learning alongside your stakeholders, and bringing clarity to which direction to take.

Combining qualitative methods with quantitative methods for feature prioritization

There are many feature prioritization frameworks that organize the process and quantify user needs, such as the Kano model or opportunity scoring. These frameworks rely heavily on quantitative analysis and are good at revealing what people think and do, along with how much they think and do things in certain ways.

However, the quantitative approach of such feature prioritization surveys does not shed light on the whys behind the data. Qualitative methods like user interviews and contextual inquiry can help the team learn and understand the reasons and meaning behind users’ perceptions and behaviors and gain insights into the underlying user needs.

So, let’s take the best of both worlds with a mixed-methods approach: combining qualitative and quantitative methods. This kind of triangulation helps fill in the gaps of each approach. It lets you cross-check whether the quantitative data and qualitative insights correspond to each other, and how they relate, so that you can weave them together to tell a richer and fuller story behind the data and insights.

Triangulating in-depth user interview insights with Kano model survey data

There are various qualitative and quantitative methods at your disposal. In this article, I will talk about combining user interviews with the Kano model survey for the specific project we had in the IBM Watson Media team.

First, we conducted stakeholder interviews and collected the input, assumptions, and questions from various stakeholders. By doing this, we were able to clarify and develop the research goals and questions which were about understanding user needs around a particular area in our product, finding out what features were important or unimportant for users, and why. We did this so that we could use the insights to determine if features 1, 2, and 3 should be built or not.

Given the research goals and questions, as well as considering the time and resources we had, we needed to select generative research methods that would allow us to know and understand users’ perceptions and attitudes as well as their current habits and experience when using products like ours.

Image: triangulation of qualitative data with quantitative data through mixed-methods research

Interviewing users is a great generative research method during the product strategy phase or in continuous product discovery. It allows probing for an in-depth understanding of users’ perceptions, motivations, experiences, routines, and habits, which can provide key insights to guide important decisions such as feature prioritization and feature ideation.

The Kano model, on the other hand, is an effective and efficient way to measure the impact of a feature’s presence or absence on user satisfaction. It also takes into account how evolution in the industry and among competitors, or the lack of it, shifts users’ perception of a feature from a delightful one to a basic one.

In the world of product development, it is easy to sink into feature creep in the name of staying competitive and increasing sales. The Kano model helps sort through the pile of feature requests and ideas to identify the ones that would have the greatest impact on user satisfaction, and hence ROI. It is a powerful way to quantify user expectations, helping the product team develop an experience-based roadmap rather than a feature-based one.

Doing a Kano model survey following generative research like interviewing users also greatly strengthens the product team’s confidence in the research insights and empathy with the users.

Now I will talk about each of the two methods in more detail.

User interviews

Once the methods are chosen, the next question is whether to start with the Kano model survey or with the user interviews. That depends on whether you already have a clear and complete list of features to study for prioritization.

The risk of running the survey first is that the data we get may not be very useful or complete if we don’t yet know the right questions to ask. That is why starting with user interviews can help us get a feel for the problem space and build a sound basis for developing a clear and complete list of features to include and validate through the Kano model survey.

Therefore, for this specific research project, we first conducted 10 in-depth user interviews to understand how people use the product currently and learn about other ways they solve the problem the product is intended for. From the interviews, we were able to not only gain deeper insights into users’ needs to inform design and development decisions but also have a better understanding of what questions to ask and which features should be included in the Kano model survey.

It turned out that some features had not originally been on the product team’s radar. Interviewing users surfaced features that were more pertinent to users’ experience and satisfaction than the ones the team was debating. We included the features uncovered through the interviews in the Kano model survey to validate them further with about 120 participants from the target audience.

Kano model

The Kano model is a framework developed by the Japanese professor Noriaki Kano in the 1980s. It uses two measures, 1) functionality provided and 2) customer satisfaction, to categorize how customers feel about the features of a given product.

Functionality going from None to Best
Image from The Complete Guide to Kano Model by Daniel Zacarias
Customer satisfaction going from Frustrated to Delighted
Image from The Complete Guide to Kano Model by Daniel Zacarias

The satisfaction level goes from being frustrated to being delighted; the functionality level goes from none to best considering how much a feature is being invested in or how well it is being implemented. Putting these two dimensions of satisfaction and functionality together, you will get five categories of how customers feel about certain product features: Must-be, Performance, Attractive, Indifferent, and Reverse.

Five categories of Kano Model
Image from The Complete Guide to Kano Model by Daniel Zacarias
  • Must-be: Features that are taken for granted, whose presence would not induce delight but whose absence would result in frustration
  • Performance: Features that enhance the product performance, and the more functionality we provide, the more satisfied customers will be
  • Attractive: Features that are not expected by customers. The presence of such features can increase satisfaction while their absence does not cause dissatisfaction or rejection of the product
  • Indifferent: Features whose presence or absence does not make a difference in customers’ reaction to the product and hence investing in providing functionalities of such features will not increase satisfaction
  • Reverse: Features that should be gotten rid of because their presence causes dissatisfaction

Following the Kano model, the survey asks a standardized pair of questions about each feature, approaching it from two opposing scenarios: how would you feel if you can ___ or have the ability to ___, vs. how would you feel if you cannot ___ or do not have the ability to ___?
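To make the mechanics concrete, here is a minimal sketch of how a single respondent’s answer pair maps to a category using the standard Kano evaluation table. The answer labels and table values follow common practice in Kano analysis (they are not specific to our study); note that the full table also includes a “Questionable” category for contradictory answer pairs, which the five-category summary above omits.

```python
# Standard Kano evaluation table (common formulation; labels are
# illustrative, not taken from our specific survey).
# Rows: answer to the functional question ("...if you CAN ___?");
# Columns: answer to the dysfunctional question ("...if you CANNOT ___?").
# A = Attractive, P = Performance, M = Must-be, I = Indifferent,
# R = Reverse, Q = Questionable (contradictory answers)
EVAL_TABLE = {
    "like":      {"like": "Q", "expect": "A", "neutral": "A", "live_with": "A", "dislike": "P"},
    "expect":    {"like": "R", "expect": "I", "neutral": "I", "live_with": "I", "dislike": "M"},
    "neutral":   {"like": "R", "expect": "I", "neutral": "I", "live_with": "I", "dislike": "M"},
    "live_with": {"like": "R", "expect": "I", "neutral": "I", "live_with": "I", "dislike": "M"},
    "dislike":   {"like": "R", "expect": "R", "neutral": "R", "live_with": "R", "dislike": "Q"},
}

def classify(functional: str, dysfunctional: str) -> str:
    """Map one respondent's answer pair to a Kano category."""
    return EVAL_TABLE[functional][dysfunctional]

# A respondent who would like having the feature and would dislike
# not having it sees it as a Performance feature.
print(classify("like", "dislike"))  # → P
```

In other words, no single answer classifies a feature; it is the combination of the “have it” and “don’t have it” reactions that reveals the category.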

With this model, we wanted to see which of the above-mentioned five categories each feature from our list would fall into, and whether the survey results corresponded with the narratives from the user interviews.

After collecting about 120 responses, we used existing calculation templates and platforms to run both discrete and continuous analyses and visualize the data. The results showed how customers feel about having or not having each feature, and they were in line with the insights from the user interviews.
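As a rough sketch of what those two analyses compute, assume each response for a feature has already been mapped to a category label (A = Attractive, P = Performance, M = Must-be, I = Indifferent). The discrete analysis takes the modal category; the continuous analysis computes the commonly used satisfaction (“Better”) and dissatisfaction (“Worse”) coefficients. The numbers below are made up for illustration, not our actual survey data.

```python
from collections import Counter

def analyze(categories):
    """Discrete analysis: modal category across respondents.
    Continuous analysis: Better/Worse coefficients, where Better
    estimates how much the feature's presence raises satisfaction
    and Worse how much its absence lowers it."""
    counts = Counter(categories)
    a, p, m, i = (counts.get(k, 0) for k in ("A", "P", "M", "I"))
    total = a + p + m + i
    mode = counts.most_common(1)[0][0]
    better = (a + p) / total   # in [0, 1]: closer to 1, more delight when present
    worse = -(p + m) / total   # in [-1, 0]: closer to -1, more frustration when absent
    return mode, round(better, 2), round(worse, 2)

# Hypothetical feature with 120 respondents, mostly Attractive
responses = ["A"] * 50 + ["P"] * 30 + ["M"] * 10 + ["I"] * 30
print(analyze(responses))  # → ('A', 0.67, -0.33)
```

A feature like this one would be read as a delighter: its presence drives satisfaction up far more than its absence drives satisfaction down.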

Telling the same story by merging the two methods

With qualitative insights from the interviews, we were able to explain the reasons behind the survey result and develop a deeper understanding of user needs and how to make the feature prioritization decisions.

With the quantitative survey, we were able to reach a large sample size in a very short time; and with the survey result, we were able to have quantifiable feedback to back up the interview insights and prioritization decisions. It is powerful for strengthening the confidence in the outcomes.

By combining the two methodologies and showing both the “human thoughts” and the “numbers” from the research, the story of what the features mean to users and how we should prioritize them resonated more easily and deeply with the product team.

Next time you help your product team with research to prioritize features and determine what to build next, consider a mixed-methods approach: collect both qualitative and quantitative data and see if it helps your team fill in the gaps and reveal new opportunities.

Special Thanks to Joan Haggarty, Joshua Fan, Ahyana Riley, Hossein Raspberry for their valuable feedback and support in the editing of this article.

Sutong Liu is a user experience researcher at IBM. The above article is personal and does not necessarily represent IBM’s positions, strategies or opinions.
