Using a Kano/Qualitative Hybrid to Make Better Use of Customer Insights (part 2 of a series on the Kano Method)

Flipping a coin after (disappointing) testing? Add in a qualitative component and let users decide

(Previously posted in the projekt202 blog.)

In an earlier post in this series, I talked about how customer studies using the Kano Model provide a structured approach to gathering user feedback on potential features, and introduced the model’s five feature categories.

The 5 Kano Categories of Features
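
As a quick refresher on how features land in these categories: for each feature, a Kano survey asks a functional question (“How would you feel if the product had this?”) and a dysfunctional question (“How would you feel if it didn’t?”), and the pair of answers is looked up in an evaluation table. The sketch below uses the commonly published answer scale, evaluation table, and a simple majority-vote tally across participants; it is an illustration, not the exact instrument or scoring from the engagement described next.

```python
from collections import Counter

# A minimal sketch of Kano classification, assuming the standard
# two-question (functional/dysfunctional) survey format and the
# commonly published evaluation table, not necessarily the exact
# instrument used in the study discussed in this post.

# Rows: answer to the functional question ("feature is present").
# Columns: answer to the dysfunctional question ("feature is absent").
# A = Attractive, O = One-Dimensional, M = Must-Have,
# I = Indifferent, R = Reverse, Q = Questionable (contradictory answer).
EVALUATION_TABLE = {
    "like":      {"like": "Q", "expect": "A", "neutral": "A", "live_with": "A", "dislike": "O"},
    "expect":    {"like": "R", "expect": "I", "neutral": "I", "live_with": "I", "dislike": "M"},
    "neutral":   {"like": "R", "expect": "I", "neutral": "I", "live_with": "I", "dislike": "M"},
    "live_with": {"like": "R", "expect": "I", "neutral": "I", "live_with": "I", "dislike": "M"},
    "dislike":   {"like": "R", "expect": "R", "neutral": "R", "live_with": "R", "dislike": "Q"},
}

def classify_response(functional: str, dysfunctional: str) -> str:
    """Map one participant's paired answers for one feature to a Kano category."""
    return EVALUATION_TABLE[functional][dysfunctional]

def classify_feature(responses: list[tuple[str, str]]) -> str:
    """Assign a feature the category chosen most often across participants."""
    counts = Counter(classify_response(f, d) for f, d in responses)
    return counts.most_common(1)[0][0]

# Example: three participants' answers for a single feature.
print(classify_feature([("like", "neutral"), ("like", "live_with"), ("neutral", "dislike")]))
# -> "A" (Attractive)
```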

In today’s post I’m using a case study from an engagement at projekt202 that dealt with features for potential inclusion in a mobile healthcare app. My team ran into an interesting problem when we saw the initial results.

After using the Kano method to test 21 features picked by stakeholders as interesting possibilities, my team found that three were negative features: one Reverse (always best to exclude these) and two Indifferent, which we also excluded as they had no other value to the business. Among the remaining features there was just a single Must-Have. No features ranked as One-Dimensional (features where the better it is executed, the more the user is satisfied). The other 17 features all ranked as Attractive.

Here’s the problem. Attractive features are the “frosting features,” the things that don’t disappoint when excluded but create the potential for delight when present. Finding 17 of these “maybe include” features was not helpful.

It turned out that the glut of Attractive features in this project was the product of working in a space that is currently so murky, even borderline hostile, in terms of access to information that healthcare consumers just plain expect to be left in the dark and underserved.

The features that offered opportunities to learn about common healthcare practices and provided transparency struck participants as almost astonishing concepts. They were delightful, but according to the Kano methodology they also wouldn’t be missed. Excluding them all, however, would leave a thin offering. We had to prioritize.

Deciding which of those light-shedding Attractive features to include should just be a coin toss, right?

Burning money with bad decisions.

That’s a poor strategy for allocating design and development resources. Additionally, as we discussed in part 1 of this series, the reason you run a Kano study in the first place is to stop using politics (or other poor decision-making methods) to drive feature decisions. Avoid a scenario where politics and coin flipping come into play by adding a qualitative discussion component to your Kano (or any!) user test.

In our one-on-one, moderated Kano sessions we organized the features into domains based on what they primarily addressed: Doctor Recommendations, Costs and Medical Bills, Advice and Education, and The Mobile Experience. Before getting into the features in a domain, we situated participants in their current reality with a short discussion built around questions like: How do you accomplish this now? What would you do today if…?

This gave us valuable data to return to when we needed to make actionable recommendations on Attractive features. Using it, we identified several features ranked as Attractive that were best excluded, for now, based on factors such as how well participants’ current strategies already served them. For instance, if participants had an existing method that accomplished what an Attractive feature would, and they liked that method and felt in control of it, we recommended not allocating resources to the feature yet: there wasn’t enough need behind it to motivate users to change their current behavior. These were solutions without problems.

Other Attractive features brought clear and immediate value or offered an exceptionally delightful experience. As mentioned, the discussion portion of the sessions revealed how beaten down participants were by, and how accepting they were of, the lack of cost transparency in health care. Features that offered help understanding pricing logically should have ranked as Must-Haves (features people expect), but didn’t, because people don’t expect anything better yet. However, any feature bringing price transparency would disrupt the current system, so we recommended that the Attractive features addressing costs be included in the MVP launch.
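
To make the hybrid step concrete, here is an illustrative (and entirely hypothetical) sketch of how the qualitative signal could be folded into the Kano results: each Attractive feature carries a rough rating of how satisfied participants seemed with their current workaround, distilled from the discussion notes, and that rating drives an include-now versus defer recommendation. The field names, rating scale, and cutoff are invented for this sketch; in the actual project these judgement calls were made by the team, not by a script.

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    kano_category: str            # e.g. "A" for Attractive
    workaround_satisfaction: int  # 0-5, distilled from discussion notes (hypothetical scale)

def recommend(features: list[Feature], cutoff: int = 4) -> dict[str, list[str]]:
    """Split Attractive features into rough 'include now' vs. 'defer' buckets."""
    include, defer = [], []
    for f in features:
        if f.kano_category != "A":
            continue  # only the Attractive features need this tie-breaking
        # A well-liked existing workaround means little motivation to switch:
        # a solution without a problem, so defer it for now.
        (defer if f.workaround_satisfaction >= cutoff else include).append(f.name)
    return {"include": include, "defer": defer}

# Example with invented features:
print(recommend([
    Feature("Estimate my out-of-pocket cost", "A", 1),
    Feature("Export records as PDF", "A", 5),
]))
# -> {'include': ['Estimate my out-of-pocket cost'], 'defer': ['Export records as PDF']}
```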

Using qualitative data helped us better understand some of the more puzzling Kano results (transparency isn’t a basic requirement?) and allowed us to make recommendations on which features would combine into a first launch with a meaningful impact.

Participants can have a hard time articulating what they really do and how they would really feel about something in their actual environment, so this hybrid is not a replacement for high-quality, in-context design research. A Kano/Qualitative hybrid approach does, however, offer a more informed way of dealing with potential features than a simple survey, a politically charged corporate argument, or a coin toss.

Have you ever tried using a Kano method test? In what other ways have you navigated tricky political waters using data from customers?


Originally published at projekt202.com.

***

Kelly Moran utilizes an innate curiosity and unceasing desire to ask “why” to understand how people use products and services to accomplish their goals — whether those goals be work or play. Find her other writing at: https://medium.com/@Kel_Moran