Product Discovery Methods for a Product Delivery World

Part 1 of 2: Collecting product feedback without sacrificing product discovery.

Travis Lowdermilk
UXR @ Microsoft
6 min read · Apr 27, 2023

--

This is part 1 of a 2-part blog series. This post discusses the foundation for a novel approach to concept testing, while part 2 walks readers through the methodology step-by-step.

Lately I’ve been stuck in a paradox.

With the explosion of interest in our partnership with OpenAI and the potential for language models to empower our customers to achieve more, our product teams have been hard at work creating hundreds of potential concepts that need customer feedback.

In the rough-and-tumble world of applied research, we have to juggle the responsibility of collecting deep and meaningful data from our customers while also keeping up with the breakneck speed of our product teams’ ingenuity and creativity. It’s a delicate balancing act and, at times, it feels like an unsolvable paradox.

UX researchers do not have the luxury of isolating themselves for deep work, but their teams still rely on them to provide deep insights, especially in areas where supportive academic research isn’t available. In short, UX practitioners are the MacGyvers of the research world. We cobble together what we can and, when we get it right, we’re able to help our product teams overcome seemingly insurmountable obstacles.

Thankfully, I work with an incredible team of UX researchers in the Developer Division. Over the past few months, we’ve been experimenting with an approach to concept value testing that collects foundational insights from the customer alongside their critical feedback on our product teams’ concepts.

This methodology is an expansion of our Lean concept value test approach that we have written about in the Customer-Driven Playbook and teach in our Customer-Driven Engineering workshops.

Lean concept value testing is great for empowering our product teams to have the right conversations about potential concepts, but it can be a bit light when it comes to digging deeper into discovery learning. Our goal was to improve our approach by adding more questions to our interview protocol. This new version of our concept value test can be completed in a 60-to-90-minute interview.

I’ll provide a complete rundown of the protocol in part 2, but before I do, let’s discuss 3 points you will need to understand to be successful with this method.

3 Important Points About This Method

1. Identify the job-to-be-done

Before talking with customers, you’ll want to help your product team identify the job that their concept seeks to address. Getting direct feedback on their conceptual ideas is important, but so is learning how customers approach the job in the absence of the concept. We must understand the “push and pull” forces at work to better predict how customers will behave when they encounter the concept.

Alan Klement outlines these “forces of progress” and has written about two major groups of forces:

Demand generation forces: These are forces that either push (e.g., situational changes, regulatory forces, etc.) or pull (e.g., attraction, preference, excitement, etc.) customers to switch to using your products.

Demand reduction forces: These are forces that cause your customers to resist your products. Inertia (e.g., existing investments, established processes, etc.) or Anxiety (e.g., concerns about learning something new) can cause customers to resist change or adoption of your offerings.

[Diagram: two opposing arrows, “demand generation” along the top pointing left to right, and “demand reduction” along the bottom pointing right to left. Inspired by the Progress Making Forces by Bob Moesta of the Re-Wired Group.]

These forces are hard to identify in customers if all we’re asking them for is feedback on our concepts.

Asking questions about how customers approach the job before showing your concept can help you illuminate these forces. It also gives your customers a moment to reflect on how they approach the work before you show them a potentially new way to do the job.
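
To make those forces easier to spot during synthesis, one lightweight option is to tag your interview notes against the four forces as you review them. The sketch below is purely illustrative and not part of the Customer-Driven Playbook; the Note structure, force labels, and participant quotes are all hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

# Klement's two groups of forces, split into the four parts discussed above.
class Force(Enum):
    PUSH = "push (situational change)"                  # demand generation
    PULL = "pull (attraction to something new)"         # demand generation
    INERTIA = "inertia (existing investments/processes)"  # demand reduction
    ANXIETY = "anxiety (concern about learning something new)"  # demand reduction

@dataclass
class Note:
    participant: str
    quote: str
    force: Force

# Hypothetical notes captured while asking how customers approach the job today,
# before any concept is shown.
notes = [
    Note("P1", "A new compliance rule just broke our old workflow.", Force.PUSH),
    Note("P1", "We've built years of scripts around the current tool.", Force.INERTIA),
    Note("P2", "I'd love something that finds the root cause for me.", Force.PULL),
    Note("P2", "I'm worried I'd have to relearn my debugging habits.", Force.ANXIETY),
]

# Group the evidence by force to see where demand generation
# outweighs demand reduction (and vice versa).
by_force = {}
for note in notes:
    by_force.setdefault(note.force, []).append(note)

for force, items in by_force.items():
    print(f"{force.name}: {len(items)} note(s)")
```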

2. Help customers articulate their desired outcome(s)

It’s not uncommon for product teams to have different opinions on how their concept addresses customers’ unmet needs. To ensure that their assumptions are tested, we should be helping our customers articulate what it looks like when the job is done well. Our customers’ desired outcomes help us go beyond understanding the job the customer wants done; they elevate our understanding to include what the customer aspires to achieve.

I’ve also observed that customers are more thoughtful in their critique of concepts whenever they’ve had a chance to explore their desired outcomes for the job first.

That being said, it can be hard to get customers to articulate their desired outcomes. We need to help them identify these outcomes so we can capture them accurately.

In his work on Outcome-Driven Innovation, Tony Ulwick suggests that a desired outcome can be stated using a “need statement”:

Direction + Metric + Object of Control + Contextual Clarifier

Direction: Minimize, reduce, increase, enhance, etc.
Metric: Time, error, money, stress, reliability, etc.
Object of Control: Outcome the customer would like to achieve.
Contextual Clarifier: The unique situation or context that informs the customer’s desired outcome.
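
If it helps to see the formula concretely, here is a minimal sketch (my own illustration, not Ulwick’s tooling) that treats a need statement as a small data structure. The field names and the debugging example, paraphrased from the interview example later in this post, are hypothetical placeholders.

```python
from dataclasses import dataclass

@dataclass
class NeedStatement:
    direction: str             # e.g., "minimize", "increase"
    metric: str                # e.g., "the time", "the number of errors"
    object_of_control: str     # the outcome the customer would like to achieve
    contextual_clarifier: str  # the situation that frames the desired outcome

    def render(self) -> str:
        # Direction + Metric + Object of Control + Contextual Clarifier
        return (f"{self.direction.capitalize()} {self.metric} "
                f"{self.object_of_control} {self.contextual_clarifier}")

# A hypothetical desired outcome for a developer debugging an exception.
outcome = NeedStatement(
    direction="minimize",
    metric="the time",
    object_of_control="it takes to identify the root cause of an exception",
    contextual_clarifier="when debugging an unhandled exception",
)
print(outcome.render())
# Minimize the time it takes to identify the root cause of an exception when debugging an unhandled exception
```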

Spoiler alert: Customers aren’t going to articulate their desired outcomes cleanly enough for you to simply fill out this formula. You must use your expert interviewing skills to unearth them. Sometimes, I’ll spend as much as 15–25 minutes establishing the customer’s desired outcomes, before they have seen a single concept. Typically, I’ll repeat back to the customer what I heard, sometimes using Ulwick’s framing.

Example: “So, Terry, what I’m hearing you explain is that when you’re debugging an unhandled exception, your goal is to minimize the chance that the bug will occur in the future, and to do that you’re expecting that the tools will maximize your ability to identify the root cause of the issue. Is that correct?”

Once I get agreement from the customer, I ask them to think about those desired outcomes as base criteria when they evaluate the concept.

3. Consider using “soft quant” questioning

Finally, many of the questions in this method are paired with a 5-point Likert scale.

These scores should not be used as a shortcut to a statistical analysis of your concept’s performance.

Your product team may be tempted to calculate your concept’s “scores” and ignore the important qualitative feedback the customer is giving you.

Now, I get it. From a pure research perspective, some may feel like this is confusing a quantitative technique for a qualitative gain. In a sense, it does feel like we’re breaking a design pattern here.

Every question in our template has also been written to work as an open-ended question, so the Likert scales can be removed from any of them. Feel free to do that if this soft quant technique is a bridge too far.

However, I prefer using the scales for 2 primary reasons.

  1. They help me ground the customer’s response: I find that customers often have a hard time communicating the strength of their response. Likert scales help customers quantify their response, but more importantly they give me an opportunity to dig deeper (e.g., “tell me more about why it’s a 2 for you.”). The score is a smaller data point that helps illustrate the temperature of the customer’s response. So, be sure to ask lots of follow-up questions to ensure you’ve captured exactly why the customer has given the score they have.
  2. They help in synthesizing qualitative data: I use the scores as a starting point in my analysis of my qualitative data. For example, I may start by looking at the transcripts for customers who indicated a lower score and compare their responses to the responses from customers whose scores were higher (see the sketch below for one way to organize that first pass).
Many of the questions in this method use a Likert scale to ground the customer’s response. The use of the scales is entirely optional.
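
As a rough illustration of that starting point, here is a minimal sketch of how you might split transcripts by score before reading them. The participant records, file paths, and median cutoff are hypothetical choices of mine; the actual comparison remains a qualitative reading of the transcripts.

```python
from statistics import median

# Hypothetical concept-feedback records: one Likert score (1-5) per participant,
# plus a pointer to where that participant's transcript lives.
responses = [
    {"participant": "P1", "score": 2, "transcript": "transcripts/p1.txt"},
    {"participant": "P2", "score": 5, "transcript": "transcripts/p2.txt"},
    {"participant": "P3", "score": 4, "transcript": "transcripts/p3.txt"},
    {"participant": "P4", "score": 1, "transcript": "transcripts/p4.txt"},
]

# Use the score only to decide which transcripts to read side by side.
cutoff = median(r["score"] for r in responses)
lower = [r for r in responses if r["score"] < cutoff]
higher = [r for r in responses if r["score"] >= cutoff]

print("Read first (lower scores):", [r["transcript"] for r in lower])
print("Compare against (higher scores):", [r["transcript"] for r in higher])
```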

Now that we understand some of the key points of this method, I’ll draw your attention to part 2 of this blog series, where we explore the protocol step-by-step.
