Inside Microsoft, our CEO Satya Nadella has been a strong advocate for empathy: empathy for other people generally, empathy for fellow employees, and particularly empathy for customers. I suspect every product manager would say both that they're customer-focused (or "customer-obsessed") and that they could do better. On the Developer Tools team, we asked ourselves how we could create systems for listening to customers that would enable us both to hear more points of view and to listen better.

We faced a wide variety of challenges when we started to really dig into how to improve. We have a lot of developer-customers, and they have very diverse wants and needs: developers in large, highly regulated industries maintaining Windows Forms applications have pretty different needs from developers in startups using TypeScript in Visual Studio Code to build Node.js microservices. Some customers are deep into their DevOps journey (a good book on this is Accelerate: The Science of Lean Software and DevOps) and others are still heavily waterfall; most are in between. Some customers are students, and some code only a couple of hours a week.

We needed some way to reason about the kinds of customers we have and the kinds of customers we are trying to attract. We needed data collection and viewing systems that enabled us to reason about the telemetry we see from customers who opt into our customer experience improvement program. We needed a way to collect customer feedback at scale. And we needed a way to talk with customers that let us listen without biasing their answers.

Normally this kind of research is something done by a trained research team — they set up surveys, for example, and do focus groups and customer interviews. We have a trained research team, but we were trying to figure out how we could scale these techniques to the entire product management organization.

Thus was born what we’ve started to call the customer-driven engineering framework.

There's more about this framework in "The Customer-Driven Playbook," by Travis Lowdermilk and Jessica Rich. The framework reflects a shift in both the mindset and the day-to-day language that now permeates our organization. For example, we recognize that our ideas are just assumptions, and therefore we talk less about our ideas than about our hypotheses. Ideas have weight, and people can fall in love with them; hypotheses are just things you can test. You can have hypotheses about whether a particular kind of customer exists, about what their key problems might be, and about the kinds of solutions they might want. Changing the language, and encouraging the creation and testing of many hypotheses, really improved our ability to move from opinion-based product planning to collaboratively identifying opportunities.

Another change was in how we do customer interviews. Most product managers, when asked to learn what customers want, might run a survey or ask a few customers; we did the same thing (and we still do). We wanted, however, to get better at how we talked with customers. To that end, we've focused a lot on having the research team help product managers create their discussion guides. A discussion guide helps ensure that the product manager guides the conversation enough without biasing the answers. It's a tricky balance: most customers we speak with want to help, and they want to answer our questions. What the interviewer needs to do is guide by asking a lot of open-ended questions, lots of "tell me more," "why is that," and "how do you solve that problem today" types of questions. For some, that's pretty unnatural, since the answers may not help drive toward the solution the interviewer wants to see "win." But, of course, research isn't about winning; it's about learning.

In her book Lean Customer Development, Cindy Alvarez says that, in her experience, about 20 minutes is enough for a good customer interview from which you can learn a lot, and that after about 20 interviews you'll probably reach the point where you're hearing the same things consistently. Our experience with these rules of thumb has varied: sometimes we need up to three hours to deeply understand a customer's situation, and sometimes we've had to do 50+ interviews to refine our hypotheses to the point where we know enough to really start looking at an opportunity. But the key point, 20 minutes and 20 interviews, coupled with the ability to create a good, consistent set of hypotheses and a good discussion guide, can lower the bar to live conversations with humans (which I think of as different from surveys and focus groups).

There’s also a lot we’ve learned about how to source customers, and maybe I’ll write more about that later.