How to avoid “hypothesis churn” when searching for problems worth solving

Don’t let analysis paralysis keep you from finding customer problems.

Most startup advisors will tell you to Love The Problem. It's important advice: without a good understanding of the problem, you'll struggle to eventually reach product/market fit.

The first step of the Lean Process methodology is finding Problem/Solution fit; breaking this down further, the first step is actually ‘Problem Discovery’.

From what I can tell, the typical process goes like this:

Generic problem statement → Hypotheses → Experiments/Interviews → Insights

Focus on questions first

Hypotheses are an important part of Problem Discovery. However, based on my recent experience, I don't think you want to start with them.

When trying to validate a problem, think of questions first. Don’t try to build falsifiable hypotheses just yet. You want questions that will help you have a conversation.

Hypotheses are like a bullseye: they help you validate or invalidate a very specific statement. While validated hypotheses are incredibly important for establishing 'why' your startup exists, they don't help you understand the 'edges' of the problem, and they let you skip the messy work of building personas. You want your problem interviews to take calculated tangents; that is, go off course sometimes so you can build a solid understanding of all the nuances around your problem.

In my recent experience, establishing hypotheses up-front caused us to focus too narrowly in our initial interviews. We had so many things we wanted to learn. We quickly discovered that not all of our hypotheses were relevant to the folks we interviewed, but those interviews still helped us understand organizational dynamics, who influences whom, and peripheral challenges. This is especially important for B2B product managers, where complex sales are common.

Another benefit of taking detours: it makes you a more well-rounded problem expert. It also helps you develop intuition. You'll rely on this intuition later, when data is unclear or hard to get, or when you need to make quick decisions.

Use insights to build hypotheses

After deciding that hypotheses were too heavyweight for us, we switched gears and focused on asking simple questions about the problem. We used 'TED' questions (tell me, explain to me, describe for me) to get people to talk. With this information, we generated insights and met a few times each week to share what we had learned. Here's what we did:

  1. Focused on longer-form interviews. We didn't like spending only five minutes with our target users.
  2. Captured all interview questions in a Word document so we could pattern-match across different insights.
  3. Recorded the name and contact information of each person we interviewed. You will need to follow up with them later in the problem and solution validation stages. This also gives you an early indication of whether their problem is urgent and big enough, and whether they are seeking new solutions.

Tip: Try to use the ‘buddy system’. Have your co-founder join you in problem interviews so someone is there to take notes. (The ‘divide-and-conquer’ strategy for problem validation can create gaps later.)

Once we uncovered enough insights, we started enumerating key assumptions and turning them into falsifiable hypotheses about the problem area, informed by our problem interviews. Those interviews also surfaced additional problems, and additional personas, that we wanted to investigate further. As a result, insights shaped both our hypotheses and our target persona.

In a nutshell, here’s the process we followed:

Round 1: Problem statement → Interviews (30x) → Insights

Round 2: Hypotheses → Focused Experiments (10x) → Early Adopter Pains

  • We interviewed about 30 people we initially thought had the problem.
  • We learned that about 10 of those 30 were ideal early adopters, and we then focused our hypothesis validation on that group.

We also found that some of our most valuable interviews were *not* with people who had the most urgent problem. In a complex environment, like a large company, some people with the pain may *not* be responsible for the solution.

Our "focused experiments" were hypotheses with specific, measurable goals. For example, we would say "70% of our target early adopters will do X" (or something like that). We would then ask an 'off-the-wall' question designed to elicit a strong positive or negative reaction from our target early adopter. The question itself is obviously specific to your problem space, but its purpose is important: you don't want a lot of gray area when validating hypotheses.

For us, the outcome of this was a set of clearly defined pain points for a specific early adopter segment.

Will this work for everyone?

I'm not sure. I think the more complex your problem space, the more up-front time you should spend on questions rather than forming hypotheses.

Some counter-arguments to my approach:

  1. You can always turn your questions into hypotheses.

While true, you may discover in your interviews that some of your hypotheses are irrelevant. For example, did the interviewees in your first round of discussions match the demographics needed to make your hypotheses relevant? If you get a bunch of neutral responses to your initial hypotheses, you have either 1) targeted the wrong early adopter (very possible in complex B2B sales), or 2) not set up your experiment in a way that generates a strong signal.

  2. You can always add hypotheses later.

This is also true, but it leads back to my original 'churn' argument. You want to minimize how often you go back to your interviewees; your time and theirs are valuable, and you'll need their help later when you begin MVP experimentation.

What’s next?

After spending time in problem discovery, you'll move on to solution validation. This phase will feel a lot more like problem discovery AND solution validation combined; it's OK to go back and uncover additional problems and insights if you think that's important. But ultimately you want a prioritized list of problems to go after so you don't become too distracted.

Also, when validating your solution, you’ll begin to rely on MVPs. MVPs are not really products, just ways to conduct quick experiments so you can see if you’re on the right track. Experiments and hypotheses are like peanut butter and jelly. I strongly believe that hypotheses, in general, are more effective in the Solution Validation phase.

If you want to read more about Solution Interviews, check out my blog post on that topic.