The Trouble with Lean Startup: User Research is Hard
When I first heard about Lean Startup I was tempted to dismiss it. The tech industry gets excited about movements and philosophies, and when it does I tend to run screaming.
Parts of Lean Startup made sense to me, and indeed echoed what UX practitioners have been saying for years. Gather data. Make sure you’re building something your target customers will actually use. Test early and often. So I did something I rarely do: I read the book, Eric Ries’ The Lean Startup.
By and large I like Lean Startup, especially once you recognize how it’s been misunderstood by the industry at large:
- A minimum viable product isn’t a minimal version of your product. Every product idea is based on assumptions. Incorrect assumptions can lead a startup down an unsuccessful path, so a big part of Lean Startup involves identifying assumptions and turning them into testable hypotheses. Your minimum viable product is your tool: the simplest, easiest test you can construct for a particular hypothesis. It may or may not be your actual product, and isn’t intended to be thrown at users wholesale. In particular this means Lean Startup does not advocate releasing half-finished products to the world, nor should it be confused with a “build something fast and see if it sticks” philosophy.
- Lean Startup is not a substitute for product vision. Many have interpreted Lean Startup to mean you needn’t have a big, long-term vision. But the best products do have that vision to guide and constrain their efforts. What you learn along the way will change the path you take, and might even lead you to scrap the vision entirely; but that doesn’t mean you don’t need one.
- Lean Startup doesn’t eliminate the risk of trusting your gut. It reduces that risk and provides an efficient method for vetting your gut, a.k.a. verifying your hypotheses. But while you can reduce the unpredictable role creativity and intuition play, you can’t completely replace them with a system.
So you’ve got your vision, you’ve created a hypothesis, and you’re ready to craft an MVP. What should it be? There are examples throughout the book: landing pages and sign-up forms to gauge user interest, “concierge” products that use people to simulate algorithms, etc. Put something together, test it, draw conclusions. Except it’s not that simple.
Suppose that in 2001, Apple had asked people whether they’d use an iPod. iTunes was brand new. The iTunes Store didn’t exist yet: digital music was all about tedious CD-ripping or equally tedious searching and downloading from Napster. MP3 players were inexpensive but limited-capacity and hard to use. Understanding the appeal of a $399 iPod required an intuitive grasp of a world that didn’t exist yet. I don’t know how our fictional focus group would’ve reacted, but I can easily see them laughing at the price and asking for longer battery life in their Nomads. Or better skip protection in their Discmen.
You can’t just ask users what they think of your product. Imagining your world with that product in it is hard, even for experienced product designers. Is there a need? Will there be in six months? Does this particular product address the need in a way that recognizes how users will actually approach the task in question? Is there a latent human tendency for which a product doesn’t yet exist? Are there details of the final design that will make the difference between a decent product and an addictive one?
Different hypotheses require different MVPs, each carefully constructed to avoid pitfalls. Some examples:
- A survey can give you a window into the user’s mind, but everything is filtered through layers of bias. Is the user second-guessing the question? Does he like the idea of himself using the product? Is he answering honestly but without a detailed understanding of his own needs? A little cleverness can yield greater insight by coming at things sideways, but such a survey takes care and skill to compose, and can still only illuminate certain types of hypotheses.
- A landing page can answer questions about positioning. Is the product you’ve described something the viewer thinks she wants? Does it fill a need she can identify? But it won’t tell you much about engagement, retention, or usability. And it can be risky in the case of products that create entirely new categories.
- Usability testing — officially in a lab, or unofficially in a cafe — will help you find aspects of your product that are confusing, but won’t tell you whether people will adopt, use, or like it. One researcher I worked with would ask participants to rate a feature on a scale of one to seven — and then throw out the response. The self-reported number wasn’t reliable data; he just wanted to hear them defend their answer so he could understand their perception of the product. As with all these methods, success is highly dependent on how the test is constructed.
I could go on. Indeed, whole books have been written on this and whole careers built on doing it well. My point is not to scare you away from testing — or to reject Lean Startup — but to point out an omission. You can test, and if you can’t afford an expert you can do it yourself. But think carefully, take some time to construct your test, and try to answer questions like these:
- Given your hypothesis, what method might help you get the data you need? What might cut through as much bias as possible?
- Once you’ve chosen a method, what are its limitations? What are all the ways the information you get back might be flawed?
- What “insights” will you need to throw out because they can’t be trusted?
A great test will save you time and money. A poorly conceived or poorly constructed test will not only fail to give you the answers you need, but might even send you spinning off in the wrong direction entirely.
Originally published at operationproject.com on November 2, 2012.