Why do (seemingly) great product ideas fail?
Building a product is a lot like playing with blocks as kids. Back then, we barely had a grasp on the English language, much less physics and engineering. Faced with uncertainty, we started to experiment. This exploration was fun and, because there were no consequences for being wrong, we tried a lot of building designs that didn’t work.
From our trials, we quickly discovered gravity and thus learned a couple of basic tenets. First, the base should be wider than the top. And second, missing pieces make the structure unstable. By sticking to these principles, we could build some pretty great structures. But if we broke those laws, chances were that it’d all come crashing down.
Much like building with blocks, product development has a handful of core principles that we violate at our own peril. The key difference, of course, is the tremendous cost of being wrong. Running 200+ experiments on 30+ product and feature ideas per month, I’ve helped product teams experiment on and validate great products before building them. By adhering to the following principles, you can quickly recognize when things are going astray and, more importantly, get them back on track.
Even though we all regularly state the importance of substantiating our product assumptions, it can be astonishingly easy to revert to opinion-driven decision-making. If the words ‘obvious’ and ‘intuitive’ come up a lot when prioritizing your roadmap, then you might be in this camp.
Building a product without validating your assumptions is a massive risk. If it turns out that you were wrong about any of them, it’s like an entire portion of your product was built on thin air.
Renowned entrepreneur and corporate innovation expert Steve Blank made a vivid comparison in the Harvard Business Review:
> Business plans rarely survive first contact with customers. As the boxer Mike Tyson once said about his opponents’ prefight strategies: “Everybody has a plan until they get punched in the mouth.”
To avoid this risk, start with the most basic assumptions that you can think of about your users, market, and product. Test these assumptions one by one. As you build up a set of data and insights, you’ll hone your decision-making and dramatically increase your product’s chances of success.
Strong products are designed iteratively, from the ground up.
Understandably, most organizations want to move faster. Customer preferences are changing rapidly, and companies like Amazon champion what their CEO, Jeff Bezos, calls ‘high-velocity decision-making.’
In the rush to move faster, it can be appealing to cut corners and only do user research “when there’s time” (hint: it never seems like there’s enough time). Even when companies do make an effort to pencil in research, it’s usually limited to a small subset of the assumptions that actually need testing.
Prioritization is important and should absolutely be a part of your testing process, but not at the expense of thoroughness. Too often, the assumptions that don’t make the cut are the ones where “everyone already knows the answer.” Even when that’s true, and it rarely is, yesterday’s answer may be different from today’s. Ironically, the ‘obvious’ assumptions that everyone takes for granted are usually the ones that, if wrong, spell disaster for the product. Being thorough fills the cracks in your knowledge and sheds light on the places where something unexpected could derail your plans.
Don’t overlook testing something just because you think you know the answer.
Trying to test too much at once may be the most common mistake in user research. When you clump a variety of assumptions together, you introduce a range of problems:
- Experiments are more likely to be slow and expensive: more moving pieces means more approvals, which means more meetings, which means more time. Additionally, product decisions are almost always time sensitive; they can’t afford to wait for the long lead time of overstuffed research. Over-planning research is just as crippling as over-planning product development.
- Experiments are more likely to be irrelevant: virtually any test you run will reveal a few places where you have a gap in knowledge or want to ask a question slightly differently. Short test cycles give you a way to discover and fill those gaps quickly. For example, it is much better to learn that your users don’t understand an acronym you thought was commonplace from a quick test than from a carefully planned three-month research behemoth.
- Experiments are more likely to be misinterpreted: research is relatively straightforward when you test one variable at a time. Once you start adding confounding variables, demographic profiles, and complex branching logic, you very quickly need someone with a PhD to crunch the data. User testing shouldn’t need a PhD.
Introducing any single one of the above issues would be cause for concern. Having all of them at the same time practically guarantees poor results and faulty decisions.
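To illustrate how simple the analysis stays when you change only one variable at a time, here is a minimal sketch of a two-proportion z-test in Python. The conversion counts, sample sizes, and function name are all hypothetical; a real experiment would be analyzed with a proper stats library.

```python
from math import sqrt

def single_variable_significant(conv_a, n_a, conv_b, n_b, z_crit=1.96):
    """Two-proportion z-test for a single changed variable.
    Returns True if the conversion-rate difference is significant
    at roughly the 95% level."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return abs((p_b - p_a) / se) > z_crit

# Hypothetical: only the button copy changed between variants.
print(single_variable_significant(120, 1000, 165, 1000))  # True
```

Because only one thing changed, a True result points directly at the button copy; no branching logic or demographic slicing is needed to interpret it.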
To make sure that your tests are well sized, use this general rule of thumb: the assumptions you test should be big enough to be valuable, but small enough that they can be answered with a ‘true’ or ‘false’. If all you learn about your assumption is that it is true or false, would that be enough information to make meaningful progress on your product? If the answer is no, it’s a pretty good sign that you should split your test into smaller pieces that can be validated independently.
For example, if you learn the assumption ‘Users like my new website redesign more than the old one’ is false, it doesn’t give you a tangible step forward. However, reframing that assumption into smaller pieces such as ‘Users are able to find the login button more quickly on my new website design’ leads to clear next steps.
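As a sketch of what a ‘true or false’ assumption looks like in practice, the login-button example above can be reduced to a single check. The task times, the 0.5-second threshold, and the function name below are all hypothetical; a real study would also include a significance test.

```python
from statistics import mean

def assumption_holds(old_times, new_times, min_effect=0.5):
    """True if users found the login button faster on the new design
    by at least `min_effect` seconds on average. This only checks
    practical effect size, not statistical significance."""
    return mean(old_times) - mean(new_times) >= min_effect

# Hypothetical task-completion times (seconds) from a quick usability test.
old_design = [8.1, 9.4, 7.7, 10.2, 8.8]
new_design = [5.3, 6.1, 4.9, 5.8, 6.4]

print(assumption_holds(old_design, new_design))  # True
```

Either answer gives you a next step: True means the redesign can keep the new navigation, False means you iterate on button placement before testing anything else.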
Further, you need to challenge the culture of ‘perfection’ that stands in the way of iteration, a culture that is the norm for product managers within large organizations.
Ensure that each assumption you test is bite sized and immediately valuable.
Boiling the Ocean
Given the popularity of books like The Lean Startup, many companies have (rightfully) embraced learning as a key goal. Unfortunately, some product teams take this goal too far by trying to test everything under the sun. In doing so they lose sight of the ultimate goal: delivering user-facing value.
When you catch the testing bug, it’s pretty easy to come up with a big list of foundational assumptions, all of which seem to desperately need validating. As you test them, the little dopamine hit you get from learning something new reinforces the experimentation habit, and it feels like you’re doing good work.
But exclusively pursuing your curiosity instead of staying focused usually doesn’t end well (at least when aiming for tangible business outcomes). Six weeks later you come up for air and realize that, although you’ve learned a bunch, you haven’t actually made any substantial progress towards your goals.
The simplest way to avoid this trap is to start with an explicitly stated hypothesis, objective, or goal. How you define that goal is up to you but tools like Lean Hypotheses can be a helpful starting point. Once you have a direction in mind, list out the specific assumptions that must be true to hit your objective or validate your hypothesis. If a data point is interesting but doesn’t fall under the umbrella of what you need to know right now, then testing it can wait, simple as that.
Focus your testing on what you need to know — not what you want to know.
It’s natural for products to evolve and grow over time, but without due care this growth can cause more problems than it solves. Many companies have found themselves managing a once-pristine core product that has been overburdened with years of bolt-on features and expansions.
The resulting bloated product may be successful for a little while, but if core parts of a product rest on unvalidated assumptions, it’s only a matter of time until something goes wrong.
To avoid this problem, stop and consider the underlying assumptions baked into any new product line or feature request. If you’re shipping an iterative product change, then there’s a strong chance that you’ve already tested many of them. For the assumptions you haven’t tested, figure out which are the most critical and start working through them one by one. If a critical assumption for the new feature turns out to be false, you can adapt or remove the feature proactively, before a big investment of time. If your assumptions hold up under testing, then you’ve built a strong foundation of learning to support your next round of development.
Fight feature bloat and add stability by basing what you build on a strong set of validated assumptions.
What distinguishes average product teams from amazing ones isn’t the industry they work in or the product they work on. It’s all about execution. The best product teams are the ones who uncover market opportunities and navigate the fastest, most efficient route to capture them. The best way to shorten that path is by iteratively testing key assumptions as quickly as possible.
Once top-performing teams decide on a new direction to explore, they identify the underlying assumptions, prioritize them, and test them one by one. As tests reveal unexpected insights, they adjust their plans until they settle on the best approach to their objective.
Of course, the market is always changing, so during development and even after launch, they keep testing to make sure they’re still on the optimal path. They know all too well that if they don’t stay on top of what their users need, a competitor will.
When you get down to it, most of the problems we have when testing products are mistakes that our block-playing toddler selves would be more than happy to point out. Constructions (and products) built on wobbly foundations don’t usually work out too well, but by starting small, building iteratively, and keeping a strong foundation, we can build some pretty amazing things.
Actionable user insights before, during, and after product development ensure that great product ideas turn into great product successes.
Originally published at alphahq.com on May 24, 2017.