Move Slowly and Don’t Break Stuff: Learning from Product Design Failures
When Facebook named the official street address of its Menlo Park, California, headquarters 1 Hacker Way, it was a nod to the product development ethos that had driven such incredible, almost unimaginable growth at the social media giant.
As Facebook soared to new heights and became one of the largest tech companies in the world, the company’s mantra of “Move Fast and Break Things” became the rallying cry of product teams all over Silicon Valley and beyond. Once avoided at all costs, failure became an unlikely positive in Silicon Valley culture. Long, tedious development cycles were out; rapidly iterating on rudimentary prototypes and “failing fast” were in.
On the surface, this approach to product development makes sense. Circumventing corporate bureaucracy allows product teams to rapidly try new ideas and quickly identify features that should be given greater priority without getting bogged down.
However, the biggest drawback to this product development ethos is that far too many people — and companies — take the idea of failing fast too literally.
Destigmatizing failure is a positive, but when people become too complacent about failure, it can lead to carelessness, which leads to further failure, which ultimately defeats the goal of improving faster. The “hacker ethos” of moving fast and breaking things works only if product teams approach it thoughtfully and with purpose. It’s vital that product managers understand the pitfalls of failing fast, and there’s a compelling argument to be made that failing fast isn’t always the right approach.
Ready to fail more thoughtfully? Let’s take a look at why you should be.
One of the reasons the hacker way is such an appealing mind-set in product development is that, unfortunately, the odds are stacked against us from the start. The hard truth is that most startups are doomed to fail, and the picture is just as grim when you look at the failure rate of individual products.
In light of these sobering statistics, it’s comforting to think of failure as something to get out of the way before the real work begins; the Silicon Valley interpretation of the late artist and Disney animator Walt Stanchfield’s theory that, “We all have 10,000 bad drawings in us. The sooner we get them out the better.”
But this mind-set can create as many problems as it solves.
Too much failure can do significant damage to your product’s brand, your relationship with current and prospective users, and ultimately, the bottom line of entire companies.
Familiarity Breeds Contempt
One of the most insidious dangers of the hacker way is that too much failure can create complacency among your team. When product teams no longer fear failure, it can directly affect your team’s drive to create high-quality work.
This boils down to the concept of intrinsic and extrinsic reward. Ordinarily, success is rewarded positively, whereas failure is typically considered a negative. When there are few or no consequences of failure, there is little motivation for your team to produce quality products or to run accurate tests and experiments. Over time, this can create a culture of “spaghetti testing” — throwing ideas at the wall until something, anything, sticks.
Another risk of moving fast and breaking things is that it can create a lot of unnecessary work for everyone involved. Experimenting with lots of different ideas simultaneously can be exciting, but it also increases the likelihood of acting on overlapping or conflicting ideas or skewing data by failing to account for the tests of other teams or departments. Perhaps worst of all, you run a significantly higher risk of acting upon flawed or incomplete data, such as false positives that emerge from incomplete testing or half-baked assumptions.
Google’s culture of obsessive testing is a great example of this principle in action. To determine which shade of blue was best suited to displaying links on search-engine results pages, Google exhaustively tested 41(!) different shades of blue to see which performed best.
Testing so many shades of blue might be seen as thorough, but even at a 95% confidence level per test, running 41 independent comparisons gives roughly an 88% chance of at least one false positive (1 − 0.95⁴¹ ≈ 0.88). This goes to show that more data isn’t always better; by testing so many variables, it’s all too easy to implement ideas that seem positive but actually end up doing more harm than good.
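The arithmetic behind that 88% figure is worth making concrete. Assuming each of the 41 comparisons is an independent test run at the conventional 5% significance level, the probability of at least one false positive can be sketched like this:

```python
# Probability of at least one false positive across many independent
# tests, each run at a 5% significance level (95% confidence).
alpha = 0.05      # per-test false-positive rate
n_variants = 41   # number of shades tested

# Chance that *every* test correctly avoids a false positive,
# then the complement: the chance at least one test misfires.
p_all_correct = (1 - alpha) ** n_variants
p_at_least_one_false_positive = 1 - p_all_correct

print(f"{p_at_least_one_false_positive:.0%}")  # → 88%
```

The independence assumption is a simplification, but it shows why multiplying variants without adjusting significance thresholds (e.g. with a Bonferroni correction) erodes confidence in the results.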
Proving Hypotheses Correctly Takes Time
Another problem with the hacker approach to product development is that proving your ideas rigorously and with confidence takes time. There’s simply no getting around it.
Many companies target the “quick wins” that will allow them to achieve rapid growth, but good ideas take time to test properly. It takes both patience and persistence to discover the true potential of some of the best ideas. Being too quick to embrace failure can result in otherwise great ideas not being given the time they need to demonstrate their potential.
From a product perspective, giving features the time they need to be tested adequately is crucial. When testing new ideas, sample size is critical; the more users that try a new product or feature, the more accurate your data will be. That is because, as more observations accumulate, the random noise that influences your results averages out and measured values settle toward their true values — what statisticians call the law of large numbers. Early extreme results, good or bad, also tend to regress toward the mean. For product managers, that means that the longer a feature is tested, the more accurate your understanding of that feature (and how users respond to it) will be.
Conversely, the less time an idea has to be tested, the more likely the performance data will be skewed by those random variables. Maybe a feature is popular solely by virtue of its novelty, or because users haven’t gotten used to it yet and have yet to truly understand its value. Either way, declaring an idea a success or failure on the basis of short-term testing data can be incredibly dangerous in the medium to long term.
Take ecommerce giant Amazon, for example. Amazon has a reputation for its ruthlessly data-driven approach to online sales, but early on in Amazon’s growth, the company realized it had to overcome one of the biggest roadblocks to ecommerce conversions if it was to succeed as a company — shipping costs. Given that shipping costs are one of the most universally loathed aspects of online shopping, Amazon had to essentially reinvent the entire concept of shipping and handling in order to grow. It did so by testing the idea of shipping as an additional service and value-add rather than a necessary evil of online commerce.
After several important experiments, such as offering free shipping on orders of a minimum value, Amazon came up with the idea of bundling free shipping into a membership-based subscription product. Amazon discovered that customers were far more willing to pay up front to guarantee free shipping on their future purchases — and so Amazon went all in on the idea.
That idea became Amazon Prime.
Imagine if the product team tasked with solving a problem as enormous as consumer opposition to shipping costs had decided that minimum-order thresholds were the answer or had only enough time to run a few tests. Sure, it might have worked — but would it have become a service with more than 100M subscribers worldwide generating almost $18B in annual revenue?
Failing Fast Impedes Learning
Failing fast allows teams to move quickly and rapidly iterate on emerging ideas. But one downside of this speed of development is that it can encourage a culture in which product managers and their teams fail to learn from their failures. Over time, this can result in wasted effort and avoidable mistakes.
Rather than trying to “fail fast,” product managers should aim to learn fast instead. Failure is valuable only as a learning tool; if we fail to learn from our mistakes, we’re doomed to repeat them.
It’s also vital to quantify failure as much as possible. If we don’t validate our failures with hard data, the lessons we learn from a failed product or launch cannot help but be entirely subjective. Add in the natural biases we bring with us to work every day and you’ve got a recipe for disaster.
In the context of failing fast and products, there are three cognitive biases that can be particularly dangerous:
- Motivation blindness: A tendency to ignore factors that work against our ideas because of the time and effort we’ve invested in a project or product
- The bystander effect: The phenomenon by which we’re much less likely to take action to solve a problem if several people on our team have also noticed the problem
- Pluralistic ignorance: The assumption that everything is fine because nobody else has flagged a problem as a potential issue
To avoid these biases in our analysis, it’s crucial to adhere to a standardized process of learning from failure objectively. This means quantifying our failures so we’re basing decisions on reliable data, rather than flawed assumptions. To do this, implement a system by which failure can be objectively assessed. You should be able to articulate what went wrong, why it went wrong, what impact it had on the product, and what can be done to avoid similar failures in the future.
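One lightweight way to standardize that process is to capture every failure in the same structured record, so each review answers the same four questions. The sketch below is purely illustrative — the field names and example values are assumptions, not a standard template:

```python
from dataclasses import dataclass, field

# A minimal, hypothetical post-mortem record: forcing every failure
# through the same four questions keeps the assessment objective
# instead of anecdotal.
@dataclass
class FailureReview:
    experiment: str
    what_went_wrong: str           # the observed outcome, in metrics
    why_it_went_wrong: str         # root cause, not blame
    impact_on_product: str         # quantified effect where possible
    preventive_actions: list[str] = field(default_factory=list)

review = FailureReview(
    experiment="checkout-redesign-v2",
    what_went_wrong="Conversion dropped 4% vs. control",
    why_it_went_wrong="Test ran only 3 days; novelty effects dominated",
    impact_on_product="Rollout paused; one sprint of rework",
    preventive_actions=[
        "Set a minimum test duration",
        "Pre-register success metrics before launch",
    ],
)
```

The design choice here is simply that a shared schema makes failures comparable over time, which is what turns individual mistakes into organizational learning.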
Remember: learning is an active process, especially when it comes to evaluating failure.
Failure Is the Best Teacher
Despite the punk rock attitude of moving fast and breaking things, all agile methodologies, user-centered product design processes, and lean startup frameworks adhere to established scientific methods. In scientific research, failure is an inevitable outcome, but it’s useful only if scientists can learn from those failures. Anything else is just wasted time and effort.
For scientific methodologies to work effectively, ideas must be prototyped carefully; if they are not, we cannot hope to test our underlying assumptions accurately. If we don’t give ourselves time to fail thoughtfully, all we can expect to do is break things.
Originally published at blog.nomnominsights.com on January 31, 2019.