Spend failure wisely
A common language for evaluating risk in software design and development
Failure is an evergreen topic in tech, with every thought leader touting the benefits of taking risks and failing early in order to learn and iterate quickly. When people laud failure, they’re really talking about experimentation and iteration — exploring approaches for the sake of learning more about their audiences or developing new tools. But not every risk is worth taking. Time constraints, budgets, stakeholders, and customers can make failure a limited resource. The onus is on teams to make sure that they spend their failure wisely.
As one example of failure that could have been avoided, Gap.com redesigned their mobile shopping cart experience in Winter 2018. Visually, it was a much-needed overhaul, but it created serious usability challenges. In particular, it was too easy to disregard the address choice and send packages to the wrong destination. Having recently switched jobs, I accidentally sent a big order to my previous employer's office. When I went back to the site to delete the old address and choose a different default, I found I couldn't do so on either the desktop or mobile site.
I recently noticed that I can now edit my addresses on the mobile site; I had 17 going all the way back to 2006. However, the checkout design remains essentially the same. A month ago, a friend of mine made the same mistake I did, sending several packages to an old address in a different state. The order couldn't be cancelled, and she had to ask the new homeowner to ship the packages to her new address.
Without interviewing the team at Gap, it’s hard to know what situation precipitated this design. Still, a human-centered designer could have spotted most of these failure points with a few minutes of critique and proposed a better experience. While it might have cost more to change the UI, or might have compromised the visual aesthetic, better usability would have reduced the burden of incorrect deliveries and angry customers, likely resulting in a lower overall cost and an improved reputation. Teams can reduce these avoidable mistakes by learning to fail wisely.
Failing wisely means learning efficiently
Failing wisely doesn't just mean learning from failure; it also means taking easy steps to ensure that failure is useful. It means choosing risks that help you learn things you don't already know. Obviously, few would willingly promote risks like the Gap example that almost certainly lead to trouble. Sometimes teams take such risks unintentionally when people in key roles are missing or when communication isn't clear across roles. It can also happen when the team prioritizes a flashy UI or other less valuable aims over user satisfaction.
If teams want to fail wisely, they have to have the right people at the table, and every member has to be accountable for creating a satisfying user experience (hat tip to Alan Cooper). Teams must use their collective experience and knowledge to reduce obvious failure points, so that an approach can fail in new and surprising ways, or so that it can fail in areas where there's known uncertainty. While it's not always possible to achieve, striving to avoid failing in predictable ways improves a team's chances of learning what's most critical to improving the product. It allows teams to spend more of their effort on the highest risks.
This highlights the challenge of communication across roles. A common language could help team members frame their perspectives meaningfully in areas of disagreement, misunderstanding, and uncertainty. Cross-training can also help, but sometimes it's not possible or feasible within the available time. Therefore, I share my own internal categories for considering risk and failure: predictable, expected, and surprising. These three categories have grown out of the human-centered design process and my background in designing physical and digital interactive experiences.
Three types of failure
You know it will fail and how
This is needless waste — a failure that one or more team members can identify before it happens. They can anticipate the consequences based on specialized knowledge and past experience. Predictable failure is not useful for learning: the outcome is already known, and it can prevent or obscure more useful learnings. Teams don't strive to fail this way, but it can happen when key expertise is missing, as a result of misunderstandings, or when the team isn't aligned on user satisfaction. Teams can also choose to implement a predictable failure as a compromise if they judge the impact minimal, acceptable, or unavoidable.
You know it will fail but not how
This is purposeful failure — a failure that one or more team members expects to happen because they knew they didn’t know enough about it to form a reasonable hypothesis. Expected failure is useful because it can help define areas of uncertainty and clarify the hypothesis. The only way to shed light on an expected failure is to pursue further discovery or try out prototype solutions with the target audience.
You had no idea it would fail
This is a failure that the team didn’t expect and couldn’t predict. Surprising failure can upend the whole hypothesis and be frustrating, but it’s necessary for progress. It’s useful because it can help answer questions the team didn’t know to ask and identify areas that need further attention.
The language here (e.g., ‘know’ it will fail) is purposely strong — predictable failures should be really basic and obvious from at least one role’s perspective. Making the language strong separates it cleanly from expected failure. If something is even a little ambiguous, then it should go in the “expected” category. Though there may be rare examples of predictable failures not failing, it seldom makes sense to use a predictable failure as a starting point.
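The three categories above work like a simple triage rule: the strong "know" test decides whether a risk is predictable, expected, or surprising. As a quick sketch (hypothetical Python, not anything from the original article), the rule might look like this, with the ambiguity clause routing anything uncertain into the "expected" bucket:

```python
from enum import Enum

class FailureType(Enum):
    PREDICTABLE = "predictable"  # you know it will fail and how
    EXPECTED = "expected"        # you know it will fail but not how
    SURPRISING = "surprising"    # you had no idea it would fail

def classify(know_it_will_fail: bool, know_how: bool) -> FailureType:
    """Triage a risk using the strong 'know' test.

    If the failure mechanism is even a little ambiguous (you suspect
    failure but can't name how it will happen), the risk belongs in
    EXPECTED rather than PREDICTABLE.
    """
    if know_it_will_fail and know_how:
        return FailureType.PREDICTABLE
    if know_it_will_fail:
        return FailureType.EXPECTED
    return FailureType.SURPRISING
```

The point of the strong wording is visible in the branching: PREDICTABLE requires both conditions, so only failures that are basic and obvious from at least one role's perspective land there.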
Failure in experience design
Predictable failure can be of any size, from confusing microcopy to whole sections of the product that don't make sense or work properly. Designers specialize in recognizing these types of failures when it comes to human behavior and contextual knowledge. Once the failure is recognized, the team can move on to more interesting questions, such as those shown in the following table.
It won't be possible to eliminate all predictable failures, and in some cases the team may have little choice due to the circumstances. In these cases, gauging the size and potential impact of these failures can help teams navigate the decision-making process and make better-informed tradeoffs. Identifying the failure points also allows teams to account for them when gathering feedback. Otherwise, the predictable failure could be confused with other types of failure.
For example, failing to make an interaction clear could lead people to miss the feature entirely. It might be hard to tell whether they missed the feature because of the poor interaction design (predictable), or because the word on the button is the wrong choice (expected). Without a clear understanding of the cause of failure, it’s hard to find a solution.
In another example, if the user misses a link because it’s improperly styled (predictable), they might miss the page with content that you wanted feedback on (expected). Acknowledging the predictable failure allows the designer to compensate and deliberately seek valuable feedback on the content.
In some cases, an experience with predictable failures may be a necessary midpoint on the way to a more refined solution. However, that midpoint shouldn’t be confused with an opportunity to validate predictable failures.
Using this failure framework, teams can anticipate and describe the predictable and expected failure risks from each role’s perspective. Focusing up-front efforts on addressing predictable failures allows teams to use more energy in the design and development process on illuminating the expected and addressing the surprising failures that arise.
There will be some overlap across roles, but in general, designers will evaluate based on what they know about human behavior, perception, and the people using the product. Product owners will evaluate based on what they know about the business priorities and target outcomes. Developers will evaluate based on their knowledge of the technology stack and system architecture.
This is where a common language can come in handy — a solution that seems simple from a development or product standpoint may violate basic usability heuristics (essentially a list of evidence-based predictable failures) and cause the designer to raise a red flag. If the other members of the team aren’t familiar with the research behind heuristics, they might categorize the risk as an expected or a surprising failure (let’s wait and see). Being able to describe the approach as a predictable failure (let’s not reinvent the wheel) circumvents the waste and allows the team to refocus on what they can do to fail more wisely.