Seeking the Minimal Path to Value
You cannot dig a hole in a different place by digging the same hole deeper.
— Edward de Bono
In a famous experiment, people in favor of or opposed to capital punishment read and then assessed two studies, one that supported their position and one that undermined it. Both groups rated the study that supported their view as methodologically superior and felt the two studies together favored their stance (Lord et al., 1979). Both of the studies were fake. Though amusing, this shows that two groups who disagree can look at the exact same information and come away disagreeing even more. Because of cognitive bias, data do not always help generate alignment or drive better decision making.
About 2000 years ago, the author of 2 Timothy wrote, “having itching ears, they will pile up teachers to suit their own likings.” Today, psychologists refer to this as the “confirmation bias.” We tend to selectively entertain evidence that supports our views. We often reach the wrong conclusions because we latch onto the first explanation for something that makes sense to us and then defend it against later, better ones. As a result, we fail to generate and explore alternatives (Dawes, 2001). In this post we’re going to discuss a four-step process that can help counter such traps.
A “frame” is an invisible filter through which you interpret information. It’s your unstated beliefs that form the context of your decisions and interactions. In decision quality, “frame blindness” occurs when assumptions have not been explored and there is no alignment around the perspective and scope of the decision to be made. Edward de Bono (1970), the creator of lateral thinking, argues we’re accustomed to what he calls “vertical thinking,” where we focus on developing an idea by excluding others.
Lateral thinking, on the other hand, focuses on restructuring what we already think we know in order to open new possibilities. Before we even start considering alternative solutions to a problem we first need to generate and explore alternative framings of the problem itself (image adapted from de Bono, 1970).
Exploring a problem almost always entails interviewing people, such as users, customers, and/or stakeholders. A useful technique here is 5 Whys, from Toyota (Ohno, 1988). Take the example of user or customer requests. These are typically pretty surface-level, often requests for features or solutions. Unfortunately, a preferred solution does not, by itself, point to a real problem in the environment. What’s the underlying problem? What’s its scope? What assumptions are being made?
To apply 5 Whys effectively, don’t take the name too literally. The point is not to ask “Why?” five times but to gently invite someone to iterate on their answers so you can learn about the larger context and underlying needs. Sometimes revisiting an answer two or three times does the trick just fine. Also, asking “Why?” over and over can be annoying. Instead try saying something like, “Tell me more about that,” or, “What would that achieve for you?” or, “What obstacles would this remove?” In the example below the “interviewer” is essentially asking “Why?” without actually saying it.
Capturing and challenging assumptions involves not only interviews but also observing people perform key tasks and testing rapid prototypes. Challenging assumptions can also fuel innovation. As de Bono stresses, you can’t loosen a pattern without finding where it’s tied down. Dropping an assumption has an untethering effect, which opens possible new directions (image adapted from de Bono, 1970). Shifting frames or escaping assumptions leads to a different way of seeing. If temporary, de Bono says, the result is often humor. If permanent, the result is insight.
A useful practice related to 5 Whys is de Bono’s Why Technique, sometimes called “assumption smashing” (see Cave, 1996). The idea is to use escape techniques, like assumption reversal (restating an assumption as its opposite) or assumption dropping (eliminating an assumption altogether), to see how they change the overall picture. Unlike 5 Whys, which is about drilling down, this is about going sideways. For instance, in the 5 Whys example above, why does the dashboard need to be customizable or self-service? That’s an assumption. What happens if you drop it? Why does the team need to present to staff with graphs from this dashboard? That’s an assumption. What if you reverse it? What if they didn’t need to present to staff? What might that look like?
When exploring alternatives, there needs to be some sort of yardstick to compare them against. When you’re focusing on your output (the work you do) and not on whether that output is any good, you can end up sounding like the two ladies below.
Part of a problem frame is alignment on purpose. What are you actually trying to achieve? What is a good target outcome? An outcome is a measurable change in someone’s behavior or sentiment, effected to achieve a goal. Calling this out spotlights that, ultimately, the only way to create business value is to change someone’s behavior (Adzic, 2012). An added benefit is that behavior is observable, which means it’s always measurable.
Creating outcome targets can be challenging. People tend to get hung up on whether they’re “right.” The idea isn’t to be “right,” however, but to draw a line in the sand (Croll & Yoskovitz, 2013). You need some sort of gold standard or litmus test. By drawing a line in the sand and comparing your results against it, you enable yourself to learn your way forward. As Jeff Sussna points out, “value” is subjective and ever-evolving. Aligning on a target outcome and measure is a good way to establish a good-enough, agreed-upon definition of value. It creates a concrete context around the alternative solution ideas competing for prioritization (image adapted from Sierra, 2015).
Let’s say there’s a poor, hapless Product Owner, a Mr. Murgatroyd. He’s expected to take requests from customers or users and add them to a backlog. This is treated as a commitment. To be a real PO, and not a POINO (Product Owner in Name Only), Murgatroyd should stop assuming the requested feature or change is in fact the most valuable, feasible, and desirable thing to do. Instead he should focus on the underlying problem, its frame, and the assumptions being made. What is the request meant to achieve? What’s the intended outcome?
This helps take the focus off our egos. Using design thinking techniques like affinitizing and stack ranking during discovery and alignment also helps us avoid the advocacy trap. We’re all prone to overconfidence, and our confidence is often not predictive of accuracy. Outcomes can help with this. Out of 100 ideas, say our Mr. Murgatroyd has his team try 24 of them. Of these 24, eight achieve the intended outcome. Murgatroyd can see he achieved the intended outcome 33% of the time, but he doesn’t actually know how good he is at picking ideas, because he has no data for the ideas his team didn’t try. He shouldn’t be worrying about his idea-picking prowess anyway. He should instead keep the focus on discovering minimal paths to achieving target outcomes.
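The arithmetic behind Murgatroyd’s numbers is trivial but worth making explicit. A minimal sketch, using the figures from the example above:

```python
# Hypothetical idea-validation tally from the Murgatroyd example.
ideas_total = 100   # ideas generated
ideas_tried = 24    # ideas the team actually tried
ideas_hit = 8       # tried ideas that achieved the target outcome

hit_rate = ideas_hit / ideas_tried
print(f"Hit rate among tried ideas: {hit_rate:.0%}")  # 33%

# The untried ideas carry no evidence either way, so overall
# "idea-picking prowess" is unknowable from this data alone.
ideas_untried = ideas_total - ideas_tried
print(f"Ideas with no outcome data: {ideas_untried}")  # 76
```

The second number is the point: the 33% figure says nothing about the 76 ideas that were never tested.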
In their book Decisive (2013), Chip and Dan Heath discuss the fascinating research of Paul Nutt, who, they say, “may know more than anyone alive about how managers make decisions.” In a study of 168 executive decisions (often made by a CEO or COO), Nutt found that 71% were framed as whether-or-not decisions. A “whether-or-not” decision is the binary choice of, “Either we do X or we do not do X,” without generating and exploring a single alternative.
Whether-or-not decisions, Nutt has found, fail 52% of the time, whereas decisions that compare two or more alternatives fail only 32% of the time (see Nutt, 1999; 1993). Stated another way, executives realized they should explore alternatives to a single whether-or-not decision only about 30% of the time. Interestingly, another researcher found this is on par with the decision-making skill of teenagers (see Fischhoff, 1996). Whether CEO or hormonal teen, this teaches us one of the main lessons of decision quality: a decision can be no better than the best alternative under consideration.
If you don’t generate and explore alternatives, you don’t know how good or bad a decision is. Someone requests Feature X. You do some detective work and figure out it’s meant to achieve Outcome Y. You do some research interviews, maybe employing 5 Whys or the Why Technique discussed above. Maybe you explore the issue using Clean Language (more on that in another post). You take what you learn back to your team and generate alternative ways of achieving Outcome Y. What’s the larger context? There will often be a more direct path to value than the original request. Maybe you discover that redesigning the workflow achieves the outcome without software enabling anything. (Software enablement is itself a solution assumption.) Whatever is decided on, if it’s cheaper and faster than building what’s requested, but still achieves Outcome Y, you just created value.
If you need to set a rule, do it. For instance, Esther Derby suggests that for any problem you always generate and explore at least three possible ways to solve it (Derby, 2015). As they say in chess, “When you see a good move, look for a better one.” Set up three alternatives and compare and contrast them on desirability, viability, and feasibility. Interview some people. Rapidly prototype what’s important and test it.
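If it helps make Derby’s rule concrete, the three-way comparison can be sketched as a simple scoring grid. The alternatives and the 1-to-5 scores below are purely hypothetical, and a real comparison would rest on interview and prototype evidence rather than gut-feel numbers:

```python
# Three alternatives, each scored 1-5 on the three lenses mentioned above.
# All names and scores are invented for illustration.
alternatives = {
    "Build the requested feature": {"desirability": 4, "viability": 2, "feasibility": 3},
    "Redesign the workflow":       {"desirability": 3, "viability": 5, "feasibility": 4},
    "Buy an off-the-shelf tool":   {"desirability": 2, "viability": 4, "feasibility": 5},
}

# Stack rank by total score, highest first.
ranked = sorted(alternatives.items(), key=lambda kv: sum(kv[1].values()), reverse=True)
for name, scores in ranked:
    print(f"{sum(scores.values()):>2}  {name}")
```

Even a crude grid like this forces the conversation past a single whether-or-not framing.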
As we saw above, sometimes the minimal path to value will be something different from what’s requested. Sometimes it will be something very similar, just smaller. Consider slicing and cost of delay. Cost of delay is the opportunity cost of not doing something, per some unit of time. If you estimate that an idea will save $100k per week, then you should treat a 10-week delay in implementing it as “costing” $1m. If you’re ready to move on this item but someone’s holding it up and you can’t get on their calendar for three weeks, with this estimate in hand you can let them know their delay has a “price tag” of $300k.
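The cost-of-delay arithmetic above is just a weekly rate times a delay. A minimal sketch, with the figures from the example (the function name is mine):

```python
def cost_of_delay(value_per_week: float, weeks_delayed: float) -> float:
    """Opportunity cost of not doing something: weekly value forgone times the delay."""
    return value_per_week * weeks_delayed

# From the example: an idea estimated to save $100k per week.
print(cost_of_delay(100_000, 10))  # a 10-week delay "costs" 1000000
print(cost_of_delay(100_000, 3))   # a 3-week calendar hold-up: 300000
```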
In general, if you take a work item and find a way to slice it into something smaller or quicker with the same cost of delay, you just created value. Imagine an outcome has an estimated cost of delay of $500k per week, and a solution being considered to achieve it has an estimated duration of six months. If another solution that achieves the same outcome could be implemented within three months, that would save roughly $6m (about 12 weeks at $500k per week). Don’t forget to consider alternative outcomes. (In fact you may just want to estimate the cost of delay of outcomes, not features.) If feature, change, or story requests are coming in that are not tied to an outcome, by all means affinitize and see which ones map to your target outcome. Put the others in a parking lot, but consider what other outcomes these other items might help achieve. They may be of far more value. If they can be sliced into something that can be done quickly, they might generate big value fast.
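The slicing comparison works the same way: the value of the faster solution is the weeks saved times the outcome’s cost of delay. A sketch assuming four-week months, which is what the $6m figure implies:

```python
COD_PER_WEEK = 500_000   # estimated cost of delay of the outcome, $/week
WEEKS_PER_MONTH = 4      # rough planning approximation

slow_solution_months = 6
fast_solution_months = 3

weeks_saved = (slow_solution_months - fast_solution_months) * WEEKS_PER_MONTH
value_created = weeks_saved * COD_PER_WEEK
print(f"Choosing the faster slice is worth ${value_created:,}")  # $6,000,000
```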
In thinking about minimal paths, you should not assume that value is correlated with how long it takes to do something. Joshua Arnold recently shared the example of a small change to CSS that took minutes to do but had a cost of delay of £900k ($1.25m) per week. The thing is, he argues, this isn’t uncommon. We tend to assume that only “big, strategic initiatives” have a large cost of delay, when it’s often small items hiding in backlogs that have the largest value. As he puts it, it’s important to hunt for such “tiny wins.” Just because something isn’t a strategic priority, just because it isn’t a big, funded initiative, doesn’t mean big value isn’t leaking away by not doing it.
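Arnold’s “tiny wins” surface naturally if you rank work by cost of delay divided by duration, a heuristic he has popularized (often called CD3). The post doesn’t name this technique, so treat the sketch as an assumption; the backlog items and numbers are invented, with only the CSS figure echoing the example above:

```python
# Rank backlog items by cost of delay divided by duration ("CD3").
# Items are hypothetical: (name, CoD in $/week, duration in weeks).
items = [
    ("Big strategic initiative", 400_000, 26.0),
    ("Small CSS change",         900_000, 0.1),   # minutes of work, huge CoD
    ("Medium feature",           200_000, 6.0),
]

ranked = sorted(items, key=lambda i: i[1] / i[2], reverse=True)
for name, cod, weeks in ranked:
    print(f"CD3 = {cod / weeks:>12,.0f}  {name}")
```

With numbers like these, the tiny CSS change tops the ranking by a wide margin, which is exactly the point: small items hiding in backlogs can outvalue big, funded initiatives.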
In exploring problem frames, capturing assumptions, and generating alternative ideas, don’t fall into the trap of thinking these activities aren’t valuable and that all you need is “action.” Action without exploring alternatives is a high-waste proposition. Consider how much time people spend building out the details of a Gantt chart. This is all vertical thinking. An entire Gantt chart is often a single approach to a problem. It may not be a good approach, and it may not be the right problem to solve.
Finding the minimal path to value requires agility, and agility requires you to avoid the fallacy of “honoring sunk cost.” As Edward de Bono says, “You cannot dig a hole in a different place by digging the same hole deeper.”
Adzic, G. (2012). Impact mapping: making a big impact with software products and projects. UK: Provoking Thoughts Limited.
Cave, C. (1996). Assumption smashing. Optusnet. Retrieved on March 15, 2018 from: http://members.optusnet.com.au/charles57/Creative/Techniques/assump.htm.
Dawes, R. (2001). Everyday irrationality: how pseudo-scientists, lunatics, and the rest of us systematically fail to think rationally. Boulder, CO: Westview.
de Bono, E. (1970). Lateral thinking: creativity step by step. NY: Harper & Row, Publishers.
Derby, E. (2015). Seven agile best practices. Esther Derby Associates, Inc. Retrieved on March 13, 2018 from: http://www.estherderby.com/2015/10/seven-agile-best-practices.html.
Fischhoff, B. (1996). “The real world: What good is it?” Organizational Behavior and Human Decision Processes, 65: 232–48.
Heath, C., & Heath, D. (2013). Decisive: how to make better choices in life and work. New York: Random House, Inc.
Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation and attitude polarization: the effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37, 2098–2109.
Nutt, P. C. (1999). Surprising but true: Half the decisions in organizations fail. Academy of Management Executive, 13, 75–90.
Nutt, P. C. (1993). The identification of solution ideas during organizational decision making. Management Science, 39, 1071–85.
Ohno, T. (1988). Toyota Production System: beyond large-scale production. Portland, OR: Productivity, Inc.
Sierra, K. (2015). BADASS: making users awesome. Sebastopol, CA: O’Reilly Media, Inc.