The Best Book for Decision-Making

Sources of Power by Gary Klein

Rating: 10/10

Best Line: Most of the time when we have to make difficult choices, we do not fully understand what we want to accomplish.

Most Important Line: It is impossible to free ourselves from inconsistency, belief perseverance, and memory compartmentalization. Actually, there is one way to ensure that people find inconsistencies and discover connections: by keeping the number of beliefs small. If we could struggle through life with only a few beliefs, perhaps fewer than ten, then we might have a chance to purge inconsistencies.

We’re the only creatures on this planet that face hard decisions. Why? Because we’re the only ones who think about the future. We can predict. Also, hard decisions emerge because we are capable of wanting multiple things. A dog isn’t this way. A dog wants to eat. A dog wants to play. I, however, have often wanted to play and eat at the same time. Your dog eats what you give them. I carefully choose a meal based on what I think is going to give me maximum enjoyment. Sushi not steak. Indian not Thai. But it must be inexpensive. So I want it to not only be enjoyable but also cheap. Again, a dog doesn’t suffer this dilemma.

Sources of Power, by cognitive scientist Gary Klein, elevates writing about decision-making to something more than the tricks and theatrics of popular research. Hoary chestnuts like the Marshmallow Test are important insights into the power of delayed gratification. Or not. It’s probably just a good, memorable story that has been perpetuated over time. So it goes with a lot of popularized research.

But not here. In Gary Klein’s book, the broader understanding of how we decide is rooted in the context of the decider–namely, their cognitive and experiential capabilities. Which is to say that decision-making comes from many sources and you can improve by drawing from the right sources in the way you think. This book helps to show us how. My only concern in sharing this review is that I won’t be able to share everything. To do so would require me to literally write the book. It’s that rich. It’s an incredible work for anyone making decisions (i.e., everyone).

For example, consider this very important line from the book:

“We once believed that novices impulsively jumped at the first option they could think of, whereas experts carefully deliberated about the merits of different courses of action. Now it seemed that it was the experts who could generate a single course of action while novices needed to compare different approaches.”

This is a vital insight. Novices don’t just pick something. Especially in today’s world. This informs service design very deeply. Consider Amazon. Their rating/review system is catnip to novices. And we’re all novices when we’re shopping for new things we’ve never owned before. Coffee? I know what I want. I’m an expert in that. I’ve bought it hundreds of times. I can decide immediately. But a kayak? I must read eighteen reviews from fifteen websites and pore over the Amazon listings. I’m not sure what to do because I’ve never bought a kayak before. Oh, and maybe I shouldn’t. Maybe I should just rent one instead. No, I’ll buy. But only spend this much…

Genuine experts don’t do this. That doesn’t mean they make better decisions. It just means they can make a decision more quickly based on a clearer picture of what they want. They understand what works for them, they see it a little more objectively, and they understand the factors that weigh into it.

There’s a meta skill in this. If experts can make decisions more quickly and with more consistency and reliability, perhaps there’s a pattern to their thinking regardless of their field. Perhaps decision-making can be the skill rather than kayak appraisal. This is where algorithms come into play. Machine learning. Data structures and neural networks.

Maybe. But before we go there, consider a few types of decision-making processes outlined in the book.

  1. Singular evaluation. Here we judge options on their own merit. Do I go to the sushi place? I don’t feel like sushi. Do I go to the Moroccan place in Portland’s Alphabet district? Yes.
  2. Comparative evaluation. True to the name, we compare options. Do I get sushi here or drive to the Moroccan place? The drive sounds nice so let’s do Moroccan. (Notice that in this example the choice of what to eat has nothing to do with the actual food.)
  3. Optimization. What is the absolute best thing I can possibly get? Optimization is hard and we tend to save this strategy for big, infrequent decisions. But we also use it in situations where we don’t really know what we want or where there is a lot of information. We tend to think this is how good decisions are made.
  4. Satisficing. Coined by Herbert Simon, this is the approach where one finds the first option that works. Think “good enough”. It’s efficient and it’s the prevailing strategy in much of the literature I see on expert decision-making in management fields.

This is a good framework but it can be better. The reality is that the first two are really the processes for carrying out the last two approaches. Think of it this way:

Optimization? Use comparative evaluation.

Satisficing? Use singular evaluation.
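The pairing above can be sketched as toy code. Everything in this sketch is hypothetical: the options, the scores, and the “good enough” threshold are mine, not Klein’s.

```python
# Two decision strategies, sketched with made-up restaurant options and scores.
options = ["sushi", "thai", "moroccan", "steak"]
scores = {"sushi": 4, "thai": 7, "moroccan": 8, "steak": 5}

def satisfice(options, good_enough=6):
    # Singular evaluation: judge each option on its own merit and
    # take the first one that clears the "good enough" bar.
    for option in options:
        if scores[option] >= good_enough:
            return option
    return None

def optimize(options):
    # Comparative evaluation: score every option and pick the single best.
    return max(options, key=scores.get)

print(satisfice(options))  # thai: the first option that is good enough
print(optimize(options))   # moroccan: the best-scoring option
```

Note how satisficing can stop early while optimizing must touch every option, which is exactly why the former is cheap and the latter gets saved for big, infrequent decisions.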

Here’s why this matters: we shift too much between both approaches. This happens in just about every long, frustrating meeting I’ve ever attended or led. Because we don’t deliberately identify our decision strategy going into the meeting, we waffle between the two until something just “feels” right. Or until we’re tired. Here’s an example:

Executive A: “Let’s find a good option.”

Analyst: “You got it boss!”

One day later …

Analyst: “Here’s a good option.”

Executive B: “Hmm … but is this the best option?”

Executive A: “Good point. Find the best option.”

Analyst: “You got it boss!”

Four months later …

Analyst: “Here’s the best option.”

Executive A: “Sounds good. Let’s do it.”

Executive B: “I don’t know. So much time has passed, let’s reconsider what we really want.”

Executive A: “I just want a good option.”

Executive B: “Well, why not go back to the good option then? I mean, why overthink it?”

Executive A: “Sounds good. Let’s do that. Go with the good option. Let’s ignore everything that we asked our analyst to develop over the past four months.”

Analyst: “I hereby tender my resignation.”

How many times has this happened? Thousands of times? We have unemployed analysts all over this great land and they’re all jobless because they cannot deal with our inconsistent thinking.

But the lesson from Gary Klein is that we can, and should, become deliberate about the way we make our decisions. Choose a strategy, articulate it, and stick to it. Experts can do this because the strategy is ingrained in their intuition. It’s second nature for them. With enough exposure, we can have this, too.

Hard decision with a lot on the line? Use optimization via comparative evaluation. Produce a report.

Everything else? Satisfice with singular evaluation. Produce a memo.

Try that for a month. No switching from one to the other. No deep exploration on anything but the most critical items. Be deliberate about it. I think it will be easier and no less effective.

One Of The Few Good Reasons To Plan

In a few chapters, Gary Klein completely changed the way I think about planning. I’m a habitual planner (a professional one, too) but I usually hate it. This gives me a lot of neuroses but the skepticism also makes me better at it. Especially after reading this:

“Sometimes planners engage in lengthy, detailed preparations that quickly become obsolete, yet they continue with the same process, again and again; it appears that the function is to help them all learn more about the situation and to calibrate their understanding rather than produce plans that will be carried out more successfully.”

Note the first word of the quote. Sometimes? How about practically all the time. We produce detailed items that are out-of-date at the point of publication. This gets to the criticism of “plans just sitting on a shelf”. Lengthy, detailed plans are a dead-accurate sign of uncertainty and inexperience. Not necessarily of the planner; the inexperience is often found in the participants involved. So planning is a great tool to help people deal with new issues. Not as a means of control, however.

Gary Klein rightly recognizes plans aren’t for control but for simulation. A detailed plan produces a single permutation of all possible outcomes. One that is highly unlikely to be realized. This is why people critique the artist’s depiction of a future city landscape by calling it the “artist’s deception”. That’s not fair but it’s pretty funny. The beautiful images simply imagine, simulate, and give a sense of feel. They don’t, and can’t, predict.

As Klein puts it:

“On the battlefield, plans are vulnerable to the cascading probability of being carried out successfully; many decision makers will feel confident, whereas the actual probability of carrying out the plan is just over 50 percent, since the probabilities multiply.”
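The multiplication Klein describes is easy to verify. As a hypothetical sketch, assume a plan of 12 steps, each with a 95 percent chance of going as intended (both numbers are mine, chosen to illustrate, not Klein’s):

```python
# Compounding per-step success probabilities across a multi-step plan.
steps = 12       # hypothetical number of steps in the plan
p_step = 0.95    # hypothetical chance each step goes as intended

p_plan = p_step ** steps
print(f"Chance the whole plan goes as intended: {p_plan:.2f}")  # ~0.54
```

Each step looks nearly certain, yet the plan as a whole is barely better than a coin flip: Klein’s “just over 50 percent.”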

Mike Tyson said it better: “Everyone has a plan until they’re punched in the mouth.” A deep truth, of course, and more (painful) evidence that planning is not prediction, shouldn’t be treated as such, and that every plan has a very obvious, very powerful threat to its success. That threat is called reality.

I know that probably sounds academic but it really justifies the iterative approach. I’ll explain this more at a later date but a powerful aspect of modern approaches, especially in technology and software development, is the abandonment of “waterfall” styles of project planning (which were far more about predictable consistency) for Lean styles, which are more about continual improvement.

The Stories We Want

Consistency turns out to be a really difficult aspect of our thinking. We want consistent logic even if the logic isn’t necessarily pointing us to truth. As Klein puts it:

“Logic is indifferent to truth. The goal of logic is to root out inconsistent beliefs and generate new beliefs consistent with the original set. Logic does not consider whether our beliefs are true. A logical person can be wrong in everything he or she believes and still be consistent.”

I suffer this from time to time. It’s a minor hobby. I’ll weave a beautiful tapestry of logic that explains a wonderful set of beliefs that are completely inaccurate. You do this, too.

Consider Flat Earth logic.

“If the world were round, a plane flying in the air would have to keep its nose pointed down or else it would fly into space. But when you fly, the plane is level. Because the Earth is level. Because the Earth is flat.”

Sounds reasonable, right? Logical. Consistent. So it must be true.

When you put so much effort into constructing logic of this sort, you want it to be real. It’s a story you’ve created for yourself. And we love the narratives we create. But also, as Klein explains:

“Rigor is not a substitute for imagination. Consistency is not a replacement for insight. Most of the time we would rather have consistency than leave ourselves open to the problems created by inconsistency.”

We would rather have consistency because consistency follows logic and logic allows for prediction and prediction allows for certainty and everyone loves certainty. When you can claim certainty, you can also claim you are right. And everyone else is wrong.

Future book reviews will build on this notion but, for now, let’s just remember that people really hate to be wrong about anything. This is a massive flaw. Consider the opposite trait which, according to Marc Andreessen, is embodied in the world’s best hedge fund managers. Here’s a quote from a fantastic Tim Ferriss podcast:

“I’m one of the few people who will openly admit I love spending time with hedge fund managers, I think they’re awesome. They’re fantastic people and they’re the most open minded people I know. They love when you tell them that they’re wrong. They get all excited. Their eyes light up. They’re like, “Why? Why do you think that?” And they’re genuinely interested. Because if you’re right and they’re wrong, they will change their minds. And they’re hedge fund managers, so they’ll literally reverse the trade. If they were long a company, they’ll flip around and go short.”

Hedge fund managers don’t want logic unless it serves them in finding truth because finding truth is how they find better trades, mo’ money, etc. Short of having the truth, they will of course rely on logic. But to Andreessen’s point, they’ll gleefully abandon the logic at a moment’s notice.

Back to Klein, the problem here is rooted in a classic human disposition towards the narrative fallacy. There are stories that we tell ourselves, stories through our plans and desired predictions, that lead us to do things that turn out to be bad decisions. The phrase “It seemed like a good idea at the time” is a dead giveaway that someone has suffered from the narrative fallacy.

The Premortem

This is my favorite concept from the book. The “premortem” is a technique to use in any decision-making process and is probably the single best hedge against downside uncertainty that any single individual can use. As a mental model, this is something of a leverage point in the system of all other models. Here’s Klein’s illustration of the idea:

“Simulate that [your] plan has been carried out and, six months later, it has failed. Why did it fail?”

This is the premortem. It keeps us from solely telling the story of how everything will succeed because we love our plan and our story and we worked hard and darn it everything should be perfect now. There’s no reason to not love your plan or your story. There’s also no reason to not imagine what happens if it fails. Temper your biases. It’s the path to better decisions.

Conclusion

There is so much more to this book. There always is. I hardly even touched on the actual “Sources of Power”. Much more to consider. I can’t recommend this book enough.

Buy it on Amazon.


Mental Models

  • More often than not, a difficult choice comes from a lack of understanding what you want.
  • Novices decide slowly, comparing lots of approaches. Experts decide faster by generating a single course of action through experience.
  • There are two basic decision approaches: optimization and satisficing.
  • There are two basic decision processes: comparative evaluation and singular evaluation.
  • Hard decision with a lot on the line? Use optimization via comparative evaluation. Everything else? Satisfice with singular evaluation.
  • Expand the experience base if you want to make decisions more quickly.
  • Planning is mental simulation.
  • Mental simulation is vulnerable to narrative fallacies.
  • Premortems are a hedge against narrative fallacies.
  • Develop decision scenarios. No better way to understand your desires and the system you’re working within.
  • Complexity is neither fragile nor robust. It can be either. It can be a sign of sophistication or a sign that the plan is likely to break down.
  • To solve an ill-defined problem, clarify the goal even as you try to achieve it, rather than keeping the goal constant.
  • Flying behind the plane–the condition of overwhelm where a person cannot generate expectancies.
  • An expert is someone who can critique and correct themselves quickly.
  • Generate feasible options quickly.
  • Rigor is not a substitute for imagination.
  • Consistency is not a replacement for insight.
  • The antigoal. Goals are wanted outcomes. Antigoals are unwanted outcomes.
  • A good story has these necessary features: drama, empathy, wisdom, plausibility, consistency, economy, uniqueness.
  • Working memory. This is the ability to hold information for brief periods of time. Once resolved, the memory is deleted.
  • Long-term memory. This is the ability to store information to retrieve later. The key is storing such information with multiple members in case one leaves.
  • Limited attention. Teams can only discuss/work on one thing at a time.
  • Perceptual filters. Teams do not have direct experience but must depend on secondhand reports that can introduce inaccuracies.
  • Learning. Teams need to learn in many ways, such as acquiring new procedures, discarding inefficient behaviors and figuring out how to become more effective.
  • We prefer consistency rather than the “problems” of inconsistency.
  • Describe your intent with as little information as you can.
  • Hold no more than ten beliefs.
  • A decision is considered poor when the knowledge gained would have led to a different decision in a similar situation. This is hindsight bias and regret.
  • Think probabilistically. It’s the best way to stay nimble. Anticipate failures and the unexpected by scoring (guessing) the probability you’re right. It’s never 100%.
  • Don’t just accept small errors. Make them more visible.

Originally published at strivingstrategically.com on August 17, 2018.