The Best Book To Improve Your Thinking

Norm Wright
Striving Strategically
14 min read · Dec 15, 2018

Thinking, Fast and Slow

By Daniel Kahneman

Rating: 10/10

Best Line #1: Laziness is built deep in our nature. A general “law of least effort” applies to cognitive as well as physical exertion.

Best Line #2: You must learn to mistrust your impressions. Learn to recognize situations in which mistakes are likely and try harder to avoid significant mistakes when the stakes are high.

Thinking, Fast and Slow is a book that explains every bad decision I’ve ever made, every perceptual error, every misread. I’m humbled and more empowered every time I read it. Our author, Nobel-winning psychologist Daniel Kahneman, has given us a real gift here. In my attempt to share this gift, I offered four articles this week on some of the major concepts in the book. They can be found here:

Towards A Balanced Way Of Thinking

A Chicken’s Body Temperature is 144 Degrees

A Bad Outcome Doesn’t Mean A Bad Decision

Avoiding The Narrow Frame

The insights offered throughout the articles and this review will feel unimpressive at times. You’ll read something like the following line from Kahneman …

We are far too willing to reject the belief that much of what we see in life is random.

… and think “Yeah? So?” A lot of things are random and we don’t want to believe that. Not exactly a shocker. But Kahneman explains why:

Our predilection for causal thinking exposes us to serious mistakes in evaluating the randomness of truly random events. But we are pattern seekers, believers in a coherent world in which regularities appear not by accident but as a result of mechanical causality or someone’s intention.

The point, in this case, is that we have a serious bias toward cause and effect. We try to find it in everything. We are mechanics and wish to see the world as a machine. I heartily admit the tendency in myself. It’s why I love systems-thinking so much.

But it’s also why this book is so important. Kahneman wants us to continue with this perspective. He wants us to be as rational as possible. He wants us to make the best decisions, take the best actions. But to do so, he doesn’t recommend a whole discipline à la systems-thinking. Instead, he offers a better way to practice our cognition through those disciplines.

He gives us a better way to think. It’s the ultimate metaskill, a leverage point on par with mindfulness training or cognitive behavioral therapy or the attitudes found in our past Feature of the Week, Factfulness (book review here).

Learning How To Think

The book is so rich in specific tactics that I find myself wanting to quote the entire thing. There’s so much to share. So please go buy the book. Until you do, consider the following:

In general, a strategy of deliberately “thinking the opposite” may be a good defense against anchoring effects because it negates the biased recruitment of thoughts that produce these effects.

Every time I’ve invoked the idea of “thinking the opposite”, I’ve improved my thinking. So consider this idea as a tool, a thought technology, for any of the deep System 2 thinking you do. Trying to decide on an itinerary for an upcoming vacation? Well, what would you lose by not developing an itinerary? Or what if you choose just one activity a day? Or perhaps a couple of days heavily planned and a couple of days lightly planned?

This simple technique is a fantastic way to free yourself when your thinking gets stuck. And our thinking gets stuck quite a bit by a number of effects, including, as mentioned in the quote above, the anchoring effect. Kahneman, along with Amos Tversky, documented this and many other cognitive biases that can freeze our minds or lure us toward suboptimal decisions. Here’s more on the anchoring effect:

You are always aware of the anchor and even pay attention to it, but you do not know how it guides and constrains your thinking because you cannot imagine how you would have thought if the anchor had been different (or absent).

This is a very deep point and researchers have debated it for a while. What Kahneman is saying here is that the anchor is the introduction of an idea. Regardless of what you want to think or say in response, your response is still affected by this anchor. If someone says “I love you”, you can either say it back, which is a lovely expression, or not say it, which is an unlovely expression. The point is that someone has just put you in a binary position. Either you love them, too, or you are a horrible monster incapable of affection. All because they said that particular thing to you first.

See what I mean?

The anchoring effect is well-known in negotiation. It’s the reason many job announcements feature the salary in the advertisement. They want to set the anchor, the reference point, as soon as possible. If you still apply for the job, you all but give them tacit agreement to work for that wage. And even if you have no intent of accepting the job at that particular advertised salary, the negotiation will still be centered on that initial amount.

There’s an interesting trick in this, too. An anchor has more power when it is more specific. So if a car salesman lists a price at $24,795, customers will negotiate closer to this initial number than if the car is listed at $25,000. The specificity of the number somehow lends more credence, so much so that customers feel intimidated or unsure about negotiating too far down. The same is true for houses, too.

Does this mean that you should sell goods at very specific prices? Perhaps. If you don’t want to negotiate.

But let’s borrow our earlier technique from Kahneman and ask ourselves: what if we did the opposite? What if, to sell a house, you listed a price that was less specific but also significantly higher than what you would accept? So instead of a house listed at $410,595, with $400,000 as your acceptable minimum, you could list at $440,000 and luxuriate in the flexibility?

Chances are, according to the research, you could get a better offer this way.
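To make the pricing intuition concrete, here is a toy Python sketch of anchoring and adjustment. It is my own illustration, not a model from the book: the list prices echo the example above, but the adjustment ranges are invented purely to show the pattern described here (precise anchors inviting smaller downward adjustments than round ones).

```python
import random

def simulate_offers(list_price, adjustment_range, n=10_000):
    """Average offer when buyers adjust down from the anchor by a random fraction."""
    offers = [list_price * (1 - random.uniform(*adjustment_range)) for _ in range(n)]
    return sum(offers) / n

random.seed(42)

# Hypothetical assumption, for illustration only: a precise anchor invites
# smaller downward adjustments than a round (but higher) one.
precise_anchor = simulate_offers(410_595, adjustment_range=(0.01, 0.04))
round_anchor = simulate_offers(440_000, adjustment_range=(0.03, 0.10))

print(f"Average offer off a precise $410,595 list price: ${precise_anchor:,.0f}")
print(f"Average offer off a round   $440,000 list price: ${round_anchor:,.0f}")
```

Again, the numbers are made up; the only point is the mechanic: the final offer is whatever the anchor allows, minus an adjustment that is usually too small.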

By the way, in thinking about anchoring effects, notice that I listed a house price in the $400,000s. A house valued at that price sounds wildly expensive to some readers and wildly inexpensive to others. All because you’re anchored to your local conditions.

The Press Secretary In Our Minds

I’m going to paraphrase something I heard Seth Godin say: every one of us has a little press secretary in our minds. This person can make our decisions sound sensible, our actions reasonable, and they apply this spin on everything we do in a way that fools others and, more importantly, ourselves. For more on this delightful idea, see Seth Godin’s interview with Tim Ferriss.

This press secretary rationalizes our every move, even the most indulgent ones. After all, we have our reasons! Even if we weren’t aware of them beforehand!

And think about the effect this press secretary has on us. We don’t trust the White House press secretary. Not completely. But we trust the one working in our heads? Kahneman argues we should recognize our rationalizations for what they really are: cognitive twitches, preloaded biases, and incomplete heuristics. I think he’s right. But I cannot rationalize why.

To understand what happens when we rationalize our choices is to understand the real workings of our minds.

No one ever says “I argue this point because I’m prone to affect heuristics.” Nor does a person say “Hindsight bias drives my armchair quarterbacking in this moment.” But the sooner we can label these things, the sooner we can understand why we think the way we do. This awareness can help us be less attached to our press secretaries and less prone to the biases that compel them.

Consider this guidance from Kahneman:

The affect heuristic is an instance of substitution, in which the answer to an easy question (How do I feel about it?) serves as an answer to a much harder question (What do I think about it?).

This might not sound like much, but most people’s views on most issues are nothing more than an expression of what they feel. If they had a chance to think about the issue instead, by deliberately engaging System 2, they would likely arrive at something very different.

The so-called debate over the border wall is more of a shouting match over values. The broad majority of people who want a border wall feel “America First”; they feel many other things, too. Meanwhile, the broad majority of people who do not want a border wall feel something akin to “Humanity First”. This is not to say that pro-border wall people don’t believe in humanity; they do. As noted above, they feel many things. Which is precisely the point. The affect heuristic cancels out thought; it substitutes feelings for thoughts, a great big stew of feelings, and those feelings are articulated in a way that projects emotional resonance instead of logical consistency.

It ain’t got to make sense for me to feel it.

For Kahneman to be able to highlight this effect, give it a name, and give us a way to manage around it is incredibly helpful. This fits all the tactics shared in our review of the book Difficult Conversations (book review here) but does so from a more clinical, research-based standpoint.

It Seemed Like A Good Idea At The Time

For another example, consider the narrative fallacy. We covered this to some extent with Gary Klein’s fabulous book Sources of Power (book review here). Kahneman provides a reassertion of the fallacy at work and dives into the reasons why:

Paradoxically, it is easier to construct a coherent story when you know little, when there are fewer pieces to fit into the puzzle. Our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our own ignorance.

To illustrate, let me offer up every instance of massive embarrassment or painful regret I’ve ever suffered. I have told terrible jokes, pulled absurd stunts, taken massively flawed approaches, and otherwise failed spectacularly at many things. From romance to parenting to career to conversation to dumb purchases at Costco. Every instance is a case study in the narrative fallacy.

Specifically, every instance involved me composing a long, overwrought story of how what I was doing would “seem like a good idea”. This story showed that doing X or Y would lead to some fantastic promised land of success. Even the really bad jokes I’ve told, the ones that no one laughed at, were instances where I could easily envision everyone laughing. They’re going to love this, I think. And the story is so believable, so real, that I’m shocked when they don’t.

As Kahneman explains, the narrative fallacy is the stuff of inexperience. We build fantastic stories based on very little information, connecting dots in a way that makes a decent but inaccurate picture. Not only the stories of what we think will happen but also what already has happened.

Inaccurate stories built on insufficient information? That captures just about every conspiracy theory ever constructed. They are the product of the narrative fallacy. Driven by our tendency to seek patterns where there probably aren’t any.

On that last point, the next time someone makes some grandiose claim on social media and says “Wake up, sheeple!”, think of the narrative fallacy and the intoxicating power it has, particularly in that moment for that person. As Kahneman puts it:

Declarations of high confidence mainly tell you that an individual has constructed a coherent story in his mind, not necessarily that the story is true.

The Algorithm Over The Expert

Society is becoming more aware of algorithms, mostly because attention is drawn to scary ideas about them: filter bubbles in social media, the immersive power of auto-play features on YouTube, the biases in various recommendation engines. A fine book on the topic, Weapons of Math Destruction, paints an important yet bleak picture of the harm that occurs when the wrong formulas run amok.

All the same, back in 2011, Kahneman wrote some things about algorithms and model-thinking that need to be remembered, embraced, and expanded. To say it briefly, there is tremendous power and potential in these algorithms that, when we get them right, could genuinely serve as a force multiplier for the quality of our decisions. Borrowing heavily from the work of Paul Meehl and Philip Tetlock (of fox and hedgehog fame), he offers the following:

Why are experts inferior to algorithms? Experts try to be clever, think outside the box, and consider complex combinations of features in making their predictions. More often than not, this reduces validity. Simple combinations of features are better.

Whatever expertise I have, I’ve seen it ultimately toppled by Occam’s Razor on more than one occasion. I get too fancy with my thinking. After all, I’m an expert! I must show it! Common sense is, well, common. And can’t apply here if my expertise is what’s needed. Similarly, from our author (emphasis added):

Another reason is that humans are incorrigibly inconsistent in making summary judgments of complex information. When asked to evaluate the same information twice, they frequently give different answers.

It’s not just children who give inconsistent answers. We are all subject to framing issues. We seldom get the same information in every circumstance we face. More often than not, there are trivia and narratives under every decision that color it and make it feel unique when it isn’t. “This time is different.” We give different answers because we think we see different situations. Or rather, we just feel different from one moment to the next. So what do we do about it? Kahneman suggests the following:

To maximize predictive accuracy, final decisions should be left to formulas, especially in low-validity environments.

High-validity environments are those with regular feedback and stable regularities; low-validity environments lack them. Poker is a high-validity environment. Medical practice, particularly in emergency rooms, is a high-validity environment. All of which is to say, there are some predictions you can trust and some you cannot, regardless of the expertise. As Kahneman explains:

When do intuitive judgments reflect true expertise? When is intuition actually a good, powerful thing? There are two basic conditions: an environment that is sufficiently regular to be predictable; an opportunity to learn these regularities through prolonged practice. When both these conditions are satisfied, intuitions are likely to be skilled.

Coincidentally, these are also the precise environments where data can be gathered, analyzed, and eventually developed into a formal version of the expert’s intuition that will work even better than the professional.
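What might such a formal version look like? Here is a minimal sketch in the spirit of the simple, consistently applied formulas Kahneman describes; the loan-screening scenario, feature names, and weights are my own hypothetical illustration, not anything prescribed by the book.

```python
# Minimal sketch of a "simple combination of features" predictor.
# The features and the scenario are hypothetical illustrations.

def standardize(values):
    """Convert raw scores to z-scores so each feature carries equal weight."""
    mean = sum(values) / len(values)
    std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5 or 1.0
    return [(v - mean) / std for v in values]

def equal_weight_score(candidates, feature_names):
    """Score each candidate as the plain average of its standardized features."""
    columns = {f: standardize([c[f] for c in candidates]) for f in feature_names}
    return [
        sum(columns[f][i] for f in feature_names) / len(feature_names)
        for i in range(len(candidates))
    ]

# Hypothetical loan applicants, scored the same way every single time.
applicants = [
    {"income": 52_000, "years_employed": 3, "on_time_payments": 0.91},
    {"income": 78_000, "years_employed": 7, "on_time_payments": 0.99},
    {"income": 61_000, "years_employed": 1, "on_time_payments": 0.80},
]
scores = equal_weight_score(applicants, ["income", "years_employed", "on_time_payments"])
for person, score in zip(applicants, scores):
    print(person, round(score, 2))
```

The value isn’t in the particular features; it’s that the rule combines a few cues the same way every single time, which is precisely the consistency humans can’t maintain.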

This is why self-driving cars are a thing. The act of navigating the environment in a car can be reduced to a variety of cues and indicators that generate nearly instantaneous decisions. When applied to the proper algorithm, you get a formula that performs better than any regular driver. You get autopilot programs that are as effective as (and far more enduring than) airline pilots. As explained by Kahneman:

Physicians, nurses, athletes, and firefighters face complex but fundamentally orderly situations. The accurate intuitions that Gary Klein has described are due to highly valid cues that the expert’s System 1 has learned to use even if System 2 has not learned to name them. In contrast, stock pickers and political scientists who make long-term forecasts operate in a zero-validity environment. Their failures reflect the basic unpredictability of the events they try to forecast.

So when should we trust experts? Honestly, I’m not sure. For one, I trust no expert who stakes their expertise on forecasting. I don’t want stockpickers telling me what to buy or sell. What I want instead are investors who can explain what is working and why. And how it is similar to what worked, or didn’t, in the past.

Coincidentally, this is the reason Ray Dalio’s book Principles is the best view I’ve ever had into genuine expertise in an unpredictable (i.e. “low-validity”) environment. It does not provide prediction. It provides story. It searches for and tests patterns against a marvelous attitude of skepticism and hardcodes those patterns against history to find triggers and cycles. This manifests in Dalio’s second book, Big Debt Crises. Neither book is perfect and I’m eager to feature them both here some day. Until then, the key point of expertise, I think, is as follows:

The expertise we find in high-validity environments is very predictive and, importantly, soon to be replicated in software (in many, but not all, fields).

The expertise we find in low-validity environments isn’t predictive; it’s explanatory. It can’t be replicated in software.

So when it comes to politics, stock markets, education, strategy, management, leadership, human behavior, and other such things, you can spot the expert by their ability to explain the situation with historical parallels, conceptual frameworks, and research-based models. It’s fun to look for those experts because, when you find more than one, you’ll find they each have different explanations and different theories. Which is refreshing because you’ll then see that no one really knows anything. It’s all a matter of style in the end. Outside the predictive environments, that is.

By the way, much of what I’m writing here is certainly derived from Kahneman’s work. But also, as a way to point to other, equally-brilliant sources, please seek out everything being produced by the writer/thinker/nomad extraordinaire Kathryn Hume. She has helped me understand a lot of this, too. I can’t recommend her enough.

Conclusion

I feel some frustration in this book review. Once again, I cannot adequately capture all the beauty and insight that our author has to offer. I’ve not even mentioned prospect theory, loss aversion, or diminishing sensitivity. And what about reference-class forecasting? Not a peep.

It’s been five days of study and yet we’re just scratching the surface. I suppose that is to be expected when we’re talking about the greatest insights from one of the greatest minds of the 20th and 21st centuries. Daniel Kahneman and his partner-in-crime Amos Tversky are the shoulders upon which future giants will stand. This book, and their work on a broader scale, help us to develop the most important metaskill of all: thought.

Other practitioners like Gary Klein, Dan Ariely, Richard Thaler, and many more deserve high praise, too. Klein, for example, really did write the best book on decision-making. But if there was one book that I could offer that could set a person on a path towards better thinking, it’s this one.

Anyone who can absorb this work and see ways to apply it will invariably be better at what they do. And as your humble friend and writer/servant, I want you to get better. I want you and I to always level up to the best version of ourselves. So please buy this book. Read it, too. Here’s the link to Amazon.

Mental Models and Principles

  • System 1 and System 2 thinking
  • Much like the electricity meter outside your house or apartment, the pupils offer an index of the current rate at which mental energy is used
  • “Law of Least Effort”
  • When people believe a conclusion is true, they are also very likely to believe arguments that appear to support it
  • Confirmation bias
  • Hindsight bias
  • Loss aversion
  • Prospect theory
  • Narrative fallacy
  • Familiarity bias
  • The common admonition to “act calm and kind regardless of how you feel” is very good advice: you are likely to be rewarded by actually feeling calm and kind
  • Cognitive ease
  • The main function of System 1 is to maintain and update a model of your personal world, which represents what is normal in it
  • A capacity for surprise preserves mental health
  • The operations of associative memory contribute to a general confirmation bias
  • WYSIATI
  • Don’t mistake for cause what can be explained by randomness
  • Anchoring effects — battle them by thinking the opposite
  • Affect heuristic
  • The emotional tail wags the rational dog
  • The importance of an idea is often judged by the fluency (and emotional charge) with which that idea comes to mind
  • Base rates matter
  • It is easier to construct a coherent story when you know little and have fewer pieces to the puzzle
  • Halo effect
  • Confidence is a feeling. Declarations of high confidence mainly tell you that an individual has constructed a coherent story in his mind, not necessarily that the story is true.
  • The difference in emotional intensity is readily translated into a moral preference
  • True experts know the limits of their knowledge
  • High-validity vs low-validity environments
  • The planning fallacy
  • Losses evoke stronger negative feelings than costs
  • Don’t mistake decisions and outcomes. Good decisions can have bad outcomes and vice versa.


Norm Wright
Striving Strategically

Trying to provide the most useful thing you’ll read on any given day. Target success rate: 51%. More at www.strivingstrategically.com