Think Twice — Michael Mauboussin

“‘Consequences are more important than probabilities.’ This does not mean you should focus on outcomes instead of process; it means you should consider all possible outcomes in your process.”

Review:

I was originally going to pick up The Success Equation by Mauboussin but I couldn’t get a paperback version, so I looked through some of his other books and Think Twice (from 2009) seemed interesting enough. I occasionally read Mauboussin’s research papers on various aspects of financial markets, and I find the topics he explores to be very interesting and his writing clear. So it is with some disappointment that I found this book to be fairly unimpressive and not very thorough.

While everyone would benefit from reading seriously about cognitive biases and how to make better decisions, I found this book to merely skim the surface of a lot of these topics, throwing in the occasional interesting anecdote or example as the author discussed anchoring and other heuristics. Any reader would be doing themselves a favor if they put down this book and instead went through Kahneman’s Thinking, Fast and Slow or Cialdini’s Influence: The Psychology of Persuasion (both of which Mauboussin references throughout).

That said, there were a few ideas in the book that I appreciated. One is that, as humans, we tend to simplify complex adaptive systems: we try to tease out cause and effect, or to infer something about the macro from the micro. But Mauboussin says that we cannot understand such systems by mere extrapolation of their components; new characteristics emerge from the interactions of the individual parts that may be beyond our simplistic understanding. This explains a good deal about why we tend to be terrible forecasters of stock markets and similar complex systems. The author also goes through some other mistakes in our thinking, like the narrative fallacy and our inability to consistently separate skill from luck (which I presume formed the basis for The Success Equation).

I did enjoy how Mauboussin puts a short list at the end of each chapter to give the reader practical tips on reshaping their decision-making process given these insights. Keeping a decision journal, recording the context and what you were thinking at the time, seems particularly useful for preventing hindsight bias. His points on referring to “the outside view” (the statistical outcomes of situations similar to the one you are analyzing) are very practical, and emphasize looking at base rates instead of relying on our own overconfidence. This portion of the book was probably the beginnings of a great paper he published on all sorts of base rates in markets (link).

Overall, this book was short and to-the-point, but didn’t manage to add anything new to the world of cognitive biases and psychological phenomena. There are some practical tips inside along with some entertaining examples, but I don’t think anyone can really top Kahneman in this arena.

Score: 5/10


Notes:

Intro

  • But information without context is falsely empowering. If you do not properly understand the challenges involved in your decision, this data will offer nothing to improve the accuracy of the decision and actually may create misplaced confidence.
  • But it’s crucial to bear in mind that because of the substantial role that luck plays in this process, good decisions don’t ensure attractive outcomes. If you make a good decision and suffer a poor outcome, pick yourself up, dust yourself off, and get ready to do it again.

The Outside View

  • We tend to focus on the inside view — considering a problem by concentrating on the specific task close at hand and making predictions based on that narrow, unique set of inputs
  • It comes naturally but almost always paints too optimistic a picture
  • Due to illusions of superiority, optimism, control
  • Examples: Corporate M&A, taking multiple anecdotes as evidence, planning fallacy
  • The outside view asks if there are similar situations that can provide a statistical basis for making a decision
  • Steps for using the outside view
  • Select a reference class — broad enough to be statistically significant but narrow enough to be useful in analyzing the decision you face
  • Assess the distribution of outcomes — average, most common, extreme successes/failures
  • Make a prediction — knowing that your forecast is likely still slightly too optimistic
  • Assess the reliability of your prediction and fine-tune — the worse the record of success is or the more ambiguous cause-and-effect is, the more you should adjust toward the mean
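As a rough illustration of the steps above, here is a minimal sketch of reference-class forecasting: an inside-view estimate is shrunk toward the reference class average. The example, the numbers, and the outside_view_forecast helper are all hypothetical and mine, not the book’s.

```python
# Sketch of reference-class forecasting (hypothetical numbers throughout).
# The idea: start from the inside-view estimate, then shrink it toward the
# reference class average ("the outside view"); the less reliable your
# inside view, the more weight the base rate gets.

def outside_view_forecast(inside_estimate, reference_class, reliability):
    """Blend an inside-view estimate with the reference-class mean.

    reliability: 1.0 = trust the inside view fully, 0.0 = use only the base rate.
    """
    base_rate = sum(reference_class) / len(reference_class)
    return reliability * inside_estimate + (1 - reliability) * base_rate

# Hypothetical example: you believe a project will take 6 months, but five
# comparable past projects (the reference class) took 9 to 14 months.
past_durations = [9, 10, 11, 12, 14]   # months, made up
inside_view = 6                        # the optimistic inside view
forecast = outside_view_forecast(inside_view, past_durations, reliability=0.3)
print(f"Inside view: {inside_view} months, adjusted forecast: {forecast:.1f} months")
```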

Open to Options

  • Anchoring bias is symptomatic of the broader problem of an insufficient consideration of alternatives
  • Adjusting from an anchor typically entails only a search for a plausible estimate, and we usually stop too short
  • Anchors are particularly powerful in situations with limited information or uncertainty
  • Representative bias, availability heuristic, cognitive dissonance, confirmation bias, stress, and seeking causal relationships that don’t exist also diminish our ability to make decisions well
  • Reasoning from a set of premises and compatible possibilities often leads us to fail to consider what is false or outside the realm of our initial assumptions
  • How the problem is described, how we feel about it, and our individual knowledge of it will also shape how we reason about it
  • Avoiding tunnel vision
  • Explicitly consider alternatives — using base rates and market guidelines
  • Seek dissent
  • Keep a decision journal — to avoid hindsight bias
  • Avoid making decisions while at emotional extremes
  • Understand incentives

Expert Squeeze

  • Experts rarely outperform crude extrapolation algorithms, let alone sophisticated statistical ones (especially in probabilistic situations with a wide range of outcomes).
  • The collective wisdom of many uninformed decision-makers or predictors can actually be very accurate (and often better than the best among them), assuming there is diversity, aggregation, and proper incentives (a toy simulation after this list illustrates the averaging effect)
  • Intuition therefore works well in stable environments, where conditions remain largely unchanged (e.g., the chess board and pieces), where feedback is clear, and where cause-and-effect relationships are linear. Intuition fails when you are dealing with a changing system, especially one that has phase transitions.
  • Mismatch problem — relying on quantitative criteria or standards that aren’t actually good predictors of future performance
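To make that averaging effect concrete, here is a toy wisdom-of-crowds simulation of my own (not from the book). It assumes diverse, independent, roughly unbiased errors, which is exactly the condition that breaks down when diversity collapses (see the phase-transition notes below).

```python
# Toy wisdom-of-crowds simulation (my illustration, not Mauboussin's).
# Many guessers estimate an unknown quantity with diverse, independent errors;
# the crowd average ends up closer to the truth than most individuals.
import random

random.seed(42)
true_value = 100.0
crowd = [true_value + random.gauss(0, 20) for _ in range(500)]  # diverse, unbiased errors

crowd_average = sum(crowd) / len(crowd)
crowd_error = abs(crowd_average - true_value)
individual_errors = [abs(guess - true_value) for guess in crowd]
beaten_by_crowd = sum(err > crowd_error for err in individual_errors)

print(f"Crowd average error: {crowd_error:.2f}")
print(f"Individuals worse than the crowd average: {beaten_by_crowd} of {len(crowd)}")
```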

More is Different

  • Complex adaptive systems (ones with properties and behaviors that are distinct from those of the underlying agents) cannot be understood through a simple extrapolation of the properties of the underlying components
  • Ex: listening to individual investors will give you scant insight into the market
  • Addressing one component of the system can have unintended consequences for the whole
  • Analyzing individual performance within a complex system requires sorting through the relative contributions of the individual versus the system; when we err, we tend to overstate the role of the individual

Phase Transitions

  • “During the run-up to a crash, population diversity falls. Agents begin using very similar trading strategies as their common good performance is reinforced. This makes the population very brittle, in that a small reduction in the demand for shares could have a strong destabilizing impact on the market.”
  • Repeated, good outcomes provide us with confirming evidence that our strategy is good and everything is fine. This illusion lulls us into an unwarranted sense of confidence and sets us up for a (usually negative) surprise. The fact that phase transitions come with sudden change only adds to the confusion.
  • Reductive bias — the tendency to treat and interpret complex circumstances and topics as simpler than they really are, leading to misconception (e.g., taking a complex system like the market and thinking of it as linear and simple)
  • “Consequences are more important than probabilities.” This does not mean you should focus on outcomes instead of process; it means you should consider all possible outcomes in your process.
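A back-of-the-envelope expected-value calculation makes that point concrete; the payoffs and probabilities below are my own illustrative numbers, not the book’s.

```python
# Weighing consequences, not just probabilities (illustrative numbers only).
# A bet that pays off 99% of the time can still have a negative expectation
# if the rare outcome is ruinous.
outcomes = [
    (0.99, +1.0),    # frequent small gain
    (0.01, -200.0),  # rare but severe loss
]

expected_value = sum(p * payoff for p, payoff in outcomes)
print(f"Expected value per play: {expected_value:+.2f}")  # -1.01 despite a 99% win rate
```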

Luck and Skill

  • Say results are part persistent skill and part transitory luck. Extreme results in any given period, reflecting really good or bad luck, will tend to be less extreme both before and after that period, because the luck component does not persist.
  • The main lesson is that feedback should focus on the part of the outcome a person can control. Call it the skill part, or the process. Feedback based only on outcomes is nearly useless if it fails to distinguish between skill and luck.
  • Rosenzweig suggests that the press will praise a company that is doing well for having “a sound strategy, a visionary leader, motivated employees, an excellent customer orientation, a vibrant culture, and so on.” But if the company’s performance subsequently reverts to the mean, onlookers will conclude all of those features went wrong, when in reality nothing of the sort happened. In many cases, the same people are running the same business with the same strategy. Mean reversion shapes company performance, which in turn manipulates perception.
  • The more luck contributes to outcomes, the larger the sample size you will need to distinguish skill from luck
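That last point can be shown with a quick simulation; the model (outcome = persistent skill + random luck), the parameters, and the prob_skill_wins helper below are my own illustration, not the book’s.

```python
# Outcome = persistent skill + transitory luck (my illustrative model).
# The larger the luck term, the more periods it takes before the genuinely
# skilled player reliably shows the better average.
import random

random.seed(0)

def prob_skill_wins(skill_gap, luck_sd, periods, trials=500):
    """Probability that the more skilled player has the better total outcome
    after `periods` observations, estimated by Monte Carlo."""
    wins = 0
    for _ in range(trials):
        skilled = sum(skill_gap + random.gauss(0, luck_sd) for _ in range(periods))
        unskilled = sum(random.gauss(0, luck_sd) for _ in range(periods))
        wins += skilled > unskilled
    return wins / trials

for luck_sd in (1, 5, 20):                 # more luck -> noisier outcomes
    for periods in (10, 100, 1000):
        p = prob_skill_wins(skill_gap=1.0, luck_sd=luck_sd, periods=periods)
        print(f"luck_sd={luck_sd:>2}, periods={periods:>4}: skill identified {p:.0%} of the time")
```

With little luck, the skilled player shows through after only a handful of periods; with a lot of luck, even a thousand periods may not be decisive.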