A Peek into the Future

A Review of Philip Tetlock and Dan Gardner’s Superforecasting

West of the Sun
9 min read · Apr 27, 2018

“You tend to believe that history played out in a logical sort of sense, that people ought to have foreseen, but it’s not like that. It’s an illusion of hindsight.”

Jason Zweig, of The Intelligent Investor column fame, said this was “The most important book on decision making since Daniel Kahneman’s Thinking, Fast and Slow.” If that’s not an endorsement, I don’t know what is. Since I believe Kahneman’s book should be required reading in high school, I was quite pleased when this one lived up to the recommendation. Tetlock’s work on forecasting spans many years with his Good Judgment Project, and the insights he’s gained from tracking, with great precision, the predictions of hundreds of people on very difficult, nebulous questions are useful to anyone trying to improve their decision-making or forecasting.

Tetlock has spent years determining what makes a good forecaster — someone who can answer a question like, “As of 1 July 2018, how many manufacturers will hold permits for driverless testing of autonomous vehicles in California?” with a degree of precision and granularity that would be startling to most. He spends the first part of the book explaining how he set up this research project (which drew mostly ordinary volunteers interested in forecasting), how prediction scores would be calculated, and how participants would be ranked over time. Every adjustment to an initial forecast would be tracked and recorded, affecting a participant’s final score. After following particular forecasters who consistently outperformed both averages and prediction markets — where actual money was on the line — Tetlock came to a number of conclusions that are incredibly insightful for both forecasting and problem-solving:
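
For scoring, the project used Brier scores: the squared error between the probabilities you assigned and what actually happened, averaged over all your forecasts. Here is a minimal sketch of the idea (the example forecasts are invented):

```python
def brier_score(forecasts, outcomes):
    """Average Brier score in the two-sided form the book reports:
    0.0 is perfect, 0.5 is chance-level guessing, 2.0 is perfectly wrong.

    forecasts: probabilities assigned to "the event happens" (0.0 to 1.0)
    outcomes:  1 if the event happened, 0 if it didn't
    """
    scores = [
        (p - o) ** 2 + ((1 - p) - (1 - o)) ** 2  # squared error on both sides
        for p, o in zip(forecasts, outcomes)
    ]
    return sum(scores) / len(scores)

# Hypothetical track record: confidently right twice, hedged and wrong once.
print(brier_score([0.9, 0.8, 0.6], [1, 1, 0]))  # ~0.27
```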

  • What separates the elite from the average isn’t intelligence, but rather a style of thinking — those who forecast better than average tend to be less confident and more willing to settle within ranges of uncertainty. They tend to be less ideological and less driven by a few “big ideas” that create cookie-cutter explanations for complex questions. They try to look at problems from as many different angles as possible and synthesize as many views as they can. I found this largely echoed Charlie Munger’s teachings; he constantly warns against ideological thinking and instead recommends an interdisciplinary approach to everything.
  • Good forecasters have intellectual humility — they recognize that the world is incredibly complex, that seeing reality as it is is a constant struggle, and that our own judgments are almost always riddled with mistakes. But don’t confuse humility with self-doubt. The best forecasters and decision-makers have the humility to carefully reflect on their choices, and then the confidence to take decisive action. Tetlock quotes poker player Annie Duke on this type of humility and the balance that must be struck:

“It’s very hard to master and if you’re not learning all the time, you will fail. That being said, humility in the face of the game is extremely different than humility in the face of your opponents.”

  • Break impossible problems down into tractable sub-problems — we have to home in on which parts of the problem are irreducible and which are more knowable. We can make fairly decent probability estimates from nothing more than a crude set of assumptions and guesstimates, so long as each one is testable or within a range of reason.
  • Always use base rates or “outside views” first — most people jump into a problem or question by focusing on the particulars of that specific situation, and fail to consider the general probabilities across a broader class of similar situations. For example, an investor looking at a stock like Alphabet might extrapolate the company’s success and sales growth indefinitely into the future: they dominate the search market, and [insert any number of reasons you like the company]. This type of thinking often leads to poor conclusions; if we don’t consider how other companies in situations similar to Alphabet’s fared in the past, we may let the recent past weigh too heavily on our forecast. Michael Mauboussin has a great piece on base rates, or average probabilities, for company metrics like sales growth and profitability; his work may make you reconsider how optimistic you should be about a given company. Tetlock’s point is that by starting with the average and then adjusting for the particulars of a situation, we set ourselves up for improved accuracy (a rough sketch of this follows the list).
  • Analyze your mistakes, but watch out for hindsight bias — Annie Duke also reminds us in her book Thinking in Bets to be on the lookout for “resulting”: judging a decision as good or bad simply based on its outcome. Tetlock would agree. We have to remember to view our decisions in the light of both the information we had and the uncertainty we faced at the time. This takes a conscious effort to avoid fooling ourselves into thinking that any given outcome was inevitable.
  • Try, fail, analyze, adjust, and try again — whenever we’re dealing with uncertainty or imperfect information, we are going to experience failure; there’s simply no way around it. Those who make the best choices (or predictions) in the future are those who are constantly learning, constantly adjusting their prior beliefs based on new information, and constantly going out into the world and synthesizing anything useful anyone else has to say. Tetlock stresses that it’s not just a growth mindset that superforecasters need — it’s also the grit to slog through being wrong countless times. We never arrive at an end destination where we’ve reached the extent of our abilities; we should always be in what Tetlock calls “perpetual beta.”
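
As promised above, here is a rough sketch of what outside-view-first reasoning can look like in practice. Every number is invented for illustration:

```python
# Outside view first: in a hypothetical reference class of large-cap tech
# companies, suppose only 20% sustained >15% annual sales growth for a decade.
base_rate = 0.20

# Inside view second: nudge the base rate for case-specific evidence, rather
# than extrapolating the company's recent success indefinitely.
adjustments = {
    "dominant position in search": +0.10,
    "growth is harder at this size": -0.05,
}

estimate = base_rate + sum(adjustments.values())
print(f"Estimate: {estimate:.0%}")  # 25%, still anchored to the reference class
```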

Books like this one, Duke’s Thinking in Bets, Kahneman’s Thinking, Fast and Slow, and Gardner’s The Science of Fear are great because they try to teach us how to better understand reality and ourselves. They tease out the mistakes we make because of our biases or emotions, and expose how those mistakes degrade our decision-making. Superforecasting in particular gives the reader a number of tools and actionable pieces of advice for improving their understanding of complex situations (or at least how they might begin to tackle them). Beyond that, it also stresses the importance of incremental improvements: small positive changes, whether in our predictions or our decisions, compound over time.

Overall, this was a very well-written book, filled to the brim with interesting experiments, anecdotes, and insights, and it overlaps with and complements many other works on cognitive biases, decision-making, and dealing with uncertainty. Definitely worth a read.

Score: 8/10

Notes:

· The non-linearity and sensitivity of complex systems means that seeing far into the future is virtually impossible; however, this doesn’t mean all prediction is futile

o How predictable something is depends on the subject, how far into the future we’re looking, and under what circumstances

o Good forecasting and foresight are skills that can be improved through certain ways of thinking, of gathering information, and of updating beliefs

· Our natural urge to explain reality isn’t a bad thing, but the speed at which we move from confusion and uncertainty to a clear and confident conclusion is — we don’t spend enough time in the intermediate step considering all possibilities

o We rarely seek out evidence that undercuts the first explanation we cling to, even if that first explanation wasn’t checked for reliability, and we tend to dismiss contradictory information (confirmation bias)

o Feelings must be replaced with finely measured degrees of doubt that can be reduced with better evidence — that’s the only way we can create more accurate mental models

o Intuition is only relatively reliable if you work in a world full of valid cues you can unconsciously register for future use; even then it may produce many false positives and should be double-checked

· Keeping score

o If we’re serious about measuring and improving, vague language about possibilities won’t suffice

o Forecasts must have clearly defined terms and timelines, use numbers, and be one of many that we collect and compare; many forecasts are needed to calibrate our predictions against reality and see whether we’re over- or under-confident (see the sketch at the end of this section)

o Two types of forecasters:

§ Those that did worse than average: tend to be very ideological and organize thinking around “big ideas,” seek to squeeze complex problems into preferred cause-effect templates, treat information that doesn’t fit with their big idea as irrelevant, tend to pile up reasons why they are right and others wrong (resulting in higher confidence and more ambitious predictions), and are reluctant to change their minds even when predictions clearly fail

§ Those that did better than average: tend to be less confident and more willing to settle within ranges of uncertainty, tend to look at problems from many different angles, try to aggregate as many sources of information and perspectives as possible

o What separated the forecasters who could provide real value from those who couldn’t beat simple algorithms wasn’t intelligence, but style of thinking
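
Here is what “keeping score” can look like once forecasts use numbers: a calibration check that groups predictions by the probability assigned and compares them against observed frequencies. The track record below is invented for illustration:

```python
from collections import defaultdict

def calibration_report(forecasts, outcomes):
    """Group forecasts by the probability assigned (rounded to the nearest
    10%) and compare it with how often those events actually happened."""
    buckets = defaultdict(list)
    for p, o in zip(forecasts, outcomes):
        buckets[round(p, 1)].append(o)
    for p in sorted(buckets):
        hits = buckets[p]
        print(f"said {p:.0%} -> happened {sum(hits) / len(hits):.0%} (n={len(hits)})")

# Invented track record: events called at 70% that occur only half the time
# reveal overconfidence; a well-calibrated forecaster's numbers match closely.
calibration_report([0.7, 0.7, 0.7, 0.7, 0.3, 0.3, 0.3], [1, 0, 1, 0, 0, 1, 0])
# said 30% -> happened 33% (n=3)
# said 70% -> happened 50% (n=4)
```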

· Superforecasters

o Replacing the difficult question “was it a good decision” with the easier question “did it have a good outcome” is a dangerous line of thinking that can lead us to many incorrect conclusions

o Fermi estimation (a toy example follows this list)

§ Breaking down a difficult or seemingly insurmountable question into knowable and unknowable parts (without falling for the bait-and-switch temptation of answering an easier question instead)

§ Figuring out what assumptions we can make with reasonable confidence, and what assumptions we would have to make for a premise to be true

§ Making use of base rates (how common something is within a broader class, or an “outside view”) and then adjusting based on the particulars of the situation; avoid starting with the inside view, so as not to anchor on it

§ Don’t delve into every piece of information possible, but rather only what is relevant to the assumptions that need to hold for a given premise to be true — then knock down each hypothesis one by one

§ Constantly look for other views (outside, inside, expert opinions) you can synthesize into your own and adjust your estimate up or down

§ Oftentimes, stepping back from your first estimate, assuming it is wrong, considering why that might be, and making a second judgment can improve on the initial one

§ Remember that beliefs are hypotheses to be tested, not treasures to be guarded
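
The classic Fermi exercise, which the book walks through a version of, is estimating the number of piano tuners in Chicago. Each input below is a crude but separately testable guess rather than one opaque gut answer; all numbers are rough:

```python
# Fermi decomposition: "How many piano tuners are there in Chicago?"
# Every input is a crude guess that can be checked and refined on its own.
chicago_population         = 2_500_000
people_per_household       = 2.5
share_of_homes_with_piano  = 0.05   # guess: 1 in 20 households
tunings_per_piano_per_year = 1
tunings_per_tuner_per_day  = 2      # a tuning plus travel takes half a day
working_days_per_year      = 250

pianos = chicago_population / people_per_household * share_of_homes_with_piano
tunings_demanded = pianos * tunings_per_piano_per_year
tunings_supplied_per_tuner = tunings_per_tuner_per_day * working_days_per_year

print(round(tunings_demanded / tunings_supplied_per_tuner))  # ~100 tuners
```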

o Differences between superforecasters and regular forecasters:

§ Superforecasters are better at identifying questions with “irreducible” uncertainty (where it is very difficult to outperform base rates or averages), keeping initial estimates within 35–65% and only tentatively moving outward

§ Regular forecasters are far more likely to use 50% or even odds when asked to make probabilistic judgments

§ Superforecasters are far more granular and precise, sometimes arguing for percentage point differences in estimates

§ Probabilistic thinkers are less distracted by “why” questions (fate, meaning, the reasons events happen) and focus more on “how”

· Updating

o Superforecasters are more likely to frequently update their forecasts given new information, but forecasters must be careful to avoid both under- and over-reacting to it

§ Under-reaction: belief perseverance and strong commitments (reputational, career-wise) to forecasts can make us much less likely to respond to new information; the less ego we have involved, the better our updates can be

§ Over-reaction: diluting our initial estimate in response to additional but irrelevant information that we should ignore

o The best forecasters take the middle ground between under- and over-reaction, typically updating their initial forecasts frequently but incrementally (say, by less than 5%) based on new information and the weight it deserves (sketched below)
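
One way to picture that middle ground is a Bayesian-style update in odds form; this is a sketch of the idea, not a formula from the book. Treat each piece of news as evidence with a likelihood ratio: weak evidence barely moves the forecast, and irrelevant information (a ratio of 1.0) moves it not at all:

```python
def update(prob, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
    Diagnostic news has a ratio far from 1.0; irrelevant news has ratio 1.0
    and leaves the forecast untouched (no dilution)."""
    odds = prob / (1 - prob) * likelihood_ratio
    return odds / (1 + odds)

p = 0.60            # initial forecast
p = update(p, 1.2)  # mildly supportive news -> ~0.64
p = update(p, 1.0)  # irrelevant detail      -> unchanged
p = update(p, 0.8)  # mildly contrary news   -> back near 0.59
print(f"{p:.2f}")
```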

· The growth mindset

o For superforecasters, being wrong is an opportunity to improve and learn, and admitting a mistake is just a simple step to getting closer to the truth; or, in other words, failure is not equivalent to reaching the limits of your ability

o Try, fail, analyze, adjust, try again

§ Try — learning or reading about a subject is not sufficient to generate the tacit knowledge we get from doing

§ Fail — experience must be accompanied by clear, prompt feedback

§ Analyze and adjust — postmortems should be as careful/critical as the thinking that goes into making the initial forecast; detailed comments in real-time will help you analyze your thinking after the fact

§ Try again — even with a growth mindset, forecasters need grit to improve in the face of being wrong many, many times over the course of a career

Phrases/Quotes:

· A brilliant puzzle solver may have the raw material for forecasting, but if he doesn’t also have an appetite for questioning basic, emotionally charged beliefs he will often be at a disadvantage relative to a less intelligent person who has a greater capacity for self-critical thinking. It’s not the raw crunching power you have that matters most. It’s what you do with it.

· “You tend to believe that history played out in a logical sort of sense, that people ought to have foreseen, but it’s not like that. It’s an illusion of hindsight.”

· The humility required for good judgment is not self-doubt — the sense that you are untalented, unintelligent, or unworthy. It is intellectual humility. It is a recognition that reality is profoundly complex, that seeing things clearly is a constant struggle, when it can be done at all, and that human judgment must therefore be riddled with mistakes.
