Superforecasting

I recently finished the book *Superforecasting* by Philip Tetlock and Dan Gardner; here is a quick summary.

In 2002, the Intelligence Community (IC) of the United States held a consensus view that Iraq had weapons of mass destruction. Congress acted on this information, which resulted in a disastrous invasion that cost many lives and enormous sums of money.

In 2006, the Intelligence Advanced Research Projects Activity (IARPA) was created. Its mission is to fund cutting-edge research with the potential to make the IC smarter and more effective. In 2008, the Office of the Director of National Intelligence, which sits atop the entire network of sixteen intelligence agencies, asked the National Research Council (NRC) to form a committee. Its task was to synthesize research on good judgment and help the IC put that research to good use. The NRC recommended that:

1. The IC should not rely on analytical methods that violate well-documented behavioral principles or that have no evidence of efficacy beyond their intuitive appeal.

2. The IC should rigorously test current and proposed methods under conditions that are as realistic as possible. Such an evidence-based approach to analysis will promote the continuous learning needed to keep the IC smarter and more agile than the nation's adversaries.

Based on these recommendations, IARPA sponsored a massive four-year tournament to see who could invent the best methods of making the sorts of forecasts that intelligence analysts make every day. The author, Philip Tetlock, formed a team of volunteers and called the research team and program the Good Judgment Project. This team won the tournament. Of the roughly 2,800 volunteers, the 58 who scored at the top of the charts are called superforecasters. At the end of year 1, their collective Brier score (a measure of the accuracy of probabilistic predictions; the lower, the better) was 0.25, compared with 0.37 for all other forecasters, and the gap grew in later years, so that by the end of the tournament superforecasters had outperformed regulars by 60%.
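To make those Brier scores concrete, here is a minimal sketch in Python (my own illustration, not from the book; the function name and the data are made up). It uses Brier's original multi-category formulation, the one the book describes, where 0 is a perfect score, 2 is maximally wrong, and unwavering 50/50 guesses on yes/no questions score 0.5:

```python
def brier_score(forecasts, outcomes):
    """Mean, over questions, of the summed squared error across outcome
    categories. `forecasts` is a list of probability vectors; `outcomes`
    is a list of same-length 0/1 vectors with a single 1 marking what
    actually happened."""
    total = 0.0
    for probs, actual in zip(forecasts, outcomes):
        total += sum((p - o) ** 2 for p, o in zip(probs, actual))
    return total / len(forecasts)

# Three hypothetical yes/no questions, forecast as (P(yes), P(no)):
forecasts = [(0.9, 0.1), (0.7, 0.3), (0.2, 0.8)]
outcomes  = [(1, 0),     (1, 0),     (0, 1)]  # resolved: yes, yes, no

print(brier_score(forecasts, outcomes))  # about 0.093, close to perfect
```

Assuming the tournament scores above use this formulation, the superforecasters' 0.25 sits halfway between unwavering 50/50 guessing (0.5) and perfection (0).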

A rough composite portrait of the model superforecaster is as follows.

- In philosophical outlook, they tend to be:
  - Cautious: nothing is certain
  - Humble: reality is infinitely complex
  - Nondeterministic: what happens is not meant to be and does not have to happen

- In their abilities and thinking styles, they tend to be:
  - Actively open-minded: beliefs are hypotheses to be tested, not treasures to be protected
  - Intelligent and knowledgeable, with a need for cognition: intellectually curious, they enjoy puzzles and mental challenges
  - Reflective: introspective and self-critical
  - Numerate: comfortable with numbers

- In their methods of forecasting, they tend to be:
  - Pragmatic: not wedded to any idea or agenda
  - Analytical: capable of stepping back from the tip-of-your-nose perspective and considering other views
  - Dragonfly-eyed: value diverse views and synthesize them into their own
  - Probabilistic: judge using many grades of maybe
  - Thoughtful updaters: when facts change, they change their minds
  - Good intuitive psychologists: aware of the value of checking their thinking for cognitive and emotional biases

- In their work ethic, they tend to have:
  - A growth mindset: believe it is possible to get better
  - Grit: determined to keep at it, however long it takes

On average, when a forecaster did well enough in year 1 to be named a superforecaster and was placed on a superforecaster team in year 2, that person became 50% more accurate, and the effect persisted in subsequent years. Teams of ordinary forecasters beat the wisdom of the crowd by about 10%, prediction markets beat ordinary teams by about 20%, and superteams beat prediction markets by 15% to 30%.

With scores and leaderboards, forecasting tournaments may look like games, but the stakes are real and substantial. In business, good forecasting can be the difference between prosperity and bankruptcy; in government, the difference between policies that give communities a boost and those that inflict unintended consequences and waste tax dollars; in national security, the difference between peace and war. If the US Intelligence Community had not told Congress that it was certain Saddam Hussein had weapons of mass destruction, a disastrous invasion might have been averted.

IARPA understands the enormous potential of keeping score, which is why it bankrolled the project. Tournaments help researchers learn what improves forecasting, and they help forecasters sharpen their skills through practice and feedback. Tournaments could help society, too, by providing tools for structuring our thinking about what is likely to happen if we venture down one policy path versus another. Vague expectations about indefinite futures are not helpful, and fuzzy thinking can never be proven wrong. Only when we are proven wrong so clearly that we can no longer deny it to ourselves will we adjust our mental models of the world, producing a clearer picture of reality. Forecast, measure, revise: it is the surest path to seeing better.