What can we learn from those who are able to predict the future?

Superforecasting: The art and science of prediction

Keeping TABS
4 min read · Nov 20, 2015

DEEP READS: Our pick of reading material that’s worth your time

We’re all forecasters. Every decision we make — from buying a house to deciding what to have for lunch — is influenced by our implicit predictions of the future. But according to political science writer Philip Tetlock, the average person is ‘roughly as accurate as a dart-throwing chimpanzee’.

That said, some are better at predicting than others. In his book Superforecasting: The art and science of prediction, Tetlock coins a name for this elite group: superforecasters. Through a scientific analysis of the decision-making of a select few, Tetlock explains why this small group of people is capable of such accurate predictions of the future. His case studies were found via an extensive study involving tens of thousands of ordinary people who set out to forecast global events. Some of the volunteers turned out to be remarkably accurate, beating the average by 60%.

So what makes a superforecaster? Tetlock theorises that they share a set of personality traits he refers to as a ‘growth mindset’: a combination of determination, self-reflection and a willingness to learn from their own mistakes. Where most forecasters care only whether they were right or wrong, superforecasters are more concerned with why. They may not have lots of specialised knowledge, but they are constantly looking for ways to improve their performance. Tetlock proposes that such predictive abilities can therefore be learned; that we all have the ability to become superforecasters.

With his grey beard, thinning hair, and glasses, Doug Lorch doesn’t look like a threat to anyone. He looks like a computer programmer, which he was, for IBM. He’s retired now. He lives in a quiet neighborhood in Santa Barbara with his wife, an artist who paints lovely watercolors. His Facebook avatar is a duck. Doug likes to drive his little red convertible Miata around the sunny streets, enjoying the California breeze, but that can only occupy so many hours in the day. Doug has no special expertise in international affairs, but he has a healthy curiosity about what’s happening. He reads The New York Times. He can find Kazakhstan on a map. So he volunteered for the Good Judgment Project. Once a day, for an hour or so, his dining room table became his forecasting center, where he opened his laptop, read the news, and tried to anticipate the fate of the world. In the first year, Doug answered 104 questions like “Will Serbia be officially granted European Union candidacy by 31 December 2011?” and “Will the London Gold Market Fixing price of gold exceed $1,850 on 30 September 2011?” That’s a lot of forecasting, but it understates what Doug did.

Doug’s accuracy was as impressive as his volume. At the end of the first year, Doug’s overall Brier score was 0.22, putting him in fifth spot among the 2,800 competitors in the Good Judgment Project. Remember that the Brier score measures the gap between forecasts and reality, where 2.0 is the result if your forecasts are the perfect opposite of reality, 0.5 is what you would get by random guessing, and 0 is the center of the bull’s-eye. So 0.22 is prima facie impressive, given the difficulty of the questions. Consider this one, which was first asked on January 9, 2011: “Will Italy restructure or default on its debt by 31 December 2011?” We now know the correct answer is no. To get a 0.22, Doug’s average judgment across the eleven-month duration of the question had to be “no” at roughly 68% confidence — not bad given the wave of financial panics rocking the eurozone during this period. And Doug had to be that accurate, on average, on all the questions.
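The scoring convention in this passage is easy to check yourself. The short sketch below (in Python; the function name and structure are our own illustration, not anything from the book or the Good Judgment Project) implements the two-outcome Brier score as Tetlock describes it, and confirms that holding roughly 68% confidence in “no” on a question that resolves “no” lands close to Doug’s 0.22.

```python
def brier_score(p_yes: float, outcome_yes: bool) -> float:
    """Two-outcome Brier score, per the convention quoted above.

    Ranges from 0 (bull's-eye) through 0.5 (a 50/50 guess)
    to 2.0 (forecasting the perfect opposite of reality).
    """
    outcome = 1.0 if outcome_yes else 0.0
    # Sum of squared errors over both possible outcomes.
    return (p_yes - outcome) ** 2 + ((1 - p_yes) - (1 - outcome)) ** 2

# Sanity checks against the definitions in the passage above.
assert brier_score(0.5, True) == 0.5   # random guessing
assert brier_score(0.0, True) == 2.0   # perfect opposite of reality
assert brier_score(1.0, True) == 0.0   # center of the bull's-eye

# The Italy question resolved "no". A steady 68% confidence in "no"
# (i.e. p_yes = 0.32) scores about 0.20, in line with an overall 0.22.
print(brier_score(0.32, False))  # 0.2048
```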

In year two, Doug joined a superforecaster team and did even better, with a final Brier score of 0.14, making him the best forecaster of the 2,800 Good Judgment Project volunteers. He also beat by 40% a prediction market in which traders bought and sold futures contracts on the outcomes of the same questions. He was the only person to beat the extremizing algorithm. And Doug not only beat the control group’s ‘wisdom of the crowd’, he surpassed it by more than 60%, meaning that he single-handedly exceeded the fourth-year performance target that IARPA set for multimillion-dollar research programmes that were free to use every trick in the forecasting textbook to improve accuracy.

Get hold of a copy here

Read more like this at Canvas8.com

Written by Rebecca Smith, researcher at Canvas8

