Brexit and the Forecasting Business — Part 1 — Shorter-term issues

Indy Neogy
Published in #NoDust on Brexit
Jun 26, 2016 · 4 min read

This is the first of two posts. This one deals with the short-term issues for Forecasting/Futures people highlighted by the recent debate and result in the British referendum on EU membership; the second will focus on the longer-term, and bigger, problem of helping people face the future.

1) Polling at the end of the line?

The polling companies in the UK were already deep into introspection after the 2015 General Election, which did not turn out as predicted. The referendum appears to tell a similar story, with most of the final polls predicting a victory for Remain. In defence of the polling industry, we have to acknowledge that a UK referendum presents some severe technical difficulties: referendums are so rare that baseline data simply doesn't exist. Further, the pollsters had predicted a Leave win at points during the run-up, and the final result was very close.

[Image: poll tracking graph from uk.businessinsider.com]

Yet, while the failings are excusable, they are nevertheless important. So much of futures work relies on having a reasonable picture of the “now.” The struggles of the pollsters seem to concentrate in two worrying areas.

First, getting people to respond is harder than ever. This obviously isn't just an issue for politics: every kind of market and sociological research is suffering in the same way. The people who will respond are less and less representative of "the norm." Second, survey questions seem less and less useful in eliciting accurate self-assessments of future action. This showed most painfully for the pollsters in the variations in turnout and in the assumption of a final swing to the status quo. Failings here cast doubt on the quality of any conclusions drawn from survey data about intentions.
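To make the first problem concrete, here is a minimal sketch of the post-stratification weighting pollsters use to correct an unrepresentative sample; all the numbers are invented for illustration. The catch is that weighting only corrects for characteristics you can observe, not for the hidden ways respondents differ from non-respondents.

```python
# Minimal post-stratification sketch: reweight an unrepresentative
# sample so its age profile matches the population.
# All figures are invented for illustration.

# Share of each age group in the population (e.g. from census data).
population = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Share of each age group among the people who actually responded.
# Older people answer more often, so the raw sample is skewed.
sample = {"18-34": 0.15, "35-54": 0.30, "55+": 0.55}

# Raw support for "Leave" within each age group of the sample.
support = {"18-34": 0.35, "35-54": 0.50, "55+": 0.60}

# Weight each group by how under- or over-represented it is.
weights = {g: population[g] / sample[g] for g in population}

raw = sum(sample[g] * support[g] for g in sample)
weighted = sum(sample[g] * weights[g] * support[g] for g in sample)

print(f"Raw estimate:      {raw:.1%}")       # skewed by who responded
print(f"Weighted estimate: {weighted:.1%}")  # corrected for age mix only
```

The second problem is worse: no amount of reweighting fixes respondents who mispredict their own turnout, or who change their minds at the ballot box.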

2) Not a good month for crowd methods

The wisdom of crowds, or at least the prominent methods for assessing it, took a beating during the Brexit campaign. Over a number of years, a conviction has developed that crowdsourcing knowledge through various means can give a better result than the prognostications of experts, or even survey-based tools like polls. Across the Brexit campaign, three methods got a sizeable amount of attention, and all of them proved just as lacking in insight as the opinion polls. First, the bookmakers' odds: while always seen as tainted by the commercial need for the bookmaker (as market maker) to hedge risk, this method gets a lot of credence because people are putting "skin in the game." Second, prediction markets, which aim to combine the "skin in the game" factor with an actual marketplace (rather than the bookmaker as market maker). Third, the survey method of asking people "who do you think will win?"
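To make the first of these concrete: a bookmaker's decimal odds can be turned into implied probabilities once the bookmaker's built-in margin (the "overround") is stripped out. Here's a minimal sketch, with odds invented for illustration rather than the actual Brexit prices:

```python
# Convert decimal bookmaker odds into implied probabilities.
# The raw inverse odds sum to more than 1 because the bookmaker
# builds in a margin (the "overround"); normalising removes it.
# Odds here are invented for illustration.

odds = {"Remain": 1.30, "Leave": 3.80}  # decimal odds

raw = {k: 1 / v for k, v in odds.items()}
overround = sum(raw.values())
implied = {k: p / overround for k, p in raw.items()}

for outcome, p in implied.items():
    print(f"{outcome}: {p:.1%}")
# Remain: ~74.5%, Leave: ~25.5%
```

Normalising away the overround is what lets commentators quote lines like "the markets give Remain a 75% chance" from raw betting prices.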

Each of these methods has in the past proven a more useful predictor than straight-up polling. However, each of them failed to add any value during the Brexit campaign. Two major flaws became apparent: (a) the first two methods essentially tracked the polls the entire time, adding little useful extra information; (b) the third method gave an indicator that barely moved at all, which usefully highlights that the initial assumptions about each side's popularity in all of these methods came from very rough polling at the start of the campaign. Key takeaway? Unless there's a reason to believe the crowd holds useful knowledge about the issue at hand, crowd prediction may not add the kind of value its boosters have been suggesting.

3) Superforecasting speedbump

The Superforecasting methodology (Tetlock et al.) underwent a significant "natural experiment" in the form of the Brexit vote, a nicely contained event with a binary outcome, and sad to say, the results were no better than those of the crowd methodologies above. Does this mean Superforecasting is never useful? Not at all, but like every method it is limited, particularly by the information it uses. The suspicion is that Superforecasting, much like the prediction markets, was derailed above all by anchoring on early opinion polls.
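One way to see how much a confident wrong call hurts: forecasts on binary events like this are typically evaluated with the Brier score, the scoring rule used in Tetlock's forecasting tournaments, where lower is better. A quick sketch with made-up forecast values:

```python
# Brier score for a binary event: squared error between the forecast
# probability and the outcome (1 if it happened, 0 if not).
# Lower is better; 0.25 is what an uninformative 50/50 forecast earns.
# Forecast values are made up for illustration.

def brier(forecast: float, outcome: int) -> float:
    return (forecast - outcome) ** 2

leave_won = 1  # the event "Leave wins" happened

print(brier(0.25, leave_won))  # confident Remain call: 0.5625 -- bad
print(brier(0.50, leave_won))  # fence-sitting:         0.25
print(brier(0.70, leave_won))  # leaning Leave:         0.09 -- good
```

A forecaster anchored on Remain-leaning polls sits at the top of that list, where the quadratic penalty for a confident wrong call is harshest.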

So what can we learn?

I'm not suggesting that any of the methods above is useless or wholly discredited, but two things are clear:

1) The quality of input data is crucial, and we have to explore new sources and methods for getting a reliable read on events.

2) Buried in all these methods are theories about how people make decisions. It's time to make those theories explicit, and to think about how we can broaden our set of approaches to include more possible views.

In Part 2 we'll look at some longer-term issues raised by the Brexit campaign.

Part 2 now online!


Indy Neogy

Co-founder and Chief Trend Scanner at KILN. Author of the 55 min guide to cross-culture comms: http://55mg2ccc.com. Also helped create http://www.storyform.co.uk