Here We Go Again.
by Danny Franklin and Jess Reis
For the second presidential election in a row, Democrats were blindsided by an election much closer than the polls predicted. This time, the error was not big enough to flip the result. But in many ways, it was more jarring. Not only were the errors larger numerically than in 2016, but they came after four years of soul searching and methodological refinements that promised a better result.
As an important caveat, with votes still being counted, it is too early to say precisely how wrong the polls were, much less why. The full diagnosis will not even be possible until state voter files are updated in the months ahead.
And not all the polls were wrong. The success of certain pollsters, most notably the Selzer Iowa poll (which we wrongly believed was an outlier), suggests that when the methodology matches the moment, you're still likely to get it right. There are good reasons to use phone, online, or mixed-mode interviewing, and voter lists or random-digit dialing, depending on the need. But one size will never fit all, because all elections are different.
But accurate calls were lonely exceptions within the polling averages and predictive models, cluttered as they were with the results from cheaper online or robocall methodologies. For all the faith we put in those models, it’s clear that the average of good science and bad science is not better science, and it never was.
But we should not sugarcoat the size of the error. FiveThirtyEight's model, which, in fairness, is only as good as the polls it's based on, projected an 8-point win, well above the likely final result. Polls in states that saw big errors in 2016, such as Wisconsin and Michigan, were far off once again. And polls in other states that had once been accurate, such as Texas and Florida, were off the mark. Not only did we fail to fix the mistakes of 2016; we seem to have made entirely new ones.
So what happened, and what will it mean for polling?
The mistakes appear to have come from a misreading of the lessons of 2016. After underestimating support for Trump, most pollsters agreed that the culprit was our inability to correctly adjust for the fact that less educated Americans took pollsters’ calls at a lower rate than those with a college degree. Most high-quality pollsters corrected for that, weighting up or collecting more interviews from non-college voters, and expected a better result. It wasn’t to be. Comparing polls to results showed big surprises among college and non-college voters alike, Hispanic populations in South Florida and Texas border communities, and the rural Midwest.
How does polling fix this? One answer is to stop thinking about representativeness one-dimensionally. Demographics aren't destiny. The polls in 2020 weren't wrong because they underrepresented working-class voters; they were wrong because they underrepresented those whose mindset led them to support Trump. These voters, nearly by definition, distrust civic institutions, such as polling, and choose not to participate. In 2016, the dynamic seemed to be contained to white working-class voters. It has spread.
Some pollsters have begun to think about weighting on attitudinal markers, such as belief that the Bible is the literal word of God. While these are interesting fixes after the fact, it's hard to know, state by state and congressional district by district, what the correct proportion of believers in the Bible or in corporal punishment should be. Not to mention that these attitudes change over time.
While we're still early in the search for solutions, one that we are interested in is media habits. How we consume media, political or otherwise, is increasingly both a reflection and a driver of attitudes and lifestyle. Moreover, it's more measurable than ever before. By aligning a poll sample with what we know about the balance of media habits, or even something as simple as cable or broadband subscription, perhaps we can calibrate our samples and better represent NPR-bag-toters and OANNanists alike. This will require much better integration between polling and the media agencies keeping their fingers on these trends. But for those on the strategic polling side, it could improve both our accuracy and the precision of our targeting.
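To make the idea concrete, here is a minimal sketch, in Python, of weighting a poll sample to a single media-habit benchmark. The respondent data, the "has_cable" variable, and the 55/45 benchmark split are all hypothetical placeholders; an actual pollster would rake across many variables at once and draw benchmarks from real subscription or ratings data.

```python
import pandas as pd

# Hypothetical respondent data: each row is one completed interview.
respondents = pd.DataFrame({
    "id": range(1, 11),
    "has_cable": [True, True, True, True, True, True, True, False, False, False],
})

# Assumed population benchmark (e.g., from subscription data): 55% cable, 45% not.
benchmark = {True: 0.55, False: 0.45}

# Share of each group in the raw sample (here 70% / 30%).
sample_share = respondents["has_cable"].value_counts(normalize=True)

# Weight each respondent by (population share / sample share) for their group,
# so the weighted sample matches the benchmark on this one dimension.
respondents["weight"] = respondents["has_cable"].map(
    lambda group: benchmark[group] / sample_share[group]
)

# The weighted group shares now equal the benchmark: 0.55 / 0.45.
print(respondents.groupby("has_cable")["weight"].sum() / respondents["weight"].sum())
```

The single-variable case also shows where the hard work lies: the adjustment is only as good as the external benchmark it is calibrated to, which is why closer integration with the media-measurement world matters.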
The polling industry will come through this because polling, imperfect though it may be, is still an essential tool for developing strategy and messaging. Pollsters will learn, adjust, and improve. And that will only enhance the value of what polling is really for: messaging and strategy development.
Whatever this means for the polling industry, though, skepticism toward horserace predictions can only benefit how we think and talk about elections. We grew very comfortable in the belief that polling could eliminate election uncertainty for us. It does not. But why should that matter? Credit shouldn't go to those who accurately guess who will win an election, but to those who do the most to make it happen.