What Happened with the Polls in 2020?

Vinod Bakthavachalam · Dec 1, 2020

Forecasting Retrospective

Our model's final forecast for the 2020 election gave Biden an 80% probability of winning and Trump a 20% probability. It expected Biden to win around 333 electoral votes, with an 80% confidence interval of 239 to 407, and to win the popular vote by 6% on average, with an 80% confidence interval of 0.07% to 13%.
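As a rough sketch of where numbers like these come from, a forecast model typically simulates the election many times and summarizes the simulations. The arrays below are hypothetical stand-ins for the model's actual output, not its real draws:

```python
import numpy as np

# Hypothetical simulation output (NOT the model's real draws): one entry
# per simulated election for Biden's electoral votes and popular vote margin.
rng = np.random.default_rng(42)
biden_ev = rng.normal(loc=333, scale=65, size=10_000)
biden_margin_pct = rng.normal(loc=6, scale=5, size=10_000)

# Win probability: the share of simulations where Biden reaches 270 votes.
win_prob = (biden_ev >= 270).mean()

# 80% confidence intervals: the 10th and 90th percentiles of the draws.
ev_lo, ev_hi = np.percentile(biden_ev, [10, 90])
margin_lo, margin_hi = np.percentile(biden_margin_pct, [10, 90])

print(f"P(Biden wins) = {win_prob:.0%}")
print(f"80% CI, electoral votes: {ev_lo:.0f} to {ev_hi:.0f}")
print(f"80% CI, popular vote margin: {margin_lo:.1f}% to {margin_hi:.1f}%")
```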

While votes are still being tallied, it appears that Biden will win 306 electoral votes and the popular vote by 4%.

The model missed four states in 2020 (three that it thought Biden would win but Trump carried, and one the reverse): Florida, Iowa, North Carolina, and Georgia. This is similar to 2016, when it missed three states: Wisconsin, Pennsylvania, and Michigan.

Overall, our model gave Trump higher odds than other sources did, but it suffered from the same issue as forecasts in 2016: the polls appeared to understate Trump's chances. The model's mean forecast looks too optimistic for Biden given what actually happened (though it did capture the fact that Biden was likely to win even under a typical polling error of 1–2 standard deviations, which is what the actual results showed, suggesting the win probability itself may have been reasonable).

Let’s dig into the data to compare the 2020 forecast to those in the two previous elections (2012 and 2016).

Here is a scatterplot showing the predicted Democratic vote share in each state and DC in each election on the x-axis vs. the actual Democratic vote share on the y-axis.

Generally, the points line up fairly well with the y=x line, and there appears to be little systematic difference across years. This suggests that 2020 was not an aberrant year compared to 2012 and 2016 in terms of poll accuracy.
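A minimal sketch of how such a plot can be drawn, with placeholder values standing in for the actual state-level data:

```python
import matplotlib.pyplot as plt
import numpy as np

# Placeholder predicted vs. actual Democratic vote shares (%), one point per state.
predicted = np.array([47.9, 49.6, 50.8, 52.1, 55.0])
actual = np.array([47.8, 49.6, 47.9, 49.5, 54.0])

plt.scatter(predicted, actual)
plt.plot([40, 60], [40, 60], linestyle="--", label="y = x (perfect forecast)")
plt.xlabel("Predicted Democratic vote share (%)")
plt.ylabel("Actual Democratic vote share (%)")
plt.legend()
plt.show()
```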

Diving deeper into the average error by year, we see that across all states the polls were actually a bit more accurate on average in 2016 than in 2020; however, in close states, where the margin was within 5%, the polls were off by more in 2020. This tracks with the fact that states like Florida, Wisconsin, Michigan, and Pennsylvania, among others, were closer than expected.
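That comparison can be sketched as follows, assuming a tidy table of predicted and actual Democratic vote shares by state and year. The column names and values here are illustrative only, not the underlying data:

```python
import pandas as pd

# Illustrative data only: one row per state per election year, with the
# poll-predicted and actual Democratic two-party vote shares in percent.
polls = pd.DataFrame({
    "year":      [2016, 2016, 2020, 2020],
    "state":     ["FL", "WI", "FL", "WI"],
    "predicted": [47.9, 49.6, 50.8, 52.1],
    "actual":    [47.8, 49.6, 47.9, 49.6],
})

polls["abs_error"] = (polls["predicted"] - polls["actual"]).abs()
polls["pred_margin"] = (2 * polls["predicted"] - 100).abs()  # two-party margin

# Average absolute polling error by year, overall and in close states only.
print(polls.groupby("year")["abs_error"].mean())
print(polls[polls["pred_margin"] < 5].groupby("year")["abs_error"].mean())
```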

These errors are also not symmetric by party. Starting in 2016 and continuing in 2020, polls appear to have systematically underestimated Republican support. This is also more of an issue in close states, where the margin was 5% or less, as the chart on the right below shows.

The states where this happened have also been largely the same across the two elections: Florida, North Carolina, Michigan, Wisconsin, and Pennsylvania. This suggests there is something different about these states that makes polling them harder.

One thing to keep in mind is that these states' votes are historically quite correlated, as the heat map below shows. It plots the correlations between Democratic vote shares in presidential elections from 1980 to 2020 among the states that were closest in the 2020 election.

States like Wisconsin, Pennsylvania, North Carolina, Nevada, and Michigan are quite correlated with one another. This means that if polls are off in one of these states, they are likely to be off in the others as well, suggesting the polls may have missed something systematic about the electorate.
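A minimal sketch of building such a correlation matrix, assuming a long-format table of Democratic vote shares by state and election year (the values here are placeholders, not the 1980–2020 results):

```python
import pandas as pd

# Placeholder long-format table of Democratic presidential vote share (%).
votes = pd.DataFrame({
    "year":      [2012, 2016, 2020] * 3,
    "state":     ["WI"] * 3 + ["PA"] * 3 + ["MI"] * 3,
    "dem_share": [52.8, 46.5, 49.5, 52.0, 47.5, 50.0, 54.2, 47.0, 50.6],
})

# Pivot to one column per state and one row per election, then correlate
# the columns pairwise; the result is the matrix behind the heat map.
wide = votes.pivot(index="year", columns="state", values="dem_share")
print(wide.corr().round(2))
```

Passing that matrix to a plotting function like seaborn's heatmap would produce a chart in the style shown above.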

All said and done, at a high level the polls did not do too poorly, especially compared to other years. However, the error was asymmetric, as in 2016, and seemed to underestimate Republican support in the closest states to a greater degree than in 2016. This raises the question of why the polls missed in this particular way.

A Review: What Happened in 2016?

The overall conclusion around polling error in 2020 is quite similar to what pollsters found in 2016. Then, polls seemed to underestimate Republican support in specific states, and pollsters traced the error back to not weighting their survey samples to match the population's distribution of education.

The way polling typically works is that pollsters draw a random sample of voters from a national or state population and survey them; however, not everyone responds. Using the raw set of responses would therefore be biased by this nonresponse. To correct for it, pollsters typically weight responses based on a set of demographic characteristics; before 2016, these were usually gender, race, and age. They weight responses so that the distribution in the sample matches that of the target population (so if the sample had proportionally fewer Asian respondents than the state being surveyed, they would upweight the responses of Asians in the sample).

This ensured that the sample appeared demographically similar to the target population.
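A hedged, one-variable sketch of that reweighting (real pollsters rake over several variables at once, and the population shares below are invented for illustration):

```python
import pandas as pd

# Hypothetical raw survey responses with a single demographic column.
sample = pd.DataFrame({
    "race": ["White"] * 5 + ["Black"] * 2 + ["Hispanic"] * 2 + ["Asian"],
    "supports_dem": [0, 1, 0, 1, 0, 1, 1, 1, 0, 1],
})

# Assumed population shares for the state being surveyed (illustrative).
population_share = {"White": 0.55, "Black": 0.15, "Hispanic": 0.15, "Asian": 0.15}

# Weight = population share / sample share: Asians are 10% of this sample
# but 15% of the assumed population, so their responses are upweighted (1.5x).
sample_share = sample["race"].value_counts(normalize=True)
sample["weight"] = sample["race"].map(lambda r: population_share[r] / sample_share[r])

# Weighted estimate of Democratic support.
est = (sample["supports_dem"] * sample["weight"]).sum() / sample["weight"].sum()
print(f"Weighted Democratic support: {est:.1%}")
```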

However, in 2016 education was a crucial demographic factor in predicting voter support, and many pollsters did not adjust for it in their survey weighting. Voters without a college degree were much more likely to support Trump, while voters with a college degree were much more likely to support Clinton.

Because many polls did not weight their samples to match the education distribution of the nation or the relevant state, and because more educated voters were also more likely to respond to polls, they underestimated Trump's support.

Many pollsters changed their methods to weight by education going forward, so this should no longer have been an issue in 2020 and beyond.

What Happened in 2020?

Despite pollsters correcting their weighting to include education, similar issues of underestimating Trump's support, especially in key states, still occurred.

There are three main potential hypotheses for what could have happened to the polls in 2020:

  1. The shy Trump voter hypothesis, whereby Trump/Republican voters are reluctant to admit to pollsters that they are voting for him
  2. There was a late swing towards Trump that polls did not have time to fully capture
  3. There was another source of error in the respondent weightings in polls

The shy Trump voter hypothesis has been debunked by comparing polls conducted with live interviewers to polls conducted without them. If the hypothesis were true, we would expect live polls to show systematically lower support for Trump than other polls, but this is not the case.
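The check behind that debunking can be sketched as follows, assuming a table of polls tagged by interview mode (the numbers are placeholders):

```python
import pandas as pd

# Placeholder poll-level data: Trump support (%) by interview mode.
polls = pd.DataFrame({
    "mode":          ["live", "live", "live", "online", "online", "ivr"],
    "trump_support": [46.0, 45.5, 47.0, 46.5, 45.8, 46.2],
})

# If voters were shy only with live interviewers, live polls should show
# systematically lower Trump support; compare the averages by mode.
print(polls.groupby("mode")["trump_support"].agg(["mean", "count"]))
```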

Furthermore, it is hard to see how this shy Trump voter propensity could have changed so much between 2016 and 2020 as to explain why the underestimation of Trump's support grew between the two elections.

On the late swing towards Trump, there is some evidence for this. Polls just before election day showed a tightening of the race in exactly the states that ended up being the closest. However, this was not universally true across the states that were closer than expected, so it cannot completely explain the results.
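One way to probe for such a late swing, sketched here with made-up dates and margins, is to compare the poll average in the final week to the average before it:

```python
import pandas as pd

# Placeholder state polls with a field-end date and a Biden margin (%).
polls = pd.DataFrame({
    "end_date": pd.to_datetime(
        ["2020-10-15", "2020-10-20", "2020-10-25", "2020-10-29", "2020-11-01"]),
    "biden_margin": [8.0, 7.5, 7.0, 5.5, 5.0],
})

# Split at one week before election day; a tightening race shows a
# smaller average margin in the final week than in the weeks before.
final_week = polls["end_date"] >= pd.Timestamp("2020-11-03") - pd.Timedelta(days=7)
print("before final week:", polls.loc[~final_week, "biden_margin"].mean())
print("final week:", polls.loc[final_week, "biden_margin"].mean())
```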

The final hypothesis concerns whether something was missing in the way pollsters weighted their samples. Currently, pollsters tend to use random digit dialing to reach a random cellphone or landline number and then adjust their sample to match the population on demographic characteristics, typically gender, race, education, and age.

One thing they don't typically try to match on, though, is geography, or the density of the place where someone lives. Density, however, is becoming an increasingly strong predictor of voting patterns. Indeed, essentially all of the Democratic vote comes from high-density urban centers, while the Republican vote comes from less dense, rural areas. This is true even in typically blue or red states, as the county-level map for 2020 shows.
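A sketch of how density could be folded into the weighting, via a simple two-variable raking (iterative proportional fitting) loop; the categories and population targets below are assumptions for illustration:

```python
import pandas as pd

# Hypothetical respondents labeled by education and by the density of
# where they live; both columns would come from the survey itself.
sample = pd.DataFrame({
    "education": ["college", "college", "no_college",
                  "college", "no_college", "no_college"],
    "density":   ["urban", "urban", "rural", "suburban", "rural", "urban"],
})
sample["weight"] = 1.0

# Assumed population targets per dimension (each sums to 1; illustrative).
targets = {
    "education": {"college": 0.35, "no_college": 0.65},
    "density":   {"urban": 0.30, "suburban": 0.40, "rural": 0.30},
}

# Raking: alternately rescale weights so each dimension's weighted shares
# match its target; repeated passes converge toward a joint fit.
for _ in range(50):
    for col, target in targets.items():
        shares = sample.groupby(col)["weight"].sum() / sample["weight"].sum()
        sample["weight"] *= sample[col].map(lambda c: target[c] / shares[c])

print(sample.groupby("density")["weight"].sum() / sample["weight"].sum())
```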

As the plot below shows, the correlation between Democratic vote shares at the county level in 2016 and 2020 was also quite high, suggesting that a county level forecast would have been quite accurate as well.

The Conclusion

It seems that the polls overall were not too bad, but they did systematically underestimate Trump a bit. This is likely because the reweighting pollsters do does not take into account the geographic divide, which is becoming an increasingly important factor in how people vote.

Much like after 2016, it seems that small methodological changes in response to 2020 could improve the polls for the next election.

Vinod Bakthavachalam

I am interested in politics, economics, & policy. I work as a data scientist and am passionate about using technology to solve structural economic problems.