Were the Polls in 2016 Wrong?
When Nate Silver declared on November 1st, 2016 that “Donald Trump Has a Path to Victory,” many political analysts were quickly incensed at the notion. Donald Trump was rarely viewed by the media as a serious political contender, even as he cruised to the Republican nomination. The prevailing question at the time was not whether Hillary Clinton would win the presidency, but rather by how much. Of the nearly 50 national polls conducted in the month of October, only 6 had Donald Trump in the lead. How could he possibly have a path to victory given those polling numbers? Well, as we know now, he did indeed have a path to victory.
What Went Wrong?
For one, the media fixated on the national polling numbers while virtually ignoring the emerging story in the Midwestern battleground states (Pennsylvania, Michigan, and Wisconsin). Even though the national numbers had tightened during the month of October, Hillary Clinton’s lead in the RealClearPolitics polling aggregate was still outside of the margin of error.
The Margin of Error?
The margin of error is a way to quantify how much random sampling error there is in a given poll. Why do pollsters take random samples in the first place? Because it’s not economically or practically feasible for any pollster to reach every single registered voter to perform a survey. As a result, pollsters survey random samples of the registered voting population. However, to account for the fact that the poll was performed on a random sample of the population and not the population itself, the pollster always provides the margin of error for the poll. Take this quote from the pollster Selzer & Company, from a poll they conducted on behalf of Bloomberg Politics in the final days before the election:
Percentages based on the subsample of 799 likely voters in the 2016 general election may have a maximum margin of error of plus or minus 3.5 percentage points. This means that if this survey were repeated using the same questions and the same methodology, 19 times out of 20 times, the findings would not vary from the percentages shown here by more than plus or minus 3.5 percentage points. Results based on smaller samples of respondents — such as by gender or age — have a larger margin of error.
Selzer & Company tucks away at the very bottom of their poll a nugget of information that is lost on the general public when the media reports on polls. Even though a pollster reports a certain result, they are not at all guaranteeing that the result is exact. In fact, they are saying that if the poll were repeated with many samples drawn the same way, the results of 95% of those samples would fall within plus or minus 3.5 percentage points of the results shown.
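Where does a figure like ±3.5 points come from? It can be reproduced with the standard formula for the 95% margin of error of a sample proportion. Below is a minimal sketch (the function name is ours, not Selzer & Company’s), assuming a simple random sample and the worst-case proportion p = 0.5, which is what pollsters mean by a “maximum” margin of error:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a sample proportion.

    Assumes a simple random sample of size n; p = 0.5 is the
    worst case, giving the "maximum" margin pollsters report.
    z = 1.96 is the critical value for 95% confidence.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Selzer & Co.'s subsample of 799 likely voters:
moe = margin_of_error(799)
print(f"{moe * 100:.1f} percentage points")  # → 3.5
```

Note how the margin shrinks only with the square root of the sample size: quadrupling the sample merely halves the margin, which is why the “smaller samples of respondents” mentioned in the quote carry a larger margin of error.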
Why Is This Important?
While the margin of error might seem like a minor detail to some, the concept is absolutely crucial to interpreting poll results. Take our Selzer & Co. poll for example: the poll found Hillary Clinton leading by 3 points in the final run-up to the election, but the margin of error in this poll was a sizable 3.5 percentage points! To many Americans, that seemed like a significant lead, but in reality, Selzer & Co. had declared that any result in the range from +0.5 Trump (3 minus 3.5 percentage points) to +6.5 Clinton (3 plus 3.5 percentage points) was consistent with their poll. Suddenly her lead no longer seemed that large, right? Her lead of +3.2 percentage points in the RealClearPolitics average of polls before the election seemed impenetrable at the time, but with an understanding of how the margin of error works, we can now determine that it was most certainly not. Of course, that’s not even accounting for the fact that she ended up winning the popular vote but losing the electoral college. The RealClearPolitics polling average of +3.2 percentage points for Clinton actually ended up being quite close to her final margin of victory in the popular vote of +2.1 points. But can our newfound understanding of the margin of error help explain why she lost the electoral college? Yes, indeed!
The Forgotten States
Prior to 2016, the last time a Republican had won the states of Michigan or Pennsylvania was 1988, and the last time a Republican had won the state of Wisconsin was 1984. These states were considered part of the Democratic Party’s “blue wall,” states that they had won year after year since at least 1992. Much has been made of Clinton ignoring the three Rust Belt states that ended up costing her the election, but for the purposes of our analysis, we want to understand whether a better grasp of the margin of error might have helped us foresee this outcome.
As we can see in the above chart, Clinton’s lead in Pennsylvania was fragile at best. Political analysts around the country, as well as the Clinton campaign, took Pennsylvania as a given for Clinton. In reality, every single poll conducted in the final week of the election pointed to a possible Trump victory in the state. Clinton did not have even one poll in the state that gave her a lead outside of the margin of error.
As for Michigan and Wisconsin, Clinton did end up having a lead just outside of the margin of error in both states. However, an understanding of the margin of error would certainly have helped temper the popular opinion at the time that Clinton’s lead was commanding. In addition, the polls in these states had other issues outside the scope of this analysis. As Nate Silver said in his analysis from November 1st, 2016 that was linked to earlier:
This time around, we haven’t seen too many of those polls in Clinton’s firewall states, such as Colorado, Pennsylvania, Wisconsin and Michigan. But that’s misleading, because we haven’t seen many high-quality polls from those states, period! We have seen lots of polls from North Carolina and Florida — for some reason, they get polled far more than any other states — and plenty of them have shown Trump gaining ground, to the point that both states are pure toss-ups right now.
What are the Final Takeaways from 2016?
While there have been countless post-mortems written about the 2016 election, there are without a doubt lessons to be learned for future elections. For 2020 and beyond, political analysts would better serve the public if they mentioned the margin of error whenever they report on political polls and clearly explained it each time (not just in a footnote at the bottom of the article). This would ensure a better-informed public and reduce the distrust in political polling that 2016 created. Finally, we must be wary of the polling aggregate for a particular state when the number of polls conducted in that state is low compared to other states.