Predicting Presidential Elections

Chip
4 min read · Apr 13, 2016

Crammed into a large hall. The cheapest ticket $25. I sat high in the balcony. Nate Silver was speaking at the University of California, Berkeley. I looked down at my program. On the first page I read:

Silver was voted “Coolest Man of the Year” in the San Francisco Chronicle’s reader’s poll last year, after accurately predicting 50 out of 50 states in the 2012 election.

WHAT!!!??!!?!?

First thought: What are the odds of a statistician becoming cool?
More importantly: Is 50 out of 50 impressive?

Nate built a forecasting model that assigns a probability to the event that Barack Obama receives more votes than Mitt Romney in a given state. Drawing on data sources such as media polls and news articles to measure momentum, rather than on weakly predictive macroeconomic factors, he did not say that North Carolina would surely support the Republican or that Florida was guaranteed to back the Democrat.

Is it fair to assume that we know nothing about elections? Does 2012 present a radically different environment, making all prior experiences irrelevant? Is a coin the best default for predicting red or blue? If so, Mr. Silver pulled off a 1 in a quadrillion feat.
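
As a quick back-of-the-envelope check of that quadrillion figure (a minimal Python sketch, assuming nothing beyond a fair coin per state):

```python
# With a fair coin for each of the 50 states, the chance of calling
# every single one correctly is (1/2)^50.
p_all_correct = 0.5 ** 50
print(f"1 in {1 / p_all_correct:,.0f}")  # 1 in 1,125,899,906,842,624 -- about a quadrillion
```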

But that is silly. Why assume that all possible outcomes are equally likely? What is the probability that Romney wins Hawaii? Assigning probabilities to specific unprecedented events is consistent with Bayesian thought. I caution against thinking in certainties or impossibilities, but there is no reason to abandon all of your accumulated knowledge. Bring some historical facts to the table and presume that the past and present are part of the same system. Looking at each state's presidential results from one election to the next over the past 28 contests, two patterns emerge:

1) States don’t wildly switch their party preference between elections.

2) A state is roughly as likely to become more radical as it is to become more moderate.

Example: Nebraska voted 69% Republican in 1952 and 66% in 1956, translating into a shift of 3%. I only included donkey and elephant votes, so it doesn’t matter which party is measured or the direction of the shift. Probabilities associated with shifts can be modeled with a Beta Distribution. The probability of the shift required to change the color of a state from 2008 to 2012 is charted below.
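
Here is a minimal sketch of what that shift model could look like in Python. The shift values are made-up placeholders standing in for the 28-contest data set, and scipy’s Beta fit is one reasonable choice rather than the exact method used here:

```python
# Sketch: model absolute election-to-election shifts in the two-party vote
# share (values in [0, 1]) with a Beta distribution. The shifts below are
# illustrative placeholders, not the actual historical data.
import numpy as np
from scipy import stats

shifts = np.array([0.03, 0.05, 0.02, 0.08, 0.04, 0.01, 0.06, 0.03, 0.07, 0.02])

# Fit a Beta distribution with its support pinned to [0, 1].
a, b, loc, scale = stats.beta.fit(shifts, floc=0, fscale=1)

# Hypothetical state: the 2008 winner took 53% of the two-party vote,
# so a shift of at least 3 points is needed to flip it in 2012.
required_shift = 0.53 - 0.50
p_big_enough_shift = stats.beta.sf(required_shift, a, b)  # P(shift >= 3 points)
print(f"P(shift large enough to flip the state) ≈ {p_big_enough_shift:.2f}")
```

A Beta distribution is a natural pick here because shifts in two-party vote share, like the distribution itself, live on the interval from 0 to 1.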

The 2008 results correctly predict all but Indiana and North Carolina in the 2012 election. So is going 50 for 50 as easy as picking a few swing states to swing? Not quite. Given some prior knowledge of elections, and before any new information is collected, it can be assumed that Obama had approximately a 7% chance of winning Wyoming. The product of the above probabilities puts the odds of a perfect forecast at about 1 in 1,027,937. In other words, if you flip 50 weighted coins, there is roughly a 1 in 1 million chance of getting all 50 correct. That does not mean you can round up a million people and be virtually guaranteed that someone goes 50 for 50. Using the probability of a perfect forecast and the Binomial Distribution, we see the following:
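
A sketch of those two calculations in Python. The per-state probabilities are placeholders (only the roughly 7% Wyoming figure and the 1 in 1,027,937 total come from the analysis above); the second step is just the Binomial chance of zero perfect forecasters, turned around:

```python
import numpy as np

# Step 1: a perfect forecast means calling every state correctly, so its
# probability is the product of the 50 per-state probabilities. This mix is
# a placeholder chosen to land in the same ballpark as 1 in 1,027,937.
p_state_correct = np.array([0.90] * 30 + [0.60] * 20)   # illustrative only
print(f"placeholder product: 1 in {1 / np.prod(p_state_correct):,.0f}")

# Step 2: with p = P(perfect forecast), the Binomial probability that none
# of n independent forecasters is perfect is (1 - p)^n, so
# P(at least one perfect forecast) = 1 - (1 - p)^n.
p_perfect = 1 / 1_027_937
for n in (1_000_000, 3_000_000, 10_000_000):
    p_someone_perfect = 1 - (1 - p_perfect) ** n
    print(f"{n:>10,} forecasters -> P(someone goes 50 for 50) ≈ {p_someone_perfect:.2%}")
```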

There are still a quadrillion possibilities, but once you reach about 3 million forecasters (each flipping the weighted coins), the probability that someone achieves perfection has converged close to 100%. From a Bayesian perspective, it is always okay to update your prior knowledge. No need to approach every situation with a clean slate. Next steps might include learning more about elections, like geographic clustering. It is also imperative to learn more about the current environment. Are Dems spending any money in Utah? And so on.

Note: Nate didn’t discuss any of this. In fact, I was disappointed that he spent his time talking about himself and his personal politics.

— Chip
