The One Thing We Can Agree On as Political Polling Begins: Understanding Statistics

Wisconsin Gov. Scott Walker may be surging as an early favorite for the Republican presidential nomination — but the numbers show the person leading among potential Iowa caucus-goers often doesn’t win. In the Democratic field, Hillary Clinton is in the most dominant position ever for a non-incumbent, with odds of clinching the nomination as high as 91 percent.

Over at FiveThirtyEight, the site launched by statistician Nate Silver, the race is already on to handicap the 2016 presidential contest, a reminder that the season of campaigning, polls, predictions and surveys is descending upon us.

Which makes it a perfect time to think about statistics.

Even this early in the game, we’ll start to see polling from all sides of the political spectrum, making us ponder which results are valid and which might be biased. We’ll find ourselves targeted by messages that seem tailored specifically to us — how exactly did that happen? To make an educated choice in any campaign, we need more than ever to understand how statistics underpin political calculations, framing what we learn and observe about campaigning and the political process.

As you’ll begin to notice, modern campaigns are all about microtargeting, or using data mining techniques to identify and segment groups of voters with similar characteristics and behavior, and, increasingly, to target individual voters. Campaigns then use that data to craft specific ads and messages aimed at particular groups, and to direct get-out-the-vote efforts.

We’ll explain more in this post about microtargeting, and how it works. But before we get there, it’s useful to first understand the history of political campaign polling, and the origins of accurate sampling and surveys. That’s where the legacy of The Literary Digest comes in.

A popular national weekly opinion magazine of the 1920s that featured Norman Rockwell illustrations on its cover, The Literary Digest mailed out sample ballots to its readers during election seasons, and used the results to predict the outcome of the vote. Its poll was a much-anticipated event; up through 1932, it had been accurate. In the summer of 1936, the magazine mailed out 10 million ballots and got back 2.4 million. At the time, this qualified as “Big Data.” On the strength of those results, it predicted a landslide victory — for Republican Alf Landon.

Franklin D. Roosevelt, of course, won in a landslide, with nearly 61 percent of the popular vote.

What went wrong? Along with polling its readers, The Literary Digest also mailed ballots to lists of automobile owners and telephone subscribers. In the depths of the Depression, when millions were jobless and destitute, people who could afford such things were hardly representative. They were wealthier and more Republican than the average voter, and their responses produced a biased prediction.

By contrast, a young advertising executive named George Gallup became convinced in 1935 that what mattered in measuring public opinion was not the quantity of people surveyed, but their representativeness — the degree to which they reflected the views of the general population. He believed that 2,000 people chosen scientifically would be a better predictor of electoral outcomes than millions chosen the way The Literary Digest chose them. He conducted biweekly polls of that same presidential campaign, showing Roosevelt leading by increasing amounts from August through October — the start of a tradition of tracking polls for presidential elections that continues, in various forms, today.

Gallup not only predicted correctly that Roosevelt would win; he also accurately predicted the outcome of The Literary Digest poll. He understood the power of random sampling: a small, representative sample is more accurate than a large sample that is not representative.

Gallup’s lessons resonate today, even with far more sophisticated polling methods and information. Take Big Data, for example. Big Data are not necessarily good data. In fact, well-designed small-sample surveys can produce more accurate results than huge datasets that just happen to be lying around.
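To make that concrete, here is a minimal simulation sketch in Python. The population size and response rates are invented for illustration, not a reconstruction of the 1936 data: a 2,000-person random sample, Gallup-style, goes up against a far larger sample with a Digest-style response bias.

```python
import random

random.seed(1936)

# Hypothetical population of 1,000,000 voters; 62% support candidate A.
population = [1] * 620_000 + [0] * 380_000
random.shuffle(population)

# Gallup-style: a small sample of 2,000 voters chosen completely at random.
small_random = random.sample(population, 2_000)
print(f"random n=2,000 estimate: {sum(small_random) / 2_000:.3f}")

# Digest-style: a huge sample, but non-supporters (stand-ins for wealthier,
# more Republican households) respond at twice the rate of supporters.
biased = [v for v in population
          if random.random() < (0.15 if v == 1 else 0.30)]
print(f"biased n={len(biased):,} estimate: {sum(biased) / len(biased):.3f}")
# True value is 0.620; the small random sample lands within a point or two,
# while the sample a hundred times larger misses by double digits.
```

The biased sample’s sheer size buys nothing: the bias persists no matter how many ballots come back, which is exactly the trap the Digest fell into.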

All this brings us, as promised, to the advent of microtargeting, a campaign tool that grew to prominence with the 2004 presidential campaign and is a widely used strategy today. At the most basic level, microtargeting involves taking what we know about some voters (usually the results of survey calls) and combining it with what we know about everyone (census data, commercial marketing data, etc.). From there, we build statistical and machine learning models that make predictions about the people who weren’t surveyed.
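In code, the core move might look something like the sketch below. The voter file, features, and model here are all stand-ins; real campaigns use far richer data and models. The idea is simply to fit a model on the surveyed voters, then score everyone else.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in voter file: one row per voter, columns for attributes known
# about everyone (e.g., age, neighborhood income index, past turnout).
voter_file = rng.normal(size=(100_000, 3))

# Survey a small subset; 1 = told us they support our candidate.
# (The response rule here is synthetic, just to give the model a signal.)
surveyed = rng.choice(100_000, size=2_000, replace=False)
responses = (voter_file[surveyed] @ np.array([0.8, -0.5, 0.3])
             + rng.normal(size=2_000) > 0).astype(int)

# Fit on the 2,000 surveyed voters, then score the entire file.
model = LogisticRegression().fit(voter_file[surveyed], responses)
support_score = model.predict_proba(voter_file)[:, 1]  # P(support), per voter
```

The result is a support score for every voter in the file, surveyed or not; scores like these drive the targeting decisions described next.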

Ken Strasma, who teaches a microtargeting course for Statistics.com and served as Barack Obama’s targeting director in 2008, says it’s perfectly reasonable to wonder why campaigns bother targeting specific voters at all. If the campaign has a good message, why not just get that message to all voters?

First, campaigns are dealing with limited resources. They need to spend those resources where they can get the biggest bang for the buck. That means targeting people who are actually persuadable and supplying them with the messages most likely to motivate them. Many voters are solidly committed to one candidate or the other long before Election Day; communications to these voters are a waste of the campaign’s limited resources.

Later, when the campaign shifts to GOTV (get-out-the-vote) mode, it wants to turn out its supporters. For this, the campaign needs a good sense of who supports it, and who might not vote unless contacted. You don’t want to turn out your opponent’s supporters, and you don’t want to waste resources on supporters who are going to vote anyway. The ideal GOTV target is someone who will support your candidate if they vote, but who may or may not turn out.

Ideally, a campaign would survey everyone, then only communicate with those who were persuadable, and only turn out those who were supportive but might not vote. It’s impossible to survey everyone, but the statistical and machine learning methods behind microtargeting give us the next best thing: accurate predictions of how any voter would have answered the survey if he or she had been called.
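Combining two such predictions, a modeled probability of support and a modeled probability of turnout, yields a simple prioritization rule. This is a sketch of the logic with hypothetical voters and scores, not any campaign’s actual formula:

```python
# Sketch of GOTV prioritization from two modeled scores per voter.
def gotv_priority(p_support: float, p_turnout: float) -> float:
    # Expected net votes gained by a contact that gets this voter to the
    # polls: highest for likely supporters with low baseline turnout.
    return p_support * (1.0 - p_turnout)

# Hypothetical voters: (P(support), P(turnout)) from models like the above.
voters = {"A": (0.90, 0.95), "B": (0.90, 0.40), "C": (0.30, 0.40)}
ranked = sorted(voters, key=lambda v: gotv_priority(*voters[v]), reverse=True)
print(ranked)  # ['B', 'C', 'A']
```

Voter A is a supporter but will show up anyway; voter C is a turnout risk but probably not a supporter; voter B, the likely supporter who may stay home, is exactly the profile described above.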

Now that you have a basic grasp of microtargeting, you’ll understand why, if you’re a middle-of-the-road voter, that phone keeps ringing at election time. And here’s a question to think about: which party is better at it?

Strasma explains that it’s often hard for voters to tell, since press coverage tends to overstate the microtargeting expertise of the winners, and to exaggerate the failings of the losing campaign. For now, he would handicap the microtargeting contest this way:

“My current assessment is that the best Democratic operatives have about an 18 month lead on our Republican counterparts, but that the Republicans are working hard to close that gap,” Strasma says.

Campaigns on both sides will be working hard to reach you. Understand the basics of statistics, surveying and microtargeting, and you increase your odds of making an informed choice.

(Peter Bruce is founder of The Institute for Statistics Education at Statistics.com, the leading online provider of analytics and statistics courses since 2002. He is also the author of the newly released Introductory Statistics and Analytics: A Resampling Perspective (Wiley).)

Follow Peter:
Twitter: @petercbruce, @statisticscom
Websites: www.statistics.com, www.introductorystatistics.com
