Louisiana Governor’s Polling: Who Got it “Right”

The primary in Louisiana’s gubernatorial election wrapped up on Saturday night with a result that seemed impossible just one year ago. Democratic State Rep. John Bel Edwards finished the primary with 39.9% of the vote, with former frontrunner Republican US Senator David Vitter limping in with only 23%. Republican Public Service Commissioner Scott Angelle rallied to 19%, while Republican Lt. Gov. Jay Dardenne brought up the rear with 14.9% of the vote.

No one saw that coming a year ago. But how about last month?

Polls produced by independent (or quasi-independent) groups were few and far between. By my count, there were nine major (widely-reported) polls on the Governor’s race in the final five weeks of the campaign. So who got it “right”?

First, some clarity: “Right” is a difficult concept in and of itself. A bad poll can correctly predict the outcome even if it is methodologically deficient. A good poll can miss even if it follows the highest standards. Anyone can get lucky.

More troubling, a poll can only be “right” at the time it is sampled. The refrain that “a poll is only a snapshot in time” is both true and confounding: we ask polls to test a thesis that cannot be proven, because we want them to predict a future outcome. On Tuesday, Candidate A might lead, but on election day, Candidate B might win. The poll showing the former is not “wrong,” and a poll showing the latter is not inherently “right.”

Nevertheless, we want polls to be instruments of prediction. By that standard, whom can we trust? In the Louisiana Governor’s race, we saw a wide variety of methods and samples, from online panels to interactive voice response (touch-tone robo-polls) to live calls from operators. What conclusions, if any, can we make?

Tracked Polls since 9/20/15

Sadly, none. But we can make a few helpful observations:

  1. Sample size! With three candidates crowded behind the leader (Edwards), larger-sample polls naturally produced slightly more accurate results. With an electorate of approximately 1.1 million (a figure that was itself hotly debated, as many assumed a much larger turnout), random samples of approximately 600–800 yield margins of error in the ±4% range, while samples over 1,000 yield margins of error in the ±3% range (at 95% confidence). That extra point on each side really makes a difference in a tight election, especially in Louisiana’s multi-candidate jungle primary.
  2. Consider Me Lately. The most predictive polls in our sample, in order, were MRI’s late tracking poll, KPLC’s “likely voter” sample, and Marbleport’s large-sample IVR. None was spot-on, however. Polling should become more accurate as it moves closer to election day. Yet many dismissed Verne Kennedy’s late tracking poll as a product of motivated reasoning: Kennedy’s client, John Georges, was openly supporting Scott Angelle, and the MRI polls continuously showed Angelle closing in on or eclipsing Vitter (even showing him first in August!). Whether he made his own reality or not, Kennedy did pretty well with his final tracking polls.
  3. Cell phones? Who needs them! Phone polls draw their samples from lists of registered voters. IVR (automated) polls cannot dial cell phones (the FCC bans autodialed calls to them), so they miss voters who listed a cell number with the Secretary of State. The CDC, which conducts extensive public health surveys, has found that almost 38% of US households are cell-phone-only. JMC Analytics reports that up to 22% of Louisiana voters list a cell phone in their voter record, but suggests, through some A/B testing, that this has little bearing on results. From our small sample, all we can say is that it is not clear that reaching cell-phone voters would yield more accurate polling: both IVR and live-call surveys (the latter of which can reach cells) were able to produce “accurate” results. More study of this is necessary in Louisiana.
  4. The Internet Is Coming. UNO partnered with Lucid, a NOLA-based online market research and analytics firm, to produce an experimental poll similar to those conducted by the New York Times and YouGov during the 2014 midterms. This technique differs from phone surveys: people surfing websites are recruited into an “online panel” by choosing to “answer surveys” after an on-screen prompt, and those panels are then fed various surveys based on their self-reported demographic information. This polling does not currently match respondents against the voter file, making it impossible to know whether someone is telling the truth about their voting status. It also isn’t a statistically random sample, since respondents self-select (opt in) to the surveys. You can read more from UNO-Lucid here about their methodology. The results? Mixed. UNO’s poll wasn’t bad, calling the vote shares for three candidates pretty accurately. Unfortunately, its result for John Bel Edwards was off by more than 14 points. As this method matures, it will likely overtake phone polls, given diminishing response rates and the proliferation of cell-phone-only households.
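The margin-of-error figures in point 1 can be checked with the standard formula for a simple random sample, z·√(p(1−p)/n), evaluated at p = 0.5 (the worst case) and z = 1.96 for 95% confidence. This is a generic textbook illustration, not any particular pollster’s method:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case margin of error for a simple random sample of size n.

    p=0.5 maximizes p*(1-p); z=1.96 corresponds to 95% confidence.
    With an electorate of ~1.1 million, the finite-population
    correction is negligible at these sample sizes, so it is omitted.
    """
    return z * math.sqrt(p * (1 - p) / n)

for n in (600, 800, 1000):
    print(f"n={n}: +/-{margin_of_error(n):.1%}")
# n=600  -> about +/-4.0%
# n=800  -> about +/-3.5%
# n=1000 -> about +/-3.1%
```

The outputs line up with the ranges above: samples of 600–800 sit near ±4%, and samples over 1,000 drop to roughly ±3%.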
MRI’s last tracking poll (adjustments = AfAm undecideds for the Democrat)

Throughout the polling, it’s clear that both response rates and voter interest really affected results. The large undecided numbers, even among self-described “likely voters,” turned out to be a harbinger of the poorer-than-predicted turnout (38.5%, according to the returns). Smart folks had argued before Saturday that turnout predictions weren’t that far off previous elections (2011 saw 37%). However, Saturday fell below even the dour predictions of the week prior; guesses by the Secretary of State and others were in the 47% range.

Looking forward to November 21st, we will no doubt see a raft of polls from outside groups as national interest in the election grows. When reading these polls, it’s important to look closely at the pollster’s assumptions and track record, and at the methodology employed: Is it an auto-dial? Is it an online poll? Has the pollster worked in Louisiana before? Both Marbleport and Triumph, little-known pollsters in the Louisiana market, released months of surveys, which allowed them to gauge the strength of their data against the trendlines. Similarly, long-tenured pollster MRI released almost weekly tracking polls to an internal group, giving some context for its “outside the pack” numbers.

Polls should not be the one-stop shop for understanding politics. There’s so much more to understanding our democratic process. However, polls can rein in some of the excesses of “gut-level” political discourse and properly focus the conversation against the tide of “spin” from political operatives.

In the end, each poll is no more important than the last. The wisdom of the aggregate is the most instructive. Remember, kids: “don’t watch the headlines, watch the trendlines.”
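The “watch the trendlines” advice can be made concrete with a simple rolling average over successive polls for a single candidate. The numbers below are invented purely for illustration; no real poll data is used:

```python
from statistics import mean

def trendline(readings, window=3):
    """Rolling average over the last `window` polls (oldest to newest).

    Smoothing over several polls damps out any single outlier,
    which is the point of watching trendlines over headlines.
    """
    return [round(mean(readings[max(0, i - window + 1): i + 1]), 1)
            for i in range(len(readings))]

# Hypothetical topline numbers (percent) for one candidate over time.
polls = [33, 36, 35, 38, 39]
print(trendline(polls))  # [33, 34.5, 34.7, 36.3, 37.3]
```

A single poll at 38 or 39 might be noise; the smoothed series rising from the low 30s toward 37 is the signal an aggregate is meant to reveal.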