Presidential Polls Appear To Influence Public Opinion More Than Measure It
The influence of polls is enormous for something that is intended to measure public opinion … not influence it. As we head into the first caucus, there is no justification for polls that include limitations like a robot pollster asking a person on a landline to “Press 9 to hear more candidate names.”
The new Emerson Poll out of New Hampshire is being cited all over the internet. This 657-respondent poll weights its results by age, education, gender and even the mode of the interview. People either took the survey by landline or on a computer.
Somehow it’s considered perfectly acceptable to conduct a poll in which seven candidates are robo-read to the respondent in the first set, and anyone who wants to hear more choices has to “press nine” on their telephone.
The limitations of robocalling in this election are immense due to the sheer number of candidates in the field. There simply aren’t enough buttons on a phone keypad to accommodate all the choices in one pass.
Polls conducted by live interviewers include instructions to randomize the order of candidates for a reason. Order matters. It alters the results.
Furthermore, what logic was used to structure this order? It’s not alphabetical, obviously. It’s also not the order in which they entered the race. It’s certainly not how well they are doing in other polls, because Steyer and Gabbard absolutely clobber Deval Patrick, who didn’t even enter the race until November 14 and is polling at less than one percent in New Hampshire. It’s certainly not ordered by the number of unique donors. So exactly how did Steyer and Gabbard end up on “page two” while Deval Patrick was on page one?
That’s not even the half of it, though. Remember when I mentioned that 657 people were surveyed? Far more people actually started the survey, but 1,245 didn’t finish, so their results were discarded. Maybe they grew irritated toward the end, once the questions about choosing among the top four candidates came around, and they either logged off or hung up. I know I might have.
But wait… There’s more.
According to the disclosure information, other respondents’ answers were left out of the final results too … even if they had completed their surveys. The pollsters wanted to make sure that the people they called were real people who were paying attention. So, they additionally disregarded all respondents with “unusual patterns.”
They didn’t explain which specific unusual patterns were examined for discarding purposes. The book Survey Methodology gives many examples of unusual behavior, and they include more than the obvious “all answers are the first option” pattern. Unusual patterns of responses among a set of correlates, for example, would also qualify.
It would obviously be unusual for an 18-year-old to hold a doctorate, but this survey asked for an age range, not an age, so it wasn’t that. One might wonder, “Would it be unusual for someone about to vote in the Democratic primary to think that the DNC debates have been unfair and that the Senate shouldn’t remove Trump from office?” Maybe it’s unusual to keep answering “someone else” for the final chunk of questions.
We don’t even know how many unusual Americans’ surveys were discarded. Two? Forty-two? Five hundred?
We Must Keep In Mind Margin-of-Error/Credibility Interval Figures
So, Andrew Yang is sitting at six percent in the Emerson poll. But with a credibility interval (similar to a margin of error) of more than three percent, Yang could actually be polling at nine percent while Klobuchar sits at seven.
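As a rough illustration, here is a sketch of where a figure like that comes from. This is the conventional worst-case margin of error for a simple random sample, not Emerson’s actual Bayesian credible-interval math, which the poll does not publish in detail, but for a sample of 657 it works out to roughly the same ±3.8 points:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case 95% margin of error for a simple random sample.

    p=0.5 maximizes p*(1-p), giving the conservative figure
    pollsters usually quote for the poll as a whole.
    """
    return z * math.sqrt(p * (1 - p) / n)

n = 657                    # Emerson NH sample size
moe = margin_of_error(n)
print(f"worst-case MOE: {moe:.1%}")   # about 3.8%

yang = 0.06                # Yang's reported share
print(f"plausible range: {yang - moe:.1%} to {yang + moe:.1%}")
```

Strictly speaking, the margin for a share as small as six percent is narrower (pass `p=0.06` to see roughly ±1.8 points), but the headline figure quoted with a poll is the worst case, and that is the number pundits repeat.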
Also, Why Is There Such A Discrepancy In Landline Percentage Of Candidates’ Votes?
As I mentioned above, the survey results indicate that weighting was done based on mode as well as other factors. Of those polled, 72.88% were polled by landline. When you factor in a 3.8% credibility interval, you might expect that between 69.08 and 76.68 percent of any (or even most) candidates’ supporters would be landline respondents. Yet, somehow, only one candidate’s landline percentage fell in this credibility interval range: Tulsi Gabbard.
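That expectation is simple arithmetic on the published figures. The 72.88% landline share and the 3.8% credibility interval come from the poll’s disclosure; the band check itself is just an illustrative sketch:

```python
landline_share = 0.7288   # fraction of respondents reached by landline
ci = 0.038                # the poll's stated credibility interval

low, high = landline_share - ci, landline_share + ci
print(f"expected landline band: {low:.2%} to {high:.2%}")  # 69.08% to 76.68%

def in_band(candidate_landline_fraction):
    """True if a candidate's landline share falls inside the expected band."""
    return low <= candidate_landline_fraction <= high
```

If the poll’s weighting were working as advertised, most candidates’ landline shares should pass this check; per the crosstabs, only Gabbard’s does.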
One might assume this is evidence that the “second page” candidates’ scores were adjusted to better reflect the landline-versus-online split, but Steyer, the other second-page candidate who registered support in the poll, saw his landline support drastically underrepresented compared to his online support.
Some Other Issues To Think About
People who outright refuse to take these surveys are people who might not fit into just any blue suit. The more independent and anti-government a voter is, the less likely they are to participate in a survey at all, according to research out of the University of Nebraska-Lincoln. That doesn’t mean they will sit out this nominating contest, though. There are more independents than there are true-blue Democrats, and only a third of all Americans even trust the government “to do what is right,” according to The Atlantic.
And all this assumes a pollster could even reach the “average voter” by landline or through a generally unheard-of opt-in online survey platform that is apparently not accessible from mobile browsers.
We Must Look Deeper Than The Highlights
Pundits discuss polls without actually explaining the results or even the methods. They tidy it all up for us in easy-to-understand graphics, because many readers don’t want to read full articles. Most often, it’s up to us to click links or search the net for full polling results. Sometimes a poll is released but the full results neglect to explain how respondents were chosen. Very rarely will you be able to find a downloadable document containing raw data.
This debate season alone, I’ve seen a candidate left off the debate stage for not scoring high enough in polls, despite surging in unique donors and Google searches. Yet examining the full poll results showed that had just one or two more people chosen that candidate in a DNC-approved poll, they would have been on that stage. When one person’s answer to a survey can mean the difference between tying for fourth place and tying for fifth, and consequently shape the public’s perception of a candidate’s viability, our dependency on polling should raise serious ethical questions.
When we hold on to a poll result as though it’s gospel, despite a three percent … five percent … or even six percent margin of error or credibility interval and a multitude of other factors, we are being foolish. The limitations of polling are so great and so varied in this modern era that I believe polls are better at altering public opinion than measuring it.