How to Read Polls (And Not Infuriate People)

Author’s Note: While my political views are well known and easily found, it’s in the spirit of non-partisanship that I’m writing this post, and you shouldn’t take it as direct commentary on the Labour leadership election or the US Presidential election. Take it as good life advice (as with everything I say).


As an amateur psephologist (poll-watcher), I’ve spent years tearing out much of my hair watching people tweet excitedly/gloatingly/despondently about opinion polls favouring one person or another. This is a handy guide intended to teach you what’s meaningful, what isn’t, and what to watch out for.

Part 1: Hardly Anything Means Anything

A lot of events are covered by musings on whether they’ll shift the polls. Usually they don’t, or at least not permanently. If there is a poll bump or dip for one party around a major event, it’s naturally seized on, but watching the trends usually shows a return to the norm very quickly.

There are a few things which can and do shift polls permanently or extensively:

  • Wars (starting or ending)
  • Major terrorist attacks (and then only if the governing party’s response is strong/weak)
  • A change of government or leader (known as a honeymoon period)
  • Spectacular by-election wins/defeats
  • Wall-to-wall coverage of one party (e.g. during a party conference)
  • Highly controversial new legislation or responses to same
  • Major crises, e.g. the 2000 fuel crisis, Foot and Mouth, the 2015 refugee crisis

Things that do not shift public opinion (mostly):

  • Impassioned Commons speeches (nobody is watching)
  • Setpiece shows of strength, e.g. rallies (at least not directly)
  • “Major interventions” by high-ranking figures, or at least only in the sense that they take up news coverage that is then denied to the opposition

Part 2: Pollsters Aren’t Biased

Let me tell you the big secret about market research companies. They don’t really care about politics. After all, expensive though they are, even the daily YouGov polls carried out in the run-up to the 2015 General Election weren’t going to pay the bills on their own.

The reason pollsters carry out political polling is that it attracts press attention and gives them a chance to show off their skills to potential customers. Most of their money comes from more standard market research for private companies, finding out what people think about brands and products.

This, incidentally, is why pollsters continue even when they get major events wrong. While putting public opinion at Remain 52%, Leave 48% is semantically a world away from Leave 52%, Remain 48%, in pure mathematical terms it’s effectively 50/50, and that level of accuracy is good enough for a clothing manufacturer trying to figure out how customers see it against other brands.

It wouldn’t make any sense for a pollster to deliberately show a result that wasn’t true, because they’d only be demonstrating to businesses that they’ll tell clients what they want to hear — and that’s a terrible business model.

Part 3: Nobody Asked Me!

Here’s the thing about sampling — it’s about quality, not just quantity. This story has been told before, but it’s worth telling again.

The 1936 US Presidential election was fought between Franklin Roosevelt, standing for re-election, and Alf Landon, the Republican Governor of Kansas. A popular magazine, the Literary Digest, decided to carry out an extensive poll, much bigger than that of the then widely ignored George Gallup, who used a sample of around 50,000 (still enormous by today’s standards).

To this end they sent out 10 million forms by post. These were distributed to every Literary Digest reader, then to every registered car owner, and to every registered address with a telephone. In all, 2.4 million people responded — a spectacular response rate.

The poll showed a massive victory for the Republican over Roosevelt, and the sample size was used to defend it — after all, it was a significant chunk of the whole population.

In the event, Roosevelt carried 46 states to Landon’s two, with over 60% of the popular vote. The Literary Digest poll was proved useless, while Gallup’s much smaller sample produced a final poll within 5% of the real result.

Where the Literary Digest went wrong was not in the number of people it sampled, but in who those people were. In 1936, anyone who could still afford a magazine subscription in the depths of the Great Depression was by definition wealthier than average. The same went for owning a car or a telephone. The sample size was admirable — its composition was not.

Gallup, meanwhile, used census data to model his panel of respondents on the nation as a whole — a technique still used with some refinements to this day.

Polling companies use different methods to achieve representative samples. Some, like YouGov, use a panel of contributors with known attributes (socioeconomic status, geographic location, employment sector) to whom they serve poll questions in the right proportions until they achieve a representative sample. Others, particularly phone polling firms, collect a large number of responses and then take a representative sample from them.
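
To illustrate the panel approach, here is a minimal sketch in Python (the attribute names, quota figures and panel structure are my own invention for illustration, not any pollster’s actual methodology):

```python
import random

# Hypothetical quota targets: the share of the final sample each age group should make up.
QUOTAS = {"18-24": 0.11, "25-49": 0.41, "50-64": 0.25, "65+": 0.23}
SAMPLE_SIZE = 1000

def fill_quotas(panel):
    """Draw panellists at random until every age-group quota is filled."""
    targets = {group: round(share * SAMPLE_SIZE) for group, share in QUOTAS.items()}
    counts = {group: 0 for group in QUOTAS}
    sample = []
    for person in random.sample(panel, len(panel)):  # shuffle the panel
        group = person["age_group"]
        if counts[group] < targets[group]:
            sample.append(person)
            counts[group] += 1
        if len(sample) >= SAMPLE_SIZE:
            break
    return sample
```

A real pollster would quota on several attributes at once (age, region, past vote and so on), but the principle is the same: keep serving the survey until each group appears in its known proportion.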

By doing this, a sample of 1,000 has a 95% chance of being accurate to within 3% of the real result. This does not mean polls are never wrong, but it does mean they have a good chance of putting you in the right area. Curiously, despite the larger population, a sample size of 500 seems more common in American opinion polls. I’ve seen no reason given for this, and it may simply be standard practice. In any case, expect a margin of error of about 4%. You need a reasonably large sample, but not a huge one — and these polls are mostly accurate enough to do the job.
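
Those 3% and 4% figures come straight from the standard margin-of-error formula for a simple random sample at 95% confidence, which you can check in a couple of lines (taking the worst case, where opinion splits 50/50):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"{margin_of_error(1000):.1%}")  # ~3.1%
print(f"{margin_of_error(500):.1%}")   # ~4.4%
```

Note that the population size barely features: whether you are sampling the UK or the US, it is the sample size that drives the margin of error.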

Part 4: Ah, but Without Weighting…

Weighting on opinion polls is like the seatbelt in your car. You can take it off, but doing so will cause you nothing but trouble.

Sometimes, it’s not possible to get a completely representative sample of the population. You might get too many men, women, young people, old people, white people or ethnic minorities. Usually the problem is that you get too few of a group, often the young.

In this case, you can use weighting to try and make your poll a bit better. If you know young people are 20% of a given population, but only 15% of your poll respondents, you can stretch each response out by a third to make the sample look basically right.
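
In code, the weight is simply the known population share divided by the share actually observed in the sample; a minimal sketch, using the illustrative figures from the paragraph above:

```python
def weight_for(target_share, observed_share):
    """How much each respondent in an over- or under-represented group should count."""
    return target_share / observed_share

# Young people: 20% of the population but only 15% of respondents,
# so each young respondent counts for a third more (weight of about 1.33).
weights = {"young": weight_for(0.20, 0.15), "older": weight_for(0.80, 0.85)}

def weighted_share(responses):
    """Weighted share backing a given option; `responses` is a list of (group, backs_option) pairs."""
    total = sum(weights[group] for group, _ in responses)
    backing = sum(weights[group] for group, backs in responses if backs)
    return backing / total
```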

This isn’t as good as a proper representative sample — it’s more vulnerable to outlier results, and the internal margin of error for that demographic group goes up the fewer you have.

But it’s almost always going to be better than taking the weighting off and using raw figures, because if you know that young people are 20% of voters but choose to only look at figures which are demonstrably wrong about their representation, you’re necessarily much more likely to get the result wrong.

Part 5: What About the Don’t Knows?

How you handle Don’t Knows depends on which polling company you’re discussing.

A common method is to reassign a certain percentage of Don’t Knows back to the party they voted for last time, on the assumption that old habits die hard and people not enthused by any option will stick with the devil they know.
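
A minimal sketch of that reallocation; the 50% rate and all the party figures below are invented for illustration rather than taken from any real pollster:

```python
# Shares of the whole sample naming a party, plus Don't Knows broken down
# by the party they say they voted for last time.
stated = {"Party A": 0.38, "Party B": 0.34, "Party C": 0.08}
dont_knows_by_last_vote = {"Party A": 0.07, "Party B": 0.06, "Party C": 0.07}
REALLOCATION_RATE = 0.5  # assume half of the Don't Knows drift back to their old party

def reallocate(stated, dont_knows, rate):
    """Add a fraction of each party's former voters back to its total, then renormalise."""
    adjusted = {party: share + dont_knows.get(party, 0) * rate for party, share in stated.items()}
    total = sum(adjusted.values())
    return {party: share / total for party, share in adjusted.items()}

print(reallocate(stated, dont_knows_by_last_vote, REALLOCATION_RATE))
```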

This can be a reasonable assumption, but it breaks down when there are major shifts in political allegiance all at once. For example, some polls last Parliament gave the Lib Dems semi-respectable ratings of 12–13%, largely on the back of reallocating former Lib Dem voters who now expressed no preference.

This was flawed — the party only got 8% at the election — because it failed to account for the very large change in the Lib Dems’ status, from general third party of opposition to junior coalition partner. It also downplayed the sea change in Scottish politics that saw Scottish Labour’s MPs all but wiped out across the country.

Another approach is to filter out Don’t Knows entirely. This too is flawed, because many of those people will vote, and might go unaccounted for. The thinking behind this approach is that Don’t Knows who do vote, being by definition fairly non-tribal, will break down along similar lines to the population as a whole, and so are unlikely to affect the result.

Part 6: Likelihood to Vote

Asking people how likely they are to vote is a tricky business, because plenty of people give you an 8/10 and then don’t show up. It’s pretty much impossible to verify that people will actually turn out as they’ve told pollsters they will, so how do you handle this measure? We know likelihood to vote does make a difference — in 2015, Labour supporters were less likely to vote than Conservatives, which probably handed the latter their majority — but it’s hard to tell exactly how, or how best to deal with it.

A common approach is to only publish the views of the 10/10s, because they’re the only ones you can be sure of.
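
In practice that just means computing the headline figures over the subset who score 10 out of 10; a sketch with hypothetical field names:

```python
def headline_figures(respondents):
    """Voting intention among certain-to-vote respondents only.

    Each respondent is a dict with a 'likelihood' score (0-10) and an 'intention'.
    """
    certain = [r for r in respondents if r["likelihood"] == 10]
    if not certain:
        return {}
    counts = {}
    for r in certain:
        counts[r["intention"]] = counts.get(r["intention"], 0) + 1
    return {party: n / len(certain) for party, n in counts.items()}
```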

A good general rule to follow is that outside exceptional circumstances, non-voters don’t vote. The two recent exceptions to that rule have been the Scottish Independence referendum, and the European Union Membership referendum, both of which commanded high turnouts (and both of which, notably, were referenda not elections).

Part 7: Watch the Trends

It’s easy to get excited or depressed by single polls, but Part 1 is very much in force. Polls are much better at telling you where you’re going than at telling you where you are.

A five-poll average is good; a ten-poll average is better. You will need to figure out how much weight to give the older polls compared to the newer ones (Anthony Wells of UK Polling Report has a formula), but that’s left as an exercise for the reader.
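
One simple possibility, offered purely as my own illustration and not the formula referred to above, is an exponential decay, where each older poll counts a little less than the one after it:

```python
def rolling_average(shares, decay=0.8):
    """Weighted average of one party's poll shares, listed oldest first.

    Each poll is worth `decay` times the weight of the poll that followed it.
    """
    weights = [decay ** age for age in range(len(shares) - 1, -1, -1)]
    return sum(w * s for w, s in zip(weights, shares)) / sum(weights)

# Five most recent shares for one party, oldest first:
print(f"{rolling_average([0.33, 0.35, 0.34, 0.36, 0.35]):.1%}")  # ~34.8%
```

The decay value here is an arbitrary choice; the point is only that newer polls should count for more than older ones.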

It’s best practice only to compare results from the same company in a rolling average. Different pollsters ask different questions, and use different methods to assess the answers. Compare YouGov with YouGov all you like, but don’t compare YouGov with ICM or Populus. If you’re looking for changes rather than voting intention per se, comparing one rolling average with another to see where both are going can be of some use, but a pinch of salt is best applied.

That 95% confidence interval means you can trust most polls to be within 3% of the true figure. But it also means around one poll in twenty may not be. Taking any one in isolation is too much of a risk — a rolling average helps to reduce the effect of outliers.

Part 8: What’s Being Polled?

Pollsters are better at some things than others. They’re very good at asking about General Election voting intention; they’ve been doing it for 70 years. Asking the public how they’re going to vote in the Lincolnshire Police and Crime Commissioner elections, or a by-election in Waveney? There’s far less precedent for that.

While they try their best with weighting and methodology, and those methods do get tested, it’s harder to get a good result when you haven’t got a body of past data to look at and compare against.

Part 9: Things That Aren’t Polls

It’s common to see things shared around on Twitter claiming one party or another must be collapsing/storming ahead on the basis of something that would make George Gallup spin in his grave.

The following things are not polls:

  • Facebook Polls
  • Google Forms Polls
  • Twitter Polls
  • Polls on news websites
  • Daily Express polls
  • Surveys of local residents by MPs

The common link between all these is that there is no attempt at weighting or gaining a representative sample. These sorts of things are a magnet for people who have strong opinions and want to express them.

Newspapers love them, because they’re another form of engagement with content, which guarantees increased page views and ad revenue. As a journalist, I completely understand their appeal and cast no shame on those who use them — I simply caution against taking them as any kind of representative sample. If they were accurate, the country would be run by a UKIP/Green coalition.

A note on Push Polling:

Push polling is another form of not-polling, but it deserves a separate mention. It’s actually a fairly clever form of political campaigning, carried out by political parties as a way of communicating to the voter under the guise of gathering the voter’s views.

How it works is this: the phone canvasser rings a voter and explains that they’re calling to ask about the upcoming election, in which candidate Ann Jones is standing against Bob Smith. They then ask something like: “Does the fact that Ann Jones saves homeless kittens make you more or less likely to vote for her?”, followed by “And does the fact that Bob Smith hates sunshine and children make you more or less likely to vote for him?”

The intention is not to gather data or draw conclusions, but to plant doubts and impressions in the mind of the voter, who is left with the impression of having been polled for research purposes.


Opinion polls are of immense use to the press, to political campaigners and to the general public as a whole. They are in general to be trusted and respected, but they should be taken as a whole and given the importance they deserve (some, but not total). Hopefully this guide will help you understand what’s significant, what’s not, and how to view things in the proper context.

If you’re a pollster and I’ve got something badly wrong, do feel free to leave a comment — I’m happy to learn and correct.