A psychic mollusc, a coach-load of school children and a presidential candidate walk into a pub …

The perils & pitfalls of audience research

The world of audience research can be a scary place full of perils. So, I’d like to share with you three tales about the traps and pitfalls that an audience researcher can encounter, and how to avoid them. These tales involve a psychic mollusc, a coach-load of school children and a winner who lost.

A psychic mollusc
This of course is Paul the octopus. In case you don’t already know, Paul correctly predicted the results of 8 matches during the 2010 FIFA World Cup. This included 5 correct predictions in a row. The probability of that happening purely by chance is around 3% — improbable enough to get him written up in some peer-reviewed journals.

This isn’t Paul, just another octopus

So was Paul a psychic mollusc? Sadly, no — improbable though it seems, his football foresights were an illusion. A 1 in 32 chance looks unlikely, but that doesn’t take into account the many, many other animals that were being used at the same time to try to predict World Cup results. You never heard about those other animals because they were all rubbish at it. The only one you heard about was the one who happened, by chance, to get some of the results right.
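If you’re sceptical, a few lines of simulation make the point. The numbers below (100 animals, each making five coin-flip predictions) are illustrative assumptions of mine, not a record of the real World Cup menagerie:

```python
import random

random.seed(42)

def perfect_streak(n_predictions=5):
    """One animal guessing match results at random; True if every guess is right."""
    return all(random.random() < 0.5 for _ in range(n_predictions))

N_ANIMALS = 100    # assumed size of the menagerie of predicting animals
N_TRIALS = 10_000  # simulated tournaments

# Chance that one named animal gets 5 predictions in a row: (1/2)**5, about 3%
single = sum(perfect_streak() for _ in range(N_TRIALS)) / N_TRIALS

# Chance that at least one of the 100 animals manages a perfect streak
at_least_one = sum(
    any(perfect_streak() for _ in range(N_ANIMALS))
    for _ in range(N_TRIALS)
) / N_TRIALS

print(f"One animal getting 5 in a row: {single:.1%}")
print(f"Best of {N_ANIMALS} animals:           {at_least_one:.1%}")
```

With a hundred animals guessing, at least one five-match streak is close to guaranteed (the theoretical figure is 1 − (31/32)¹⁰⁰, roughly 96%), and the streak-holder is the only one the newspapers ever mention.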

The moral of this story is that if you only look for evidence of success, you get a wildly distorted picture of reality. When you conduct audience research — whether assessing the impact of a completed exhibition, gathering feedback about a proposed new online resource, or testing a prototype interactive exhibit — you’ve got to actively look for evidence that things aren’t working, as well as for evidence of success. In a later article I’ll explore ways to encourage visitors to provide thoughtful, honest answers, and how to convey bad news in an evaluation report.

A presidential candidate
It’s the run-up to the 1936 US Presidential election. Democrat incumbent President Franklin Delano Roosevelt is up for re-election. His opponent is the Republican Governor of Kansas, Alf Landon.

The Literary Digest — a highly respected and widely read publication — decided to commission an opinion poll to predict who would win. In fact, the Literary Digest commissioned the largest pre-election poll that had ever been conducted — and probably the largest that has been conducted since. Over 2 million people responded. By comparison, most election polls today have samples of just a few thousand.

The Literary Digest pre-election poll is famous not only for being the biggest ever undertaken. It is also famous for getting the result really, really wrong. The poll predicted Landon would win the election by a huge landslide. In fact, it was Roosevelt who won by a landslide. The Literary Digest got the result wrong by an astonishing 19 percentage points. In recent elections, polls that were off by 6 points were considered a miserable failure. This one was wrong by more than three times as much.

Governor Alf Landon — Republican candidate in the 1936 US Presidential election (United States Library of Congress)

How could a survey based on such a vast sample be so wrong? One of the main causes was the way in which the Literary Digest gathered respondents. These came from three sources — its own readership, registered automobile owners and telephone directories. The problem is that in the depths of the Great Depression, many people couldn’t afford a car, a telephone or a subscription to the Literary Digest. And the people who couldn’t afford such things were much more likely to vote Democrat. The Literary Digest had amassed a huge sample, but it was massively skewed towards people who voted Republican — sampling bias, to use the technical term. No wonder it looked like Landon was going to win by a landslide, and no wonder they got the result so wrong.

The moral of this tale is that just because it’s a big sample doesn’t mean it’s a good sample. To be useful, a sample has to reflect the population you are studying. I’ve come across surveys of museum visitors that only included adults visiting on weekdays during school term time. Presumably whoever was collecting the data didn’t want to work at the weekend. Unsurprisingly, this sample was almost completely devoid of the museum’s many family groups and, presumably, of the many adults who work during the week. As a consequence, the results were highly suspect.

Time and again I hear colleagues and clients utter the dreaded words “it must be true; it was a big sample”. One more time, folks — size of sample ain’t the same as quality of sample.
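The Literary Digest’s failure is easy to reproduce in miniature. The sketch below uses made-up numbers (a 35% telephone-ownership rate, with phone owners leaning Republican), not the real 1936 figures, but the mechanism is the same:

```python
import random

random.seed(1936)

# A made-up electorate of one million voters. Each voter either owns a
# telephone or not, and phone ownership correlates with voting intention.
# These proportions are illustrative assumptions, not the real 1936 data.
population = []
for _ in range(1_000_000):
    has_phone = random.random() < 0.35                        # 35% can afford one
    votes_dem = random.random() < (0.40 if has_phone else 0.75)
    population.append((has_phone, votes_dem))

true_share = sum(dem for _, dem in population) / len(population)

# Huge but biased sample: 300,000 respondents, drawn from phone owners only
phone_owners = [voter for voter in population if voter[0]]
biased = random.sample(phone_owners, 300_000)
biased_share = sum(dem for _, dem in biased) / len(biased)

# Small but representative sample: 1,000 voters from the whole electorate
fair = random.sample(population, 1_000)
fair_share = sum(dem for _, dem in fair) / len(fair)

print(f"True Democrat share:      {true_share:.1%}")
print(f"Biased sample of 300,000: {biased_share:.1%}")
print(f"Random sample of 1,000:   {fair_share:.1%}")
```

The biased sample is 300 times larger, yet it misses the true figure by more than 20 percentage points; the modest random sample typically lands within a few points of it.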

A coach-load of school children
Shortly after I was appointed Head of Audience Research at the Science Museum, one of my colleagues from the Learning team came to me with a request. She wanted me to measure how long it took school children to disembark from the coaches that brought them to the museum. After a moment’s reflection I politely, but firmly, declined the request.

Why so? Well consider this. The road and pavement outside the museum were owned by the local borough council. The museum had no control over traffic flows or parking regulations. Nor did we have any control over the design of the coaches. Nor could we control the velocity at which children disembarked from these vehicles.

Inside the museum there was plenty we could do to help school groups. But there was absolutely nothing we could do to change how long it took children to get on and off the coaches parked outside. The data would have been incredibly easy to collect, but also incredibly useless.

The moral of this tale — when conducting research, focus on data that will be useful, rather than what is easy to collect. To quote a maxim often attributed to Einstein:

“Not everything that counts can be counted. Not everything that can be counted counts”

The moral of the tale
A psychic mollusc, a coach-load of school children and a presidential candidate walk into a pub and the landlord, who just happened to work in audience research, said:
• Conduct a balanced assessment of your project. Look for evidence that things haven’t worked or aren’t going to plan, as well as evidence of success
• Ensure you’re hearing from a representative cross section of your audience, not just those who are easy to reach, or who are especially vocal
• Focus your time, resources and efforts on what is going to be useful. Don’t get distracted by the easy to do, but useless

If you’d like to find out more about the Literary Digest opinion poll and the case of Paul the psychic octopus, I’d recommend the ever-excellent BBC Radio 4 programme More or Less.
