Illustration: © Dale Crosby Close for Mosaic

How to fact check

We thoroughly fact check every story we run, but why do we do it when others don’t? Rob Reddick explains Mosaic’s process.

Published in Mosaic science · 6 min read · Jul 16, 2019


OK, pop quiz. Which of the following statements are true?

  • Wind turbines cause cancer.
  • Diabetes is a listed side-effect of the MMR vaccine.
  • A young person in England gets diagnosed with gonorrhoea or chlamydia every four minutes.

Answering the first was probably straightforward. The second maybe less so. And the third — it’s harder to be sure on that one, right?

It turns out they’re all false, though to varying degrees. (See the end for more details.)

This is why fact checking is worthwhile: sometimes fake news is easy to spot, and sometimes it’s harder. We take on this hard work because we want readers and republishers to trust our stories and come back to us again and again.

False information also erodes public trust in science, and that’s the last thing the world needs right now. We have a duty to report science accurately, to avoid this loss of faith and give research the best chance of having a positive impact.

There’s a self-interested reason for fact checking too: spreading misinformation risks harming our reputation.

How we fact check

Fact checking is labour intensive, so we hire freelance fact checkers to do it for us. We have three or four that we work with regularly.

But this doesn’t mean there’s nothing for us to do. Editor discretion is often required when resolving queries — we may need to get in touch with a difficult-to-contact or vulnerable interviewee, work out how to reconcile or choose between conflicting sources, or make a cut if there’s something we just can’t verify.

Over the years, our regular freelancers have helped us develop some general principles to guide how we approach the main types of sources.

Interviews

We ask writers to record all of their interviews, and ideally provide transcripts. We then ask our fact checkers to match any quoted material to the transcripts or audio.

If there aren’t any transcripts and the tapes are long, there’s a judgement call to make — getting a fact checker to listen to hours of recordings is expensive; it might be easier to get the writer to help locate specific quotes in the recordings.

As a last resort we take the quotes back to the interviewee to be confirmed. People tend to want to tweak what they’ve said, so we try to avoid this if possible. Exceptions are when we don’t have any record of a quote, a quote contains a fact that we can’t verify with an external source, or there’s something in the interview that isn’t clear.

Our pieces often paraphrase material from interviews, and we don’t run these paraphrases past interviewees — unless they contain facts we can’t otherwise verify or we have queries about what was said.

Written sources

High-quality recent sources are the gold standard for confirming facts. Examples include papers from reputable journals, data from government or academic databases (eg the Institute for Health Metrics and Evaluation) and press releases from reliable sources, such as universities or the NHS.

If these aren’t available, then multiple lesser or older sources can suffice, but only if the origin of a fact is clear. Newspaper articles, news reports and even government publications can’t be relied on if the origins of their facts aren’t visible. And when those origins are visible, it’s often possible to go straight to the original source anyway.

One of our regular fact checkers advises running general searches at the start to identify some key robust sources. “This saves time later, as a lot of basic facts are quickly checkable,” she says. “It also helps check balance/omissions.”

Visual sources

Writers sometimes provide photographs or films from their reporting. On rare occasions, these can be useful for confirming the appearance of things (for example, for our article on 3D printing prosthetic hands).

When the going gets tough

Certain areas are notoriously tricky.

One is scientific data. Different studies often produce widely differing measures of the same thing, and it’s common for data sets not to be directly comparable. Sometimes a study is small and unique. These aren’t insurmountable problems, but they are barriers to offering definitive numbers, and they mean scientific stats often come with caveats that are hard to explain clearly and concisely. As a general rule, unless we’re reporting on a specific study, a single scientific paper shouldn’t be used alone to confirm a fact or provide a figure.

And stats in general are often a stumbling block. Sometimes a reputable-looking secondary source gives a certain stat, but it’s not clear whether it really means what the writer takes it to mean, or even what the source takes it to mean. Widely quoted figures can also be nightmarish: it’s tough to find out where they came from originally, and they may be years out of date.

There are also plenty of traps to fall into. Elimination and eradication aren’t the same thing. Neither are sensitivity and specificity, incidence and prevalence, or mortality rates and case fatality rates. When writing for a non-scientific audience, it’s easy to get these confused.
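
To make a couple of these distinctions concrete, here’s a minimal sketch in Python. All the numbers are made up purely for illustration; none come from a real study or from this article.

```python
# Hypothetical figures, purely for illustration.
population = 1_000_000        # everyone in the region
new_cases_this_year = 2_000   # people newly diagnosed this year
existing_cases_now = 10_000   # everyone living with the disease right now
deaths_this_year = 500        # deaths from the disease this year

# Incidence counts new cases over a period; prevalence counts all
# existing cases at a point in time.
incidence = new_cases_this_year / population   # 0.002 (0.2%)
prevalence = existing_cases_now / population   # 0.01 (1%)

# A mortality rate is deaths relative to the whole population;
# a case fatality rate is deaths relative to diagnosed cases only.
mortality_rate = deaths_this_year / population              # 0.0005 (0.05%)
case_fatality_rate = deaths_this_year / existing_cases_now  # 0.05 (5%)

print(f"Incidence: {incidence:.2%}, prevalence: {prevalence:.2%}")
print(f"Mortality rate: {mortality_rate:.3%}, case fatality rate: {case_fatality_rate:.1%}")
```

Here the mortality rate and the case fatality rate differ by a factor of a hundred, which is why swapping one term for the other can make a disease sound far more (or less) deadly than it really is.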

The other difficult area is interviews — for several reasons.

When part of a story is reported from someone’s memory, sometimes that’ll be the only source available. How do you prove that what’s said actually happened? Memory can be unreliable.

There’s also no broad consensus on how much editing a writer should do with their interviewee’s quotes. Almost all do some tidying — at the very least to get rid of ‘umms’ and ‘errs’ — but unless you forbid cleaning up completely, it’s very difficult to create an absolute measure of what’s acceptable and what’s not. It’s a call we have to make on an ad hoc basis.

Despite going through the fact-check process for every story, errors still slip through. It’s frustrating when this happens, but in the spirit of accuracy, we try to be as transparent as we can when we make mistakes. When we need to make amendments to our stories, we note them at the foot of the page.

These are a sign that we’ve got something wrong. But they’re also a reminder that we’ve got good processes in place.

Quiz answers

  • “Wind turbines cause cancer.” This statement from President Trump is — unsurprisingly — a load of hot air.
  • “Diabetes is a listed side-effect of the MMR vaccine.” Also fake news. Diabetes is listed as an adverse reaction to the MMR II vaccine used in the USA, but this doesn’t mean the same thing.
  • “A young person in England gets diagnosed with gonorrhoea or chlamydia every four minutes.” Close, but not wholly accurate. The four-minute figure comes from dividing the minutes in a year by the number of diagnoses for these STIs in young people (144,000); see the sketch after this list.
    But the number of diagnoses isn’t the same as the number of people diagnosed — some people might be diagnosed with both STIs at the same time and/or be diagnosed multiple times throughout the year, meaning the number of people diagnosed will be lower.
    “Every X minutes” also suggests that diagnoses occur evenly throughout the day, but this seems unlikely given that not all testing services operate 24/7 and that the UK population is vastly more active during the day than at night. Presenting the data this way without qualification is questionable.
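
For anyone who wants to reproduce the arithmetic behind that third answer, here’s a quick sketch. The 144,000 diagnoses figure is the one quoted above; the rest is ordinary calendar arithmetic.

```python
minutes_per_year = 365 * 24 * 60  # 525,600 minutes in a non-leap year
diagnoses = 144_000               # gonorrhoea/chlamydia diagnoses in young people

minutes_per_diagnosis = minutes_per_year / diagnoses
print(f"One diagnosis every {minutes_per_diagnosis:.2f} minutes")  # ~3.65

# Note the unit: this is minutes per diagnosis, not per person diagnosed.
# If some people are diagnosed more than once in the year, the number of
# people is lower than 144,000 and the per-person interval is longer.
```

The result, about 3.65 minutes, is what gets rounded up to the headline-friendly “four minutes”, and the comments flag why even that number needs the caveats described above.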

To learn more about how Mosaic works, check out the rest of the posts in this series.

If you’d like to pitch to Mosaic, read our guidelines and drop us an email.
