What science reporters should know about meta-analyses before covering them

Jop de Vrieze
11 min read · Jul 9, 2019

As science journalists who take our job seriously, we’ve learned a couple of rules by heart: never present correlation as causation, always check whether a sample is representative, and never rely on a single study. As the expression goes: one swallow doesn’t make a summer.

These are all good starting points. But they are far from enough to make our reporting unimpeachable.

As a result of the third principle, we tend to rely on review studies, more specifically systematic reviews and meta-analyses. A systematic review is, simply put, a review of the scientific literature on a particular research question, performed in a systematic way to reduce bias. Sometimes a reviewer will include only randomized clinical trials, sometimes other study designs as well. A meta-analysis is a statistical method that combines the results of several studies into a single result and conclusion. It is often the final piece of a systematic review.
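To make “combining the results” concrete: the simplest pooling approach is a fixed-effect inverse-variance meta-analysis, in which each study’s effect estimate is weighted by the inverse of its variance. The Python sketch below illustrates the idea; the effect sizes and standard errors are invented purely for illustration, and real meta-analyses typically use more elaborate models (for example, random-effects models that account for between-study variation).

```python
import math

# Hypothetical effect estimates (e.g., mean differences) and their
# standard errors from five individual studies. These numbers are
# made up for illustration only.
effects = [0.30, 0.12, 0.45, 0.20, 0.05]
std_errors = [0.15, 0.10, 0.25, 0.12, 0.08]

# Fixed-effect model: weight each study by the inverse of its variance,
# so more precise studies count more toward the pooled result.
weights = [1 / se**2 for se in std_errors]
pooled_effect = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval for the pooled estimate.
ci_low = pooled_effect - 1.96 * pooled_se
ci_high = pooled_effect + 1.96 * pooled_se

print(f"Pooled effect: {pooled_effect:.3f} "
      f"(95% CI {ci_low:.3f} to {ci_high:.3f})")
```

The only point of the sketch is that more precise studies pull the pooled estimate toward themselves, which is exactly why the quality and selection of the included studies matter so much for the final number.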

But there is something strange going on here. While we at least try to scrutinize the methods and limitations of all those single studies, we rarely do the same with systematic reviews and meta-analyses. Because they are regarded as the gold standard of empirical science, sitting at the top of the ‘pyramid of evidence’, we take their results and conclusions for granted and treat them as objective debate-enders.
