Publication Bias and Negative Results

The Impact on Truth-Seeking in Science

CHI KT Platform
KnowledgeNudge
4 min read · Nov 23, 2016

By Gwenyth Brockman

Publication bias occurs when the results found in published research systematically differ from the results of all studies conducted on a given topic. It reflects the tendency of scientists, reviewers, and editors to favour statistically significant, positive results in academic papers. Studies with null results are seldom published, and often never even written up, a phenomenon known as the “file drawer effect”.

This bias is problematic for several reasons. For one, not sharing negative results undermines the inherently collaborative nature of science and can waste the time and money of other scientists working along the same path. Publication bias is a form of research waste, which Kate touched on in an earlier post. Perhaps of greater concern is that the research that does become public and readily available can represent a skewed and misleading version of the truth.

One of the four main components of knowledge translation as we define it here at KnowledgeNudge is knowledge synthesis, which involves compiling previously conducted (typically published) studies. Drawing conclusions from a skewed sample can lead to misinformed solutions and interventions down the road. As Ben Goldacre notes in his TED Talk, evidence-based medicine and practice are threatened by the custom of withholding negative or statistically insignificant results. Publication bias in clinical trials diminishes a physician’s ability to make informed prescribing decisions. Conducting a trial several times but publishing only the one run that yielded positive results tells a very different story from a trial that succeeded on the first attempt.

How did we get here?

On one hand, researchers may perceive negative results as personal failures, failed experiments, or threats to their reputation, and so prefer to hide them rather than share them with the academic community. Further, the number of publications listed on a researcher’s CV (particularly those in journals with high impact factors) can affect a researcher’s perceived value and contribution to their field, and by extension, their funding opportunities and job security.

On the other hand, to the surprise of no one, journals are far less likely to publish negative results (20% of studies with null results are published) than positive ones (of which 61% are published). As a BMJ editorialist put it (way back in 1987), “negative results have never made riveting reading” and therefore don’t sell the way that more attractive big findings might. And when they are published, negative results tend to appear in journals with lower impact factors.

There are concerns that this artificial scarcity of opportunities to publish in high-ranking journals creates unfair pressure on researchers to generate positive results, which may push them to rush to publish before results are ready, fail to control for scientific bias, or HARK (hypothesize after results are known). Perhaps unsurprisingly, this artificial scarcity is not without its critics, who contend that space limitations shouldn’t be an issue for journals in a digital era.

Replication studies are important. And often unpublished.

Another aspect of publication bias is that journals do not usually publish studies that replicate or refute another’s work. A recent story by Vox asked 270 researchers what they thought was “wrong with science today.” Among the top responses was “replicating results is crucial — but scientists rarely do it.” Replication is an important aspect of scientific rigour, helping to ascertain whether the same results are attainable in a different population or under different conditions. Encouraging replication allows for the development of generalizable statements and, essentially, brings us closer to the truth. Consistency is also one of Bradford Hill’s nine criteria for causation in medical diagnosis: numerous trials must demonstrate the same result before a causal association can be implied, which underscores the importance of publishing replication studies.

Movements for change

The idea of publishing null results is still contentious. Some argue it is pointless to publish every experiment that yields null results, while others believe doing so is a positive step towards more transparency in research and the scientific process. In an attempt to promote transparency in clinical trials, the WHO’s position statement calls for all registered trials to publicly report their findings within 12 months of study completion. Additionally, some new journals dedicated to reviewing and publishing studies with negative results have begun to emerge, such as New Negatives in Plant Science in 2014.

What we do know is that we need to re-examine the way we value scientific processes and the way researchers are rewarded for their work. Like any investment, science is not without risk: it is not always possible to know whether an endeavour will pay off. But there is value in knowing what others have learned through their own methodologically sound research, whatever results they show. Unfortunately, with limited opportunities (and motivation) to publish such studies, most of them languish in bottom drawers and forgotten warehoused boxes.

About the author

Gwenyth Brockman is a former Research Assistant with the George and Fay Yee Centre for Healthcare Innovation (CHI) in the Knowledge Translation platform.
