Open research: why publication of negative results is crucial to further scientific discovery

f64901nt
Open Knowledge in HE
10 min read · Aug 12, 2019

Introduction

When considering strategies for open research in higher education, many highlight the importance of open access publication and the removal of language barriers. Whilst the open access movement is an essential step on the path towards making research more visible, it does not address issues concerning bias in the scientific literature. In my opinion, in order for research communications to be truly open, they must facilitate transparency, collaboration and efficiency through the publication of an authentic research journey. This is not only beneficial for the higher education research community but for industry and the general public as a whole.

Scientists usually publish their research findings in academic journals, detailing their accounts of successfully proven hypotheses and experimental detail. However, what we see in the literature isn’t an accurate reflection of the results actually achieved through scientific experimentation. Studies with positive results, i.e. results that agree with the researcher’s hypothesis, are far better represented in the literature than studies producing negative results. The over-representation of ‘successful’ stories produces so-called publication bias and raises serious questions about the integrity of the literature.

Publication bias is driven by the misplaced perception, among editors, publishers and ultimately readers, that ‘successful and productive’ studies are more interesting and attention-grabbing. This perception is evidenced by the fact that positive studies are more heavily cited, particularly in medical research. An investigation conducted in 2011, which assessed more than 4,600 publications across different countries and disciplines, reported a steady and significant increase in publication bias. Between 1990 and 2007, publication of positive results grew by over 22%, with psychology and psychiatry among the disciplines in which this increase was highest. Ultimately, this means that the majority of experimental data are never publicly reported and, for the most part, this is because the results were ‘negative’, i.e. the expected outcome was not observed.

Causes of publication bias

Publication bias within the scientific literature is a widely recognised problem caused by a number of factors:

1. Publish or perish — The highly competitive funding landscape and opportunities for career progression mean that researchers do not submit negative results for publication, thinking that journals will reject their papers. A scientist’s success depends on the impact of their research. Higher impact findings published in high profile journals tend to attract more funding and recognition.

2. Editors prefer positive results — High quality journals are generally less likely to accept negative results, as they are associated with a lower citation count and therefore, a lower impact factor.

3. Obstruction by stakeholders — Study sponsors may be biased towards the dissemination of results that favour their interests. It has been reported that industry-funded studies yield a greater number of positive results than studies funded or conducted by independent organisations. This is particularly problematic for clinical investigations.

4. Distortion of data for better results — In a practice known as HARKing: Hypothesizing After the Results are Known, researchers can focus on the positive rather than negative results of their study. In doing so, scientists have been known to modify their original hypothesis to better suit their data.

5. Human error — Science cannot always be reproduced. A survey conducted by the journal Nature revealed that more than 70% of researchers have tried and failed to reproduce another scientist’s experiments, and more than half have failed to reproduce their own. The issue of reproducibility in science has led to the belief that negative results are associated with poor quality, flawed science.

6. Dead ends are more likely — Negative results from scientific experiments are far more common than positive outcomes, particularly in early stage research. This presents researchers with a huge amount of data that would take valuable time and resources to follow up. A report from the University of Carolina about publishing negative data caused a flurry of social media activity in response. Whilst many researchers thought the issue was important, some weren’t so keen on the idea. Scientist Peter Dudek said via Twitter: “If I chronicled all my negative results during my studies, the thesis would have been 20,000 pages instead of 200.”

7. Human nature — It’s normal for researchers to want to share their stories of success not failure. Given that disappointment is more likely than discovery, researchers want to shout about it when it actually does happen.

8. Conflict with previous publications — It is particularly difficult for scientists to publish negative data if the findings contradict previously published research, despite many reputable journals implementing policies to publish such work.

For a more detailed discussion on the causes of publication bias, see reports from the University of East Anglia and University Hospital Centre Zagreb.

Are negative results meaningless?

One of the most common misconceptions among scientists is that negative results are derived from flawed, poorly constructed hypotheses and experiments. This argument is easy to counter: the quality of positive results is often questioned too. There have been various cases in which positive data have been the subject of investigations into fraud, fabrication and misconduct, involving researchers such as Marc Hauser, Hwang Woo-suk and Henk Buck. Others argue that negative data ‘cannot be trusted’. However, the 2013 article ‘Trouble at the lab’, published in The Economist, argued that negative data are statistically more trustworthy than positive data.

Given that negative outcomes are seldom reported, it’s no wonder the research community questions the value of these data. Is the publication of negative results pointless, insignificant or just not useful? Or is there some value in sharing these data with the broader scientific community? To answer these questions, we can look to history for examples of when negative data have led to great discovery and sometimes paradigm shifts in the way in which science takes place.

A classic example of negative results leading to a transformation in scientific experimentation is the work conducted by 19th century physicists Michelson and Morley. They conducted a series of experiments in the 1880s seeking to investigate the presence and properties of aether, a substance believed to fill empty space. Their research clearly contradicted the prevalent theory at the time, causing the scientific community to continually overlook their work. It was only when they eventually published their findings in the American Journal of Science in 1881 that the prevailing theory was questioned. As a result, their work inspired a new line of research that ultimately led to Einstein’s special theory of relativity in 1905. Besides the obvious impact on one of the most trailblazing scientific discoveries ever made, Michelson and Morley’s research caused an existing theory to be questioned and ultimately corrected. This highlights the importance of disseminating negative data to ensure that theories that are untrue or incomplete are further investigated.

Of course, not all negative data will turn out to be of ground-breaking significance. However, examples like Michelson and Morley show that sharing negative results can inspire new directions for future studies and should not immediately be associated with unskilled scientists and poor experimental conduct.

The problem with publication bias

The fact that negative results are, for the most part, ignored is hugely problematic for both the integrity and advancement of scientific research. Positive study findings dominate the literature, meaning the conversation is skewed in favour of reporting only selected pieces of information. Since fewer negative results are published, particular research fields are represented inaccurately. This can lead, for example, to an overestimation of the efficacy of new treatments, devices or social policies, and an underestimation of their risks and potential disadvantages. Perhaps more worryingly in the medical field, reporting bias can result in the non-publication of investigations that find treatments to be harmful to patients. In these studies, human subjects have given their informed consent to participate in an experiment with the promise that the research is being conducted to benefit others and to contribute to scientific advancement. In these cases, it is simply unethical not to publish negative data.

For aspiring and early career researchers who use the literature to inspire and inform their studies, the dominance of success stories can be incredibly off-putting and discouraging. If the publication landscape doesn’t enable scientists to tell their authentic research journey, it’s almost impossible for junior researchers to develop a realistic understanding of the challenges associated with scientific research and discovery.

It’s not only early career researchers who suffer. The publish or perish mentality means that all researchers are pressured into publishing their data in high profile academic journals, which demand novel and surprising results. This pressure is often stipulated in employment contracts, with researchers required to publish a certain number of papers per year in particular journals. If they meet these performance indicators, their careers advance; if not, their contracts could be terminated. This type of policy encourages scientific misconduct, as described by an Assistant Professor at Utrecht University (Netherlands) in The Guardian. According to this report, a staggering 14% of scientists claim to know a scientist who has fabricated entire datasets, and 72% say they know one who has indulged in other questionable research practices, such as dropping selected data points to sharpen their results.

The non-publication of negative findings also causes problems for the research funding landscape. It means that unproductive or flawed hypotheses and experimental frameworks may continue to receive financial support from agencies, diverting valuable funds away from potentially more fruitful projects. As a result, funding continues to support ideas that look good on paper but do not come to fruition. Unfortunately, this issue is repeated by funding agencies across the world. If the initial failure had been reported, funders could channel that money into other projects.

The most obvious problem with publication bias concerns the invested time, (often public) money and resources that are wasted if a study remains unreported. If the scientists who conducted the investigation formulated a hypothesis that was worth exploring, there is a high chance that someone else within the field has had the same or a similar idea. Not reporting a negative result can therefore waste other researchers’ time and money on a series of experiments that will presumably also produce negative data. This leads to unnecessary repetition when, instead, resources could be channelled into more productive avenues of investigation with an increased chance of success.

Steps towards change

Over the last few years, the topic of publication bias has received significant attention from researchers, funders and journal editors. The conversation continues to grow, with many now arguing that experiments shouldn’t have to show positive results to earn their place in the literature. Publication of negative or unexpected data would present a more balanced view of the research landscape, with the potential to make a significant contribution to scientific advancement.

Figures adapted from https://www.elsevier.com/authors-update/story/innovation-in-publishing/whyscience-needs-to-publish-negative-results, icons by Adioma.

Fortunately, there is now a movement gaining momentum to counter publication bias and all of its negative consequences. In 2015, the World Health Organisation (WHO) published a statement calling for the main results of clinical trials to be submitted for publication in an open access, peer-reviewed journal within 12 months of study completion. In addition, they called for all previously unreported results, including negative results, to be published.

There are now a number of journals solely dedicated to the publication of negative results, such as the Journal of Negative Results, Journal of Negative Results in BioMedicine, New Negatives in Plant Science (now discontinued, though published content remains available) and the emerging All Results journals. Other journals, including PLOS ONE and BMC Psychology, also accept submissions of negative data. Whilst these important initiatives are a step in the right direction, the journals and articles that report negative findings are not highly utilised or often cited. This begs the question: are there alternative publication strategies that would be more beneficial to the research community?

Can alternative publication strategies facilitate open research communication?

The initiatives previously described undoubtedly shine a light on, and make good progress towards, greater openness in scientific communication. The publication of negative data is widely acknowledged as the right thing to do. However, the reality is that it is still seldom achieved, despite the existence of journals created entirely for that purpose.

What if the dissemination of negative data is better done outside of the academic publication framework? To meet the requirements of academic publishers, it would take a significant amount of researcher time and resources to be absolutely confident in the negative answer. After all, dead ends are part and parcel of early stage research, making up about 80% of lab results. One of the main drivers behind the negative results agenda is to prevent researchers from wasting time and resources repeating unsuccessful experiments. Could a publicly curated database of negative results be a simple way to record ‘failed’ experiments?

As a former researcher in both academic and industrial laboratories, I always turned first to colleagues’ lab books to understand the ins and outs of a particular experiment. Whilst the academic literature was an invaluable source of inspiration and ideas for my research, searching an in-house database of all previously conducted experiments was without doubt the most helpful reference. Often, knowing what not to do is the most useful guide for directing scientific research.

There are various blogs run by scientific researchers for scientific researchers which challenge published studies. Examples include Blog Syn, an organic chemistry discussion forum, and PubPeer, which allows readers to post comments anonymously about any article. The comments are also sent to the authors to invite further input. These forums provide not only an opportunity to record negative results but also a chance for the research community to share ideas and accelerate scientific discovery. It’s important to note that dissemination via the internet has its drawbacks. Unlike the academic publication system, online reports are not regulated or peer reviewed. However, as with any form of research communication, integrity, honesty and professionalism are expected.

Conclusion

The culture surrounding the openness of academic publishing is shifting, particularly with respect to the dissemination of negative data. There are a variety of reasons why culture change is required, and it would benefit both the research and the researchers. Academic journals have made some progress towards addressing this issue, and alternative publication strategies have already started to emerge. Whilst this movement is positive, ultimately it’s the research community who must drive change by shifting the way in which they view negative data. A recent article in The Conversation said: ‘As proposed by American physicist and philosopher Thomas Kuhn, a shift in scientific thinking will occur when the amount of evidence in support of the new paradigm overtakes the old one.’ Using this logic, the answer may lie in educating the scientists of the future so that they experience improved research communications and appreciate the value and utility of negative data.
