Positive bias is killing science

ResearchProof · Mar 6, 2018

Scientists are ultimately judged by the research they publish, and to land prestigious publications they need exciting, novel and groundbreaking results (positive or statistically significant findings). This so-called positive bias has staggering consequences:

1. The disappearance of negative results: in today’s science system, there is no incentive to publish results that do not support a hypothesis or that resemble results already published. But are such results really meaningless? Science advances through testability, which produces both refutations and confirmations, and both are needed to move a field forward. As a consequence, a ton of scientific research is simply not available, so these results go unexploited and experiments are pointlessly replicated.

2. P-hacking: the practice by which researchers, often unconsciously, inflate their results by reporting only the hypotheses that turn out statistically significant. Try it yourself with this example by FiveThirtyEight to see how subtle and tempting p-hacking is, or see the simulation sketched right after this list. The evidence from such studies is forced to look more positive than it actually is, leading to wrong conclusions.
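
To make the effect concrete, here is a minimal sketch in Python of how p-hacking inflates false positives. It is our own illustration (it assumes numpy and scipy; the sample sizes and the 20-outcomes-per-study setup are arbitrary choices), not the FiveThirtyEight tool and not part of ResearchProof. Both groups are drawn from the same distribution, so every “significant” difference is a false positive, yet cherry-picking among many measured outcomes makes such findings the norm:

```python
# Minimal p-hacking simulation (illustrative only).
# Two groups share the SAME distribution, so any "significant" difference
# is a false positive. Testing many outcomes and reporting only the
# significant one makes false positives almost inevitable.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 1000   # simulated studies
n_outcomes = 20        # hypotheses tested per study (e.g. 20 measured variables)
n_subjects = 30        # subjects per group
alpha = 0.05

false_positive_studies = 0
for _ in range(n_experiments):
    p_values = []
    for _ in range(n_outcomes):
        group_a = rng.normal(0, 1, n_subjects)   # no real effect:
        group_b = rng.normal(0, 1, n_subjects)   # both groups have the same mean
        _, p = stats.ttest_ind(group_a, group_b)
        p_values.append(p)
    # A p-hacker reports the study as "positive" if ANY outcome is significant.
    if min(p_values) < alpha:
        false_positive_studies += 1

print(f"Single pre-registered test: ~{alpha:.0%} of studies falsely 'significant'.")
print(f"Cherry-picking among {n_outcomes} outcomes: "
      f"{false_positive_studies / n_experiments:.0%} of studies look 'significant'.")
```

With 20 independent tests at a 0.05 threshold, the chance of at least one spurious “significant” result is 1 - 0.95^20, roughly 64%, which is what the simulation reports.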

“Most research is wasted, usually because it asks the wrong questions, is badly designed, is not published, or is poorly reported.”

Over the last few years there have been several efforts to overcome this bias, yet the current scientific system still rewards scientists mainly for their positive findings. The result is an accumulation of inaccurate scientific knowledge. A recent study states that “the cumulative (total) prevalence of irreproducible preclinical research exceeds 50%, resulting in approximately US $28B/year spent on preclinical research that is not reproducible — in the United States alone.” As other studies suggest, most research is wasted, usually because it asks the wrong questions, is badly designed, is not published, or is poorly reported.

At ResearchProof we believe that, to foster better research design and discourage the pursuit of positive results at any cost, science must rethink its reward system and bring more transparency and rigour into the research process. That’s why we are developing a tool to simplify and encourage the adoption of Open Science best practices from the very start of the research process.

Stay tuned for more updates!

ResearchProof: a blockchain- and cryptography-based open-access platform to foster the sharing of preliminary, negative and single results in science.