Open Science is About the Outcomes, Content, and Process of Research

Sage Bionetworks
CAOS by Sage Bionetworks
Mar 24, 2019

By Brian Nosek

Photo by Maarten van den Heuvel on Unsplash

Much of the open science movement focuses on improving the transparency and accessibility of the contents and outputs of research: papers, data, code, and research materials. The benefits to science of increasing the openness of such resources are easy to comprehend.

For example, if the literature were more accessible, science would be more inclusive of people who otherwise could not read it, and those readers would be better informed about potential applications in scholarly work, commercial development, and translation into policy or practice. Likewise, open data and materials make it easier to demonstrate that reanalysis reproduces the reported findings, and to reuse those data and materials for novel research applications. However, a focus on the contents and outputs of research alone misses one of the most critical rationales for open science: improving the credibility of published findings.

In a recent paper from my lab, we reported evidence that reducing biases in decision-making may require explicitly pointing out the potential source of bias (Axt, Casola, & Nosek, 2019 [open version]). You can visit the Open Science Framework (OSF) and obtain the original code and data for the experiments and verify the outcomes we reported in the paper. That’s good. But, is that sufficient for the claims to be credible? No. Just being able to reanalyze the data does not give you the insight that you need to effectively evaluate the credibility of the research.

For example, what if we had run 50 experiments on the problem and reported only the five that “worked”? And what if we had analyzed the data in a variety of ways and reported only the outcomes that generated the most interesting findings? Because of our outstanding ability to rationalize behavior that serves our self-interest, we may not even recognize when we are dismissing a negative result as a flawed experimental design rather than as counterevidence to our claims. Open data and code don’t help you (or me) identify these potential reporting biases.

The solution is open science — but open science about the process of conducting the research. For you to effectively evaluate the credibility of my findings, you need to be able to see the lifecycle of how those findings came to be. If I can show you that the analyses we reported were planned in advance rather than post-hoc, you will be more confident in interpreting the statistical inferences. If I can show you all the studies that were conducted as part of the research, regardless of whether they made it into the final paper, you can assess any reporting biases that may have occurred. Opening the lifecycle of research facilitates the self-corrective processes that we lionize as critical for scientific progress.

The primary mechanisms for improving research credibility by opening the research process are registration of studies and preregistration of analysis plans. Registration of studies ensures that all studies conducted are discoverable, regardless of publication status. Preregistration of analysis plans clarifies the distinction between confirmatory (hypothesis testing) and exploratory (hypothesis generating) outcomes. Confusing the two is one of the key contributors to irreproducibility and low credibility of published findings.

Registration is well known, and still improving, in clinical trials research, but it remains rare in preclinical and basic research across disciplines. That is changing. For example, since OSF’s launch in 2012, the number of registrations has approximately doubled each year; there are now more than 20,000, almost all of them for basic research. That is still a drop in the bucket against the yearly volume of research. But the promising trajectory, coupled with a reform-minded community pushing to improve the transparency and credibility of research, is a positive indicator of continuing progress in making accessible not just the outcomes and contents of research, but the process of discovery too.

About: Brian Nosek is co-Founder and Executive Director of the Center for Open Science (http://cos.io/) that operates the Open Science Framework (http://osf.io/). COS is enabling open and reproducible research practices worldwide. Brian is also a Professor in the Department of Psychology at the University of Virginia. He received his Ph.D. from Yale University in 2002.

Originally published at sagebionetworks.org. This is part of the series: Voices From the Open Science Movement.

