Public and private benefits of practicing open science

Code Ocean · 4 min read · Dec 21, 2018

Computational reproducibility as a public goods problem

When we say that code and data accompanying an article are ‘computationally reproducible,’ we typically mean that a reader can reproduce “particular analysis outcomes from the same data set using the same code and software” (Fidler and Wilcox 2018). Though Hardwicke et al. (2018) call this a “minimum level of credibility we would expect of all published findings,” a majority of research does not meet this standard. Most code and data are simply not available (Hardwicke and Ioannidis 2018), and mandatory data policies don’t necessarily change this (Stodden et al. 2018). When data are available, as Hardwicke et al. (2018) find, “suboptimal data curation, unclear analysis specification and reporting errors” mean that reproducing results can be time-consuming and painstaking, and may not be possible at all.

Observation: credible research is a public good, so perhaps its scarcity should be no surprise. If we take this idea seriously, we might approach the credibility revolution as a collective action problem and look for coordinating mechanisms that align public and private incentives. Journal policies (Tannenbaum et al. 2018) and funder directives (Wykstra 2017) can certainly help; but another fruitful avenue, I hope, is to stress the benefits that a researcher can expect to internalize when making her work reproducible. Donoho (2017) points in this direction when arguing that “beginning with a plan for sharing code and data leads to higher quality work, and ensures that authors can access their own former work, and those of their co-authors, students and postdocs.”

Reproducible research = accessible research (to your future self)

I learned first-hand of one such benefit when I prepared to publish the code and data for The contact hypothesis re-evaluated, a Behavioural Public Policy article on which I am a co-author (along with Betsy Levy Paluck and Donald P. Green). I uploaded our materials to Code Ocean, where I work as Developer Advocate, as an online-executable compendium titled The contact hypothesis re-evaluated: code and data. This ‘compute capsule’ has gone through five iterations, in which, among other things, I reconfigured the computational environment to run both R and Stata, expanded the metadata (adding a codebook to our dataset, for instance), and fixed a few small mistakes.

https://codeocean.com/2018/10/16/the-contact-hypothesis-re-evaluated-colon-code-and-data/code
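A capsule bundles code, data, and the computational environment, and runs from a single entry point. Purely as an illustration of what chaining R and Stata can look like (this is not our capsule’s actual run script, and every file name below is a placeholder), here is a minimal Python sketch:

```python
"""Minimal sketch of an entry point that chains R and Stata.
Not the capsule's actual run script; all file names are placeholders."""
import subprocess

# Run the R portion of the analysis first (placeholder script name).
subprocess.run(["Rscript", "main.R"], check=True)

# Then run the Stata portion in batch mode; the Stata binary name
# varies by edition (stata, stata-se, stata-mp).
subprocess.run(["stata-mp", "-b", "do", "analysis.do"], check=True)
```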

All of this took time. For this paper in particular, in which we assess the credibility and policy relevance of the contact hypothesis literature, the time was well spent, because our analytic choices are crucial to our claims. But I also discovered an additional, important, and private benefit of having code and data online and fully reproducible: I could make one change to one aspect of the analysis pipeline and quickly see exactly how it affected the entire workflow. In short, when code and data are accessible to everyone, ‘everyone’ includes me, or more precisely, all my future selves who might wish to tinker with the project. Keeping my work in a reproducible state turns that tinkering from a chore into a pleasure.
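To make that concrete, here is a minimal sketch of a pipeline structured so that an analytic choice is a single parameter, and rerunning one script propagates a change everywhere. The file names, column names, and estimation step are all invented for illustration; this is not our paper’s actual code:

```python
"""Sketch of a pipeline whose analytic choices are explicit parameters.
File and column names are invented; not the paper's actual code."""
import pandas as pd


def load_studies(path: str = "data/studies.csv") -> pd.DataFrame:
    # Placeholder path to a hypothetical dataset of study estimates.
    return pd.read_csv(path)


def restrict_sample(df: pd.DataFrame, min_year: int) -> pd.DataFrame:
    # One analytic choice, isolated so it is easy to tinker with later.
    return df[df["year"] >= min_year]


def estimate_effect(df: pd.DataFrame) -> float:
    # Stand-in for the real estimation step.
    return df["effect_size"].mean()


def run_pipeline(min_year: int = 1990) -> float:
    df = load_studies()
    df = restrict_sample(df, min_year)
    return estimate_effect(df)


if __name__ == "__main__":
    # Change one argument and re-run to see it ripple through everything.
    print(run_pipeline(min_year=2000))
```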

One might call this a specialized case of continuous integration, the software engineering practice of frequently merging changes into a shared repository, where automated checks verify each change.
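If a sketch like the pipeline above were saved as, say, pipeline.py, a test along these lines could run automatically on every change to the capsule and flag any divergence from the published estimate. The value and tolerance below are placeholders, not the paper’s actual numbers:

```python
"""Sketch of a CI-style regression test for a reproducible analysis.
Assumes the pipeline sketch above is saved as pipeline.py; the
published estimate and tolerance are placeholders."""
import math

from pipeline import run_pipeline

PUBLISHED_ESTIMATE = 0.39  # placeholder, not the paper's actual figure


def test_headline_result_reproduces():
    # Re-run the full workflow and compare against the reported value.
    reproduced = run_pipeline()
    assert math.isclose(reproduced, PUBLISHED_ESTIMATE, abs_tol=0.01)
```

Moreover, this is a non-trivial, personal gain from practicing open science. I hope that as such benefits become more widely known, we can increasingly expect research to be credible by default.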

For related content, see:

https://tomhardwicke.netlify.com/blog/psychology-reproducibility/

https://tomhardwicke.netlify.com/blog/attrition-scholarly-record/

https://medium.com/codeocean/stata-on-code-ocean-the-case-of-meta-ado-ac9c32be338a

https://medium.com/codeocean/five-reproducibility-lessons-from-a-year-of-reviewing-compute-capsules-de71729ebd8a

For further discussion, read ‘The contact hypothesis re-evaluated’ by Elizabeth Levy Paluck, Seth A. Green and Donald P. Green (open access).

Citations:

Chang, A. C., & Li, P. (2018). Is Economics Research Replicable? Sixty Published Papers from Thirteen Journals Say “Often Not”. Critical Finance Review, 7.

Clyburne-Sherin, A., & Green, S. A. (2018). Computational Reproducibility via Containers in Social Psychology.

Donoho, D. (2017). 50 years of data science. Journal of Computational and Graphical Statistics, 26(4), 745–766.

Fidler, F., & Wilcox, J. (2018). Reproducibility of Scientific Results. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter 2018 Edition). https://plato.stanford.edu/archives/win2018/entries/scientific-reproducibility/

Hardwicke, T. E., & Ioannidis, J. P. (2018). Populating the Data Ark: An attempt to retrieve, preserve, and liberate data from the most highly-cited psychology and psychiatry articles. PLoS ONE, 13(8), e0201856.

Hardwicke, T. E., Mathur, M., MacDonald, K., Nilsonne, G., Banks, G. C., Kidwell, M. C., … & Lenne, R. L. (2018). Data availability, reusability, and analytic reproducibility: Evaluating the impact of a mandatory open data policy at the journal Cognition. Royal Society Open Science, 5, 180448.

Stodden, V., Seiler, J., & Ma, Z. (2018). An empirical analysis of journal policy effectiveness for computational reproducibility. Proceedings of the National Academy of Sciences, 115(11), 2584–2589.

Tannenbaum, S., Ross, J. S., Krumholz, H. M., Desai, N. R., Ritchie, J. D., Lehman, R., et al. (2018). Early Experiences With Journal Data Sharing Policies: A Survey of Published Clinical Trial Investigators. Annals of Internal Medicine, 169, 586–588. doi:10.7326/M18-0723

Wood, B., Müller, R., & Brown, A. N. (2018). Push button replication: Is impact evaluation evidence for international development verifiable? https://doi.org/10.31219/osf.io/n7a4d

Wykstra, S. (2017). Funder Data-Sharing Policies: Overview and Recommendations. figshare.

Seth Green is the Developer Advocate for Code Ocean. He helps authors publish their code on the platform and tries to represent researchers’ points of view within Code Ocean. He spent a few years in a political science PhD program before joining Code Ocean. Find him on Twitter @setgree.

Originally published at blog.journals.cambridge.org on December 21, 2018.
