Science deserves a gift this Christmas

Let’s make 2017 a memorable year for science

The end of every year is a time of reflection. There is a lot to reflect upon this season, from several big political events, to all the personal changes we might have experienced over the past year.

One of the things I’ve been reflecting on over the past several weeks is scientific peer review. This year I was engaged as a statistical advisor for Psychological Science, a role which involves (in part) acting as a routine peer reviewer with an eye toward the statistical methodologies reported in manuscripts. I am also one of the incoming editorial board members at Journal of Experimental Psychology: General, and we have been discussing matters of peer review policy.

I think many of us have come around to the view that the current peer review system is flawed. Currently, three (plus or minus two) people each spend five (plus or minus five) hours of their time deciding whether to give their approval to a particular version of a manuscript. The editor weighs their often disparate opinions and decides whether it deserves to be published. The current system is difficult, time-consuming, costly (because publishers run the show!), and unreliable.

At the root of the problem is a confusion between science and publication. Science is a process: tentative claims are put forward, debated, knocked down or bolstered, all continuously. Publication, by contrast, is traditionally an event, requiring a finality that does not exist in science. There must be a version of record, and that version must be evaluated. So we ask peer reviewers to evaluate, but due to time constraints we can ask only a small number of them. Because publication is an event, peer review is an event: the reviewers’ word, in conjunction with the action editor’s decision, is final, because publication is final. This is a poor match for the scientific process.

One aspect of peer review that particularly worries me is that we often accept less than we should in a manuscript, because the scientific culture allows it. Many manuscripts offer limited information in the methods and results. Description of the materials is sparse; often one could not replicate an experiment from the description given in the manuscript. Description of the results is likewise sparse: researchers often provide few, if any, descriptive statistics or plots of the data, choosing instead to focus only on the inference of interest.

From the reviewer’s perspective, we are reviewing blind. Sure, the authors may report a significant correlation, but they didn’t show the data, so why should a reviewer trust the inference? There seems to be a corrupt cultural assumption of “good faith” in peer review, a sort of twisted Golden Rule: in peer review, ask only for what you would like to be asked. I have heard this rule explicitly expressed with respect to asking for new experiments or sharing data, and it concerns me.

We do not like to make work for ourselves, and so we conveniently assume that others have done due diligence with their analysis: checking assumptions, making sure that the variables not presented do not yield evidence counter to the conclusions, and so forth. This is not always the case, and papers of low informational value are published. Because publication and peer review have a finality to them, people treat papers that have passed this hurdle with deference. Researchers, being people, then cite these low-information papers opportunistically, making theoretical progress difficult.

But the guiding principle in scientific peer review should be skepticism, and proper skepticism requires information. How can we help reviewers be more appropriately skeptical? I propose some principles to enable skepticism in peer review to thrive.

  • Facts do not exist solely to serve the purposes of authors. Data represent facts about what happened in an experiment, and are critical to understanding the phenomenon in question. Currently, the culture of psychological science is one in which the data from an experiment are used once: to serve the purpose of the authors of a particular manuscript. The facts surrounding the experiment, including data and materials, are treated as the property of an individual or lab that determines when and with whom they are shared, and they are not even shared with reviewers. This aspect of psychological research culture is anti-scientific.
  • Ad hoc reviewers are entitled to review on their own terms. Reviewers are asked to review because of their scientific judgment, and the integrity of science depends on researchers exercising that judgment in peer review. Editorial policy can augment peer reviewers’ judgments but should not trump them, and eliminating a reviewer because of a disagreement over scientific judgment is ethically dubious.
  • Reviewers are entitled to information to the fullest extent possible. The peer review process should be skeptical by default, and hence peer reviewers are entitled to check the information behind the claims in a manuscript. Always assuming good faith and due diligence on the part of authors allows a corrupt scientific culture to develop where we scratch one another’s backs with low standards. Peer reviewers have both a right and a duty to resist this.
  • Reviewers are not special. Reviewers act as a temporary proxy for future readers, not as a panel with special rights. Future readers are entitled to apply the same level of skepticism as reviewers, and hence are entitled to the same information as the reviewers. Peer review is not an event; it is a process, and hence future readers must be engaged in it as well. The reliability of peer review will be substantially improved by viewing it as a continual process in which all readers engage.

Of course, these principles must be modulated in the presence of ethical concerns such as participant privacy.

What do these principles imply? Among other things, they imply that reviewers should have access to data, where possible. Whether, and how, they use those data is up to them; but they must be able to adequately check the claims in a manuscript. It is true that at today’s unsustainable levels of publication, adequately assessing the data in every manuscript we review is impossible; we publish far too much, and far too readily, in the current arms-race atmosphere. But reviewers have a right to the information nonetheless.

The principles also imply that data and materials sharing should be the norm, and that reviewers have the right to ask for this on behalf of future readers. Reviewers are the first outsiders to get a chance to engage with the work represented in the manuscript, but they should not be the last. Science is done by communities of researchers, not just individual labs, and reviewers act as representatives of that community. “Post-publication peer review” is a misnomer; it is simply peer review, correctly conceived.

Finally, the principles also imply a particular relationship between journal editorial policy and peer review that may seem controversial. In my view, the journal is a vestigial part of the scientific process, and, where necessary, should serve the community of peer reviewers, not vice versa. The peer review relationship, broadly conceived — that is, skeptical science — is fundamental, and journal policies or society “ethical” guidelines that come into conflict with this relationship run counter to good scientific culture and should be resisted.

These principles are what I had in mind when I developed the Peer Reviewers’ Openness Initiative (PRO). Signatories of the Initiative (over 300, currently) agree that they will request public sharing of data and materials from authors of manuscripts they review, or a description in the manuscript of the reasons why the authors cannot share the data or materials. If neither is forthcoming, then the reviewer will, regrettably, write a review focusing on the inadequacies of the manuscript with respect to data and materials.

The PRO Initiative, which officially begins January 1, 2017, recognizes the centrality of the peer review relationship and the duty of reviewers to address the inadequacy of data and materials reporting in manuscripts; it protects the autonomy of authors by allowing them to give a reason why the data cannot be shared; and it respects future readers’ right to access the data, or to assess the reason given in the manuscript why data are not shared. My coauthors and I hope that the PRO Initiative, along with other positive developments such as the TOP guidelines for journals and funding agencies, points toward a scientific future in which transparency and openness are the norm.

But science is only what we make it; our own standards collectively determine the scientific community’s standards. It is my sincere hope that 2017 will be a banner year for openness in science, but it will only be one if we act. Please join us in the PRO Initiative, and advocate for the TOP guidelines at journals where you review or edit. Together, we can give science the best year it’s had in a long time.