On Peer Reviews

Fixing the gatekeepers of science with transparency

Sanghyun Baek
Pluto Labs
11 min read · Mar 12, 2019

--

[Pluto Series] #0 — Academia, Structurally Fxxked Up
[Pluto Series] #1 — Research, the Knowledge Creating Industry
[Pluto Series] #2 — Academia, Publishing, and Scholarly Communication
[Pluto Series] #3 — Publish, but really Perish?
[Pluto Series] #4 — Publish or Perish, and Lost in Vain
[Pluto Series] #5 — On Where they Publish
[Pluto Series] #6 — On Number of Publications
[Pluto Series] #7 — On Citation Fundamentals
[Pluto Series] #8 — On Citing Practices
[Pluto Series] #9 — On Tracking Citations
[Pluto Series] #10 — On Peer Reviews
[Pluto Series] #11 — Ending the Series

The last three posts addressed citations, focusing on how to build a better incentive structure for academia. Citation is in itself an act of giving credit, and so carries significant importance. More to the point, it deserved three in-depth posts because citation counts are used as evaluative metrics at aggregate levels. For citations to work well, it was noted, peer review must also function properly: citations have no universal ground rules, and peer review is the evaluative method that best fits the diverse, contextual decisions of scholarly communication. In this sense, peer reviews are sometimes referred to as the “Gatekeepers of Science”.

This post starts with some examples of how these gatekeepers leave the gates open. I then explain how these problems relate to the incentive structure of peer review, describe transparency as the cure to these incentive problems, and conclude with why all of this matters.

Peer reviews are the “gatekeepers” of science. Source: Roméo A., Unsplash

Peer review is, as most readers already know, a process in which scientific works are verified and evaluated by experts in the same field, i.e. peers. The term usually refers to journal peer review, the process in journal publishing where submitted manuscripts are reviewed before official publication. More broadly, it can also refer to peer (group) evaluations used for socio-cultural and administrative decision making in academic institutions. This post focuses on the former.

In journal publishing, peer review verifies the scientific validity of the intellectual content of submitted manuscripts. Specifically, reviews usually include a recommendation to accept, reject, or revise the manuscript, and the journal’s editors make the final publication decision based on these recommendations. Scientific validity alone, however, does not guarantee publication; the originality of the work and its fit with the journal’s scope and audience are also considered. Peer review is thus not just a process of verification and validation but also one of contextual evaluation.

Moreover, peer review does not stop at evaluating works; it also corrects and improves them. Reviewers often do more than give a binary accept-or-reject recommendation: they give detailed comments on how to improve the submitted work and recommend revisions. In these cases, peer review goes beyond its gatekeeping role and contributes to the creation and enhancement of knowledge. In short, peer review keeps the corpus of knowledge in academia as a whole more robust.

Gatekeepers Fail

Despite these significant roles, peer review has been criticized for decades. Among the notable criticisms are cases where the gatekeepers leave the gates open. Some journals have been accused of manipulated reviews and fake reviewer identities, or even manipulation by editors! Papers are being retracted for inappropriate peer review processes, and the numbers are increasing; there is even a website called “Retraction Watch” that continuously reports retractions of research papers. In February 2018, interestingly, a study denying the value of peer review was published with serious scientific errors, which ironically shows that peer review had failed to correct its errors.

Beyond authors, reviewers, or editors intentionally cheating the system, the peer review system itself often fails to catch errors. As introduced in one of our early posts, even top-notch journals like Science and Nature sometimes fail in peer-reviewing manuscripts. “Sting operations” are famous examples as well: academics have “stung” journals to see whether their review processes actually work by submitting manuscripts with deliberately planted errors, and a ridiculous number of journals accepted them.

More astonishing than those stings are the cases where automatically generated documents were accepted (and later retracted) by journals claiming to be peer-reviewed. Such events make academics question whether any review was done at all. (A deliberate sting manuscript may look scientific at a glance, but papers generated with SCIgen should make no sense to any serious scientist!) Springer Nature, in its statement explaining the retractions, describes the system as “not immune to fraud and mistakes”, and points out that peer review is still “the best system we have so far”, which is why it needs to be made better.

Criticisms go beyond these failures of peer review’s fundamental roles. More often than not, journals blind* the identities of reviewers and authors so that reviews are given without personal interests (i.e. impartially), yet peer review is still claimed to harbor plenty of bias. Another significant pitfall is that reviewers often disagree strongly: one of the best-known AI conferences ran an experiment on this by splitting its program committee in two, and the two committees disagreed on more than half of the accepted papers. It is also problematic that editors’ final judgements can override reviewer recommendations.
(*an APS News article notes that “it is impossible to strip our identities out”)

These last points may be inherent to the very concept of peer review, a necessary evil, but scholars also complain that peer review takes too long (delaying publication, i.e. the “publicizing of findings”), that reviewers are often conservative toward novel ideas, and that peer review is very costly, requiring tremendous time and effort from reviewers. This is by no means an exhaustive list of criticisms of peer review; please refer to this and this list for more details.

Incentives to Keep the Gates

Why do these problems happen? As the theme of this series suggests, it is the bad incentives around peer review. I discussed in one of the earlier posts that a good incentive structure induces desired actions (incentives) and limits improper actions by penalizing them (disincentives). The current incentive structure for peer review lacks both.

The Best Disincentive is Responsibility

There is no penalty system for peer review, and whether reviews are done “responsibly” is probably the core issue among peer review disincentives. To begin with, peer reviews are invited, voluntary work, so it is no surprise that there is no penalty: how can there be a disincentive when the work is done as a favor, without rewards? More significant is that the current peer review system lacks “responsibility”, mostly because reviewers are anonymous most of the time and the actual review records (reports, comments, etc.) are seldom disclosed.

The lack of penalties in peer review also shows up in deadlines. Genuine reviews understandably take substantial time, but reviewers often submit past the deadline because there is no penalty. This behavior is hardly surprising: in addition to the lack of penalties, there is no reward for all the time and effort put in.

Let Reviews Count as much as Publications

It would not be entirely accurate to say that peer review comes with no reward at all. Some authors acknowledge reviewers in the acknowledgement sections of their manuscripts (though the reviewers are anonymous, and I really can’t see how that counts as a “reward”), and some journals and publishers present special thanks or prizes to their best reviewers (for timeliness or quality). An even smaller number of journals compensate reviewers directly, offering options such as cash, a short free subscription to the journal, waivers for future fees, or donations to other researchers. The cash compensation is often so small that it seems to undervalue the significance of peer review, and free subscriptions are of no benefit to many researchers who already have access through their affiliations. For early career researchers specifically, being invited to review can itself be a form of credit, as it amounts to being honorarily certified as a “peer”.

So peer review does not come entirely without rewards. Considering the hard work required of reviewers, however, these hardly feel like real compensation. Most importantly, peer review is not well incentivized in practice because the activity of reviewing is not credited toward academic performance nearly as much as publishing is. As noted earlier, the practical incentives for researchers come in the form of career opportunities such as jobs, promotions, tenure, and research funding. Publishing more papers has carried the most weight in these decisions; it is time we valued the act of reviewing just as much.

Publishing the Reviews to Count

Peer review is neither well compensated nor properly disincentivized. Should we then give more rewards to good reviewers and penalties for late, low-quality reviews? The best answer is to open up peer review. Transparency may not directly solve every problem, but it is definitely the key to a better system.

The most disappointing thing about peer review is that so much of it is, as the title of an earlier post put it, lost in vain. Most journals still do not publish or share information on peer review, not even its metadata, so much of the potential value of peer reviews is being missed. Opening up and sharing peer review information is not only the key to better systems; there are other benefits as well. One typical example is that published reviews can serve as training material for prospective reviewers. Open reviews can also encourage civil back-and-forth discussion between authors and reviewers and help readers understand the contextual history of the publishing process. For prospective authors, published review records can serve as a seal showing that a journal encourages constructive criticism.

Transparency would, above all, help raise responsibility in reviews. Since academia is a prestige economy where reputation and honor are so important, published review reports will encourage academics to give more genuine, responsible, high-quality reviews. This is a way to raise the overall quality of peer review without directly disincentivizing anyone. Moreover, once review reports are published alongside manuscripts, we can “begin to talk” about disincentives or penalties for irresponsible, belated, unethical, or low-quality reviews. When review reports are not shared, as is the case now, conflicts are resolved mostly through rebuttals from the submitting authors or intervention by editors. (It is funny to even call these “conflicts”; such reviews should simply be regarded as bad, but unless they are publicly shared they will remain mere conflicts between authors and reviewers.)

Transparency also matters for better rewarding mechanisms. The compensations by journals and publishers mentioned above are available even when reviews are not published, because the journals themselves hand out those prizes. But when it comes to actual recognition of reviewing as academic performance, it is not the journals who do the recognizing but third-party institutions like universities and funding agencies. For these institutions to better incentivize research endeavors such as peer review, the first step is openly sharing the information. Publons is a novel project in this regard (though it is proprietary, having been acquired by Clarivate in 2017). Journals like PeerJ or F1000 also publish review records, which makes it possible to incentivize peer review.

Moreover, opening up peer review information is scientifically a great step forward. Just as the scientific method teaches us, we should not rush to final judgements before the data have been collected and analysed. For decades there have been many proposals for peer review models and methodologies, but few have actually been implemented in practice, and most studies end up concluding that the real implications await empirical testing in real-world journals, which means we never find out unless journals actually try them. A better approach is clearly to collect the data before we draw conclusions.

Such proposals often require substantial changes to the publishing or editorial processes of journals and publishers. Transparency, by contrast, is something they can adopt right now without much trouble. For a normal, academic, peer-reviewed journal, peer review is (and must be) already in practice, and the information about it is something publishers already have. All that is required is to share this information and leverage its potential value to build better incentive systems for peer review and academia.

There is a phrase about the problems of political power, “who watches the watchmen?” (from the Latin “Quis custodiet ipsos custodes?”). Academia borrows it to ask “who reviews the reviewers?” Peer review is understood to be the (not-so) silent guardian and watchful protector of the foundation of science. But who protects and guards the reviewers? An editorial in BJSM, a BMJ journal, with exactly that title begins with a sentence that captures the theme of this post: “Being open and transparent creates trust.” Being open and transparent about peer review is the best way to make it work under a better system, and thus to make academia more reliable and robust.

Why do we Care?

I conclude this post with some points on why it is so important to improve the incentives around peer review. As already noted, peer review is the gatekeeper of science: it maintains the quality of the literature. A high-quality corpus of knowledge is good in itself, but it is more important to note that the way our knowledge evolves is cyclic. As the famous phrase “standing on the shoulders of giants” suggests, researchers advance knowledge by building on past work. A great piece of knowledge becomes a pathway to even more and better knowledge, while a manipulated, low-quality study can fill the whole literature with crap. Peer review not only performs this quality-control role, it also improves quality by contributing to the submitted manuscripts themselves.

Another big issue with peer review incentives is that academia is facing an author-reviewer imbalance. Many journal editors have a hard time finding reviewers. The number of submitted manuscripts is growing exponentially, more than a million every year and doubling roughly every nine years, and it is getting harder and harder to match reviewers to them. Compensating reviewers directly may enlarge the pool of willing reviewers, but the ultimate goal is probably a better incentive structure for academia, one that better balances submissions against reviews.
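As a rough back-of-the-envelope sketch of what “doubling every nine years” implies: that doubling period translates into an annual growth rate of about 8%, and the total reviewing workload scales with it. The concrete figures below (one million submissions per year, three reviewers per paper, five hours per review) are illustrative assumptions, not measured values.

```python
# Back-of-the-envelope sketch: what "doubling every 9 years" implies for
# the reviewing workload. All concrete figures below are illustrative
# assumptions, not measured values.

doubling_period_years = 9
annual_growth = 2 ** (1 / doubling_period_years) - 1  # ~0.08, i.e. ~8% per year

submissions_per_year = 1_000_000   # assumed current volume
reviewers_per_paper = 3            # assumed
hours_per_review = 5               # assumed

def reviewer_hours(years_from_now: int) -> float:
    """Total reviewer-hours needed per year, if growth continues unchanged."""
    submissions = submissions_per_year * (1 + annual_growth) ** years_from_now
    return submissions * reviewers_per_paper * hours_per_review

print(f"Implied annual growth rate: {annual_growth:.1%}")
print(f"Reviewer-hours needed today:      {reviewer_hours(0):,.0f}")   # ~15 million
print(f"Reviewer-hours needed in 9 years: {reviewer_hours(9):,.0f}")   # roughly double
```

Under these assumptions, if the pool of willing reviewers does not grow at the same rate, the gap between submissions and reviewing capacity also roughly doubles every nine years, which is exactly the imbalance described above.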

Last but not least, the sophisticated, contextual, in-depth, and highly intellectual aspects of science can only be evaluated by actually reading and understanding the work, and that is exactly what peer review does. Peer review, at least as a concept (its formats may change), will always remain the best method of contextual and comprehensive evaluation, something metrics can never substitute for. Despite all the criticisms and caveats, “peer review is the least worst we have.” That it has many problems does not mean we should abandon it completely, nor, obviously, that we should stick to the current model. We need to find the “middle ground”, where better incentives support improved models. Transparency will be the elixir.

As the last point of the series, better incentives for peer review have been discussed. The upcoming final post will conclude the series with some closing remarks. Thank you as always for your interest, and please CLAP, SHARE, and COMMENT for more discussion.

[Pluto Series] #0 — Academia, Structurally Fxxked Up
[Pluto Series] #1 — Research, the Knowledge Creating Industry
[Pluto Series] #2 — Academia, Publishing, and Scholarly Communication
[Pluto Series] #3 — Publish, but really Perish?
[Pluto Series] #4 — Publish or Perish, and Lost in Vain
[Pluto Series] #5 — On Where they Publish
[Pluto Series] #6 — On Number of Publications
[Pluto Series] #7 — On Citation Fundamentals
[Pluto Series] #8 — On Citing Practices
[Pluto Series] #9 — On Tracking Citations
[Pluto Series] #10 — On Peer Reviews
[Pluto Series] #11 — Ending the Series

Pluto Network
Homepage / Github / Facebook / Twitter / Telegram / Medium
Scinapse: Academic search engine
Email: team@pluto.network
