That’s another fine mess you’ve got us into, peer review

A case study in the faultlines of peer review

Inside a reviewer's head. Credit: John Hain (via Pixabay)

Peer review is how we judge others’ science. But it is inherently subjective. And scientists hate subjective. This leads to all sorts of sticky situations. In my (neuroscience) review of 2016, I wrote about an outburst of anger caused by a rejection at PLoS Computational Biology after 10 months and 4 rounds of review, a rejection for not being “of sufficient interest”. Now the editorial team have given me the low-down on their side of the fracas. And the mess shows how we all – authors, referees, and editors – are placed under needless pressure by the systems of modern science.

Why such a mess? The problem of subjectivity is particularly acute at journals that are choosy about what they publish. For them, part of peer review – by both editors and referees – is to decide whether a paper is “good enough” for that journal: whether it is both good-quality science and a big enough advance over existing work to deserve the spotlight that selection for that journal will throw on it. This is unavoidably subjective. And the mismatch between what the authors, the editors, and the referees feel is “good enough” leads to an undercurrent of mistrust and suspicion in science.

So, the paper in question. Let’s start with the few facts. The authors had commendably posted it to bioRxiv, so anyone can read it. It's a technical, computational neuroscience paper on how spikes are initiated. PLoS Computational Biology is a selective journal. The decision on the paper thus hinged on one question: was it sufficiently interesting and novel to be published there?

What you’d like is a consensus on this question between the referees. And that’s where this paper ran into trouble. (The following is my account from conversations with the PLoS editorial team and the paper’s senior author; I have not seen the reviews or editorial correspondence, as these are rightly confidential. The telling of this tale is to show what a mess peer review can get us into, not to assign blame).

In the first round of review, the paper was seen by two referees. One wrote a supportive review. The other thought the work had clear merit, but indicated that, in their view, the paper was not worthy of PLoS Computational Biology because it was not of sufficiently broad interest. (Crucially, it seems this was not clear to the authors from the comments they received, but was clear – or clearer – to the editors).

Dang, no agreement. Still, not a rejection, and looking good for another round of revisions. A revised version was duly delivered.

But then it turned out that the supportive referee had recently begun a collaboration with the paper’s authors, and not declared it at any stage. A clear conflict of interest. The editors rebuked this referee, and struck the review.

This left one usable review, saying “no” to the crucial question of “is it important enough for the journal”. What to do? Reject or continue?

The editors decided to continue, and found a third referee. (It’s worth noting at this point that finding referees for papers can be an absolute pain, especially for specialist, technical papers. Many potential referees are likely to be asked before someone says yes).

So now there were two referees looking at the revised version. The new referee wrote a supportive review. The remaining original referee wrote a short review but again indicated the fundamental problem in their view – that the paper was not enough of an advance.

Dang, still no agreement. And guess what? The new reviewer? In their review, they declared they were in a collaboration with the paper’s authors. The editors then moved to strike the review, but the reviewer and authors protested: this “collaboration” was the reviewer sending published data to the authors for a different project [edited since the original posting for clarity]. Is that sufficient for a conflict of interest? The review stood.

But what about the crucial question of “is this important enough for the journal”? This left the one constant reviewer saying “really, still no”. And the new reviewer saying “yes”, but with that potential conflict hanging over their review. What to do? Reject or continue?

The editors decided to continue. As this was turning into a complicated case, the senior editor got directly involved; that editor, knowledgeable about the field of single neuron dynamics, reviewed the paper. Having reportedly gone over the authors’ previous papers in PLoS Computational Biology, they felt that the current paper did not advance far enough beyond that work to be worthy of publication. In addition, they had a long list of technical issues. The authors were then told all this.

The authors, angry at the convoluted and contradictory messages they were seeing, wrote a long reply letter addressing the issues, both technical and subjective. They made no meaningful changes to a paper they considered finished, because it was agreed with the editors that the decision would be made on the basis of that letter.

The response: sorry, but we have to reject it – it is not a sufficiently important advance over the authors’ previous work, published in PLoS Computational Biology, to be published in the same journal.

Cue understandable outrage from the authors. So much time wasted, supportive reviews ignored (there was no suggestion that the conflicted reviewers deliberately gamed their reviews in favour of the paper), and a decision hinging not on technical arguments but on the subjective measure of novelty. With the crucial tipping of the scales coming from an editor-reviewer who only entered the fray after three rounds of revision.

And cue regret from the editors about how it had got to this point, where each of a series of understandable individual decisions had left them having to reject a paper after 4 rounds and 10 months. Where reviewers with potential conflicts of interest had muddied the waters; where a senior editor, trying to solve the problem by adding a nominally unconflicted review, had instead angered the authors by tipping the scales of the review process towards “reject” at a very late stage.

[UPDATE: 14:20, 10/1/2017. The senior author has now given their perspective here, publishing their appeal letters to the Editor in Chief, and the response — with permission].


Another fine mess. There is nothing in this tale unique to this journal. Review processes can get out of hand at any journal that practises some form of selection based on merit. And journals that don’t practise merit-based selection can end up publishing absolute garbage (number 2 in that linked article).

There are many possible solutions. One is to do away with selective journals completely. They create an artificial scarcity – particularly now that, except in a few cases, journals are not physically printed, so the length and number of papers they carry are not an issue. But what to replace them with? The mess with the PLoS Computational Biology paper has motivated some careful thinking on the options by the senior author.

One option is post-publication peer review. Publish the paper first, then revise it in response to publicly posted reviews, and a lot of the inherent problems with peer review go away. The paper is published, so no long, potentially career-wrecking delays. The reviews help the authors, and let the reader make their own decision about the work. The editors make the process happen, but no longer stand between the authors and publication, removing the adversarial parts of peer review.

Such options move the subjective judgement from the editors and referees to the reader. But this places a huge burden on the expertise of the reader. Consequently there is also a strong argument for keeping outlets for work that are deliberately selective.

The music industry is a useful analogy. Pre-internet, a handful of record labels dominated, selecting artists and releasing their music. You couldn’t get more subjective: you heard what they chose.

When the Internet went mainstream, it was widely predicted that this would democratise music by removing this highly selective process. The Internet would free music from the tyranny of the record labels, letting anyone put their music out there to be discovered by listeners. The problem is, as anyone who’s been in a local music scene can attest, most music is rubbish. Or just not very interesting. Turns out the Internet is full of rubbish music – MySpace was choked to death on the stuff. Turns out the labels were doing a grand job of selecting music for us to listen to, because they had the resources and reach to get the big picture and know what was good and what was not.

Similarly, in an academic world drowning in published papers, highly selective journals have two advantages over individual scientists. They can see the big picture of the field, see what is and isn’t a big thing. And they can weed out the definite rubbish. They are the A&R people of publishing. We read what they choose, because they have a higher hit rate of choosing good, interesting stuff (but nobody’s perfect).

No new solution is definitively, demonstrably better than the others, or than the status quo. Some are worth trying. But all such things are, after all, sadly, subjective.


[My thanks to the senior author and to the PLoS Computational Biology editorial team for their candid discussions of this mess, created by the strictures of peer review. There are no names here, because this is about the process, not the people.]