Peer Review Week 2016 and the three ‘R’s — rapidity, reward and recognition

As Peer Review Week 2016 — now in its second year — draws to a close, once again we’ve seen this important yet often under-valued academic publishing practice put under the microscope by those involved in the scholarly publishing ecosystem.

While its importance is unquestionable, many in the industry criticise the process as lacking transparency, being unnecessarily long and laborious, and often delaying publication by months. More papers are reviewed than published, and the system groans under the sheer weight of an ever-increasing number of research papers, which are essentially queuing for publication across all academic disciplines.

Academics generally wish to be rigorously reviewed by their peers and believe this process to be critical to publishing their work, but they also tend to agree that the system could do with improvement. Although most of the criticism revolves around transparency and the length of the process, many in the field are also concerned about the quality of reviews and call for training on how to provide appropriate feedback. According to an article by Elaine Devine in The Guardian, many researchers “would like the option to attend a workshop or formal training on peer reviewing”.

Ideas to improve the process are numerous, but the enduring popularity of peer review lies in its tradition and respectability. So how can something so conventional, institutionalised and deeply entrenched within the community adapt to innovation and new business models? And which of these models are the most noteworthy and likely to take off?

The need for speed

Offering a potential solution to both the length of the process and its lack of transparency is F1000, a post-publication peer review service which identifies and recommends published research in the biomedical sciences. A faculty of over 5,000 members, including scientists and clinical researchers, reviews and recommends scholarly publications, and every recommendation is carefully checked by Section Heads.

To avoid possible bias, faculty members are peer-nominated and prohibited from reviewing articles on which they are listed as an author, and every recommendation is attributed to its reviewer with a link to their profile to ensure transparency. Because every recommendation is vetted by fellow faculty members, they can voice criticism if they believe a recommended article should not be included in the F1000 library. And since F1000 focuses predominantly on reviewing research after publication, it eradicates the bottlenecks that traditional peer review creates before academic work is published. The F1000 model seems to answer many of the prevalent issues, but it remains to be seen whether the academic community will take to publishing pre-review and accept such a dramatic shake-up of its traditional peer review model.

Peer Review’s Robin Hood?

To encourage a fairer, more inclusive and potentially speedier review process, Veruscript, a new journal publisher we featured in our post Fairness and Equality — the next hurdle for Open Access, promises a process which benefits the author, publisher and reviewer. The ‘Article Processing Charge’ (APC) that authors pay to publish in one of its Open Access journals is offered to reviewers as a reward, in the form of a cash payment of £100, a donation to a special fund, or credit against future papers they may publish themselves.

While Veruscript’s attractive reward scheme will certainly encourage more reviewers to participate in the peer review process, it does not appear to offer a solution regarding transparency or reviewer qualifications. Reviewers are vetted by editors but may be recommended by authors, and both authors and reviewers remain anonymous. This reduces possible bias but does not offer more transparency.

Credit where credit is due

Another common gripe the scholarly publishing community has with traditional closed peer review is the lack of recognition given to reviewers who participate in the process. In a world which thrives on kudos, prestige and reputation, it seems almost aberrant that reviewers are not publicly commended for their contributions. To this end, Elsevier are currently piloting several concepts aimed at recognising reviewers and rewarding them for their hard work. These include an open peer review model, where reviewers (named or anonymous) have their contributions and recommendations published alongside the research, and Cross Review, a collaborative peer review process where reviewers can take part in a closed forum and have more involvement in decision-making.

Last but not least, the publisher’s Reviewer Recognition Platform, launched last year, is a particularly significant development. The platform contains profile pages for reviewers across many of the publisher’s journals, allowing them to track their status, access their reviewing history and volunteer to review for other journals. They can also claim discounts on books and collect ‘certificates of recognition’. The site now has over 50,000 profile pages for peer reviewers.

Peer review is a crucial part of academic publishing when it comes to ensuring the quality and accuracy of scholarly work. The system isn’t broken, but it could do with some invigoration and fresh innovation to benefit everyone involved. Some of the new developments and models in this space are exciting and tackle the challenges posed by traditional peer review head on. I think that over the next five years we will see many of these pioneering concepts adopted throughout the community and ecosystem, and I look forward to many future Peer Review Weeks and to taking stock of this evolution.

By Byron Russell