What might peer review look like in 2030?

The new report What might peer review look like in 2030? is based on sessions at the SpotOn 2016 conference held in London. It presents a range of perspectives and makes recommendations to the research community.

Recommendations to the research community

The report’s recommendations for the research community as we head towards 2030 are:

  • Find and invent new ways of identifying, verifying and inviting peer reviewers, focusing on closely matching expertise with the research being reviewed to increase uptake. Artificial intelligence could be a valuable tool in this.
  • Encourage more diversity in the reviewer pool (including early career researchers, researchers from different regions, and women). Publishers in particular could raise awareness and investigate new ways of sourcing female peer reviewers.
  • Experiment with different and new models of peer review, particularly those that increase transparency.
  • Invest in reviewer training programs to make sure that the next generation of reviewers are equipped to provide valuable feedback within recognized guidelines.
  • Work towards cross-publisher solutions that improve efficiency and benefit all stakeholders. Portable peer review has not taken off at any scale, but could make the publishing process more efficient for all involved.
  • Ensure that funders, institutions and publishers work together to identify ways to recognize reviewers and acknowledge their work.
  • Use technology to support and enhance the peer review process, including finding automated ways to identify inconsistencies that are difficult for reviewers to spot.
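One concrete instance of the automated inconsistency detection recommended above is the GRIM test, a published technique (not named in the report) that flags reported means that are arithmetically impossible for a given integer sample size. A minimal sketch:

```python
def grim_consistent(mean: float, n: int, decimals: int = 2) -> bool:
    """Return True if `mean`, reported to `decimals` places, is achievable
    as the average of `n` integer-valued observations (the GRIM test)."""
    target = round(mean, decimals)
    # Only integer totals near mean * n could produce the reported mean.
    approx = int(mean * n)
    for total in range(approx - 1, approx + 2):
        if round(total / n, decimals) == target:
            return True
    return False

print(grim_consistent(5.18, 28))  # 145/28 rounds to 5.18, so achievable
print(grim_consistent(5.19, 28))  # no integer total gives 5.19 for n = 28
```

A check like this is trivial for software but tedious for a human reviewer, which is exactly the kind of inconsistency the recommendation has in mind.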

Perspectives

Artificial intelligence applications in scientific publishing
 Chadwick C. DeVoss, Founder and President, StatReviewer

  • Early AI technologies are already being used to address certain issues, for example identifying new peer reviewers and detecting plagiarism, poor reporting, flawed statistics, and data fabrication.
  • Full automation of the publishing process will be possible in the future, but this possibility raises a quandary: automating a process that determines what counts as “good science” brings risks and ethical dilemmas, yet it would also expedite scientific communication. If science becomes more open, however, the ethics of full automation become less problematic, because the publishing process would no longer determine scientific importance.
  • The rise of predatory journals shows that there is not enough capacity to process the amount of scientific writing that is being generated. AI will help through increasing overall capacity (finding new reviewers, creating automated reviews etc.) and through automated retrospective reviews of standards compliance.
  • We have to be wary of the point at which an unsupervised AI determines the direction of scientific research, because true discovery should be an entirely human idea.
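The reviewer-identification applications mentioned above could, at their simplest, rest on text similarity between a manuscript and candidates' past publications. A minimal sketch, in which all names and texts are hypothetical and real systems would use far richer features:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank_reviewers(abstract: str, reviewers: dict[str, str]) -> list[str]:
    """Rank candidate reviewers by word overlap between the manuscript
    abstract and the text of their own publications."""
    manuscript = Counter(abstract.lower().split())
    scored = {name: cosine(manuscript, Counter(text.lower().split()))
              for name, text in reviewers.items()}
    return sorted(scored, key=scored.get, reverse=True)

# Hypothetical candidates, for illustration only:
candidates = {
    "reviewer_a": "deep learning for protein structure prediction",
    "reviewer_b": "peer review incentives in scholarly publishing",
}
print(rank_reviewers("incentives and recognition in peer review", candidates))
```

Matching expertise this closely to the research being reviewed is what the report suggests could increase reviewer uptake.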

Peer review, from the perspective of a frustrated scientist
 Elodie Chabrol, Postdoctoral Research Associate in Neuroscience, University College London

  • The scientific publishing process is subject to numerous flaws. Those that stood out in discussions at SpotOn 2016 are single-blind peer review, a lack of incentives for peer reviewers, and the failure to invite early career researchers who may make excellent reviewers.
  • In single-blind peer review, the reviewers are hidden from the authors but know who the authors are, so some reviewers may see the authors as competitors and review a paper more harshly.
  • New initiatives such as Authorea, Overleaf, and PaperHive are improving peer review and publishing, and facilitating collaboration between researchers.
  • Introducing double-blind peer review as standard would mean authors are judged on the work they present rather than on their own name or that of their boss or colleagues.

The history of peer review, and looking forward to preprints in biomedicine
 Frank Norman, Information Services Manager, Crick Institute

  • Peer review has only relatively recently become widely adopted in scientific publishing. It should not be treated as a sacred cow, but rather as the currently dominant practice in a long and varied history of reviewing practices.
  • Open access publishing has challenged peer review, with the idea that dissemination of research is at least as important as validation of research.
  • A further challenge to peer review is likely to come from preprints, which are scientific manuscripts uploaded by authors to an open access, public server before formal peer review. Preprints have been widely adopted by physicists through the arXiv server, but publishing practices and sharing cultures vary greatly between different research fields. Most preprints are also submitted to journals and subsequently peer-reviewed and published, but some are not.
  • A new set of research behaviors will emerge around reading, interpreting and responding to preprint literature. We may be moving to a world where some research is just published ‘as is’, and subject to post-publication peer review, while other research goes through a more rigorous form of review including reproducibility checks.
  • New artificial intelligence (AI) tools such as Meta and Yewno will help by providing new ways to discover and filter the literature.

The sustainability of peer review
 Alicia Newton, Senior Editor, Nature Research

  • The amount of science, technology, engineering, and mathematics (STEM) literature is growing rapidly. Peer reviewers spend between four and six hours reviewing a paper, so if each manuscript is seen by two reviewers, then between 13 and 20 million person-hours may have been spent on peer review in 2015.
  • It is therefore clear that reviewing papers is a large burden on researchers’ time. However, efforts are not equally distributed. There are geographic and gender biases.
  • The amount of scientific literature published is growing, so if peer review is to remain sustainable, editors and publishers will have to find ways to reach new reviewers. Seeking reviewers who vary in gender, location and career stage would help editors, and would encourage researchers to actively seek out literature that may be missing from their standard citation lists.
  • Another potential avenue is training editors on unconscious bias and setting targets for reviewer diversity. Editors need to be able to know and trust reviewer expertise and knowledge, so better ways of tracking people who are already reviewing for other journals could help match editors to experienced reviewers.
  • For those who have never reviewed before, a reviewer training program could be helpful, especially if it ended in well-recognized accreditation.
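The person-hours estimate above is back-of-envelope arithmetic, and can be reproduced as follows; the manuscript count here is a hypothetical assumption chosen only for illustration, since the summary gives just the per-review hours and the number of reviewers:

```python
# Rough estimate of annual peer-review effort.
manuscripts = 1_600_000       # assumed manuscripts reviewed in 2015 (hypothetical)
reviewers_per_paper = 2       # each manuscript seen by two reviewers
hours_low, hours_high = 4, 6  # hours spent per review

low = manuscripts * reviewers_per_paper * hours_low
high = manuscripts * reviewers_per_paper * hours_high
print(f"{low / 1e6:.1f} to {high / 1e6:.1f} million person-hours")
```

Even under conservative assumptions, the total runs into the millions of hours, which is the basis for the sustainability concern.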

Improving integrity through openness
 Elizabeth Moylan, Senior Editor (Research Integrity), BioMed Central

  • Peer review can be slow, biased and open to abuse. New and more challenging situations range from ethical lapses involving individual manuscripts to large-scale manipulations and fraud.
  • We are witnessing a rise in misconduct on an industrial scale fuelled by an increasing pressure on authors to publish. As a result, we are likely to see a rise in publishers using technology to verify that researchers are who they say they are, that manuscripts are not plagiarized, and that figures and results are free of inconsistencies.
  • Initiatives are already improving the integrity of the peer review process through transparency, for example, by naming the handling editor and/or peer reviewers, sharing the content of the reviewers’ reports (transparent peer review), or sharing content and reviewer names (open peer review). However, reviewers are less willing to undertake open peer review (particularly early career researchers) and it is not uniformly embraced across all subjects.
  • Other ways in which to open up the peer review process include reviewers commenting on each other’s reports (collaborative peer review or cross-reviewer commenting), reviewers and authors exchanging ideas (interactive peer review), and reviewers choosing what manuscripts they review (open participation).
  • Published articles do not remain unchanged forever, and we need a system that facilitates post-publication changes. “Living” articles where sharing what researchers are going to do (pre-registration) and how they do it (data) could radically reshape the publishing landscape.

Formal recognition for peer review will propel research forward
 Andrew Preston, CEO and Co-Founder, Publons
 Tom Culley, Marketing Director, Publons

  • Disturbing problems in scientific publishing include a reproducibility crisis, significant delays in publishing and disseminating peer reviewed findings, a surge in retractions, and admissions of fraudulent or questionable research practices. This is leading to increasing skepticism in regard to the quality and integrity of research. However, in the era of fake news and distrust in reporting, evidence-based decisions are arguably more important than ever.
  • Thorough peer review can mitigate most of the issues in research, and in fact improve the quality of research papers before they are published, but the system is overburdened and under-developed.
  • Research is overrun by a debilitating ‘publish or perish’ culture that almost exclusively rewards experts for publishing at nearly any cost. The same system offers no formal incentives to the very experts relied upon to filter out false, fraudulent or misleading submissions through peer review.
  • Simple steps can be taken to bring balance back to the system. A starting point is to formally recognize and reward peer review efforts so it stands a chance against the disproportionate rewards for publishing.
  • Academic publishers and journals have started to do this by publishing the names of reviewers annually and providing certificates to reviewers. This is better than nothing, but has shortcomings. If certificates are given no credence when evaluating performance or allocating funds, they are no more than wall decorations. The fragmented and imprecise nature of this recognition provides no basis for incorporating peer review contributions into evaluation methodologies, as there is no way to benchmark peer review outputs. Altruism and a thank-you are a tough sell when pitted against the very tangible rewards for publications and citations.
  • Another approach is for publishers to start paying reviewers or offer in-kind benefits, but there are a number of points to keep in mind in regard to this solution. Paying referees without measures to control for the quality of reviews could lead to bad peer review, defeating the purpose of the intervention. A survey suggests that researchers prefer formal recognition to cash or in-kind payment for reviewing. Researchers value career advancement and institutions (not publishers) make career advancement decisions.
  • Institutions, world ranking bodies and funders have the financial or decision-making power to influence the actions of researchers. Institutions could give greater weight to peer review contributions in funding distribution and career advancement decisions. If funders factored in peer review contributions and performance when determining funding recipients, then institutions and individuals would have greater reason to contribute to the peer review process. If world ranking bodies gave proportionate weighting to the peer review contributions and performance of institutions, then institutions would have greater reason to reward the individuals tasked with peer reviewing. If institutions and funders make it clear that peer review is a pathway to progression, tenure and funding, researchers will make reviewing a priority. For peer review to be formally acknowledged, benchmarks are necessary. New data tools can assist.

Originally published at RealKM.
