CIHR should immediately return to international standards for peer review

CIHR has implemented several dramatic changes to its funding programs. The current competitions are being called “pilots,” implying that new systems are being tested on a limited scale before full deployment. This is not the case: the previous system has been completely eliminated and replaced with untested and controversial methods for awarding the more than $500 million per year distributed through the open programs. This is in itself breathtakingly reckless from an operational standpoint, described by the president of CIHR as “changing the motor of a plane while in flight,” as if that could possibly be a good idea.

One major change is the elimination of peer review panels. Panels were organised by discipline, and the chairs drew on their experience and expertise to assign grants to appropriate reviewers. Reviewers came to the face-to-face panel discussions with completed evaluations of their assigned grants, prepared to explain and defend their assessments and to work toward consensus where there were disagreements.

This way of evaluating grants is the internationally accepted standard for excellence in peer review, a standard to which the new CIHR system bears no resemblance. Now, grants are assigned using a keyword algorithm, and many reviewers have been assigned grants they did not consider themselves qualified to review. Reviewer interaction is optional and online-only through what is essentially an internet chat room. Reviewers are not required to justify, explain, or defend their evaluations to the applicant, their fellow evaluators, or the “virtual chair” assigned to oversee the process. Scores are used even if the reviewer fails to participate in online discussion—even if they provide no review at all. Finally, each reviewer has a different set of grants to review, meaning there can be no group consensus derived from comparing the same set of grants.
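
To make concrete why keyword-based assignment fails in ways an experienced chair would not, here is a minimal sketch of a naive keyword-overlap matcher. It is purely illustrative: I do not know the internal details of CIHR's actual matching system, and the reviewer profiles, keywords, and scoring rule below are invented for the example.

    # Illustrative sketch of naive keyword-overlap reviewer matching.
    # The profiles and scoring rule are invented; this is not CIHR's actual code.

    def overlap_score(grant_keywords, reviewer_keywords):
        """Count keywords shared between a grant and a reviewer profile."""
        return len(grant_keywords & reviewer_keywords)

    grant_keywords = {"stem cells", "cardiac", "regeneration"}

    reviewer_profiles = {
        "cardiologist running heart-failure trials": {"heart failure", "cardiology", "clinical trials"},
        "plant developmental biologist": {"stem cells", "regeneration", "arabidopsis"},
    }

    # Rank reviewers purely by exact keyword overlap, highest first.
    for name, keywords in sorted(reviewer_profiles.items(),
                                 key=lambda item: overlap_score(grant_keywords, item[1]),
                                 reverse=True):
        print(f"{name}: {overlap_score(grant_keywords, keywords)} shared keywords")

    # The plant biologist (2 shared keywords) ranks above the cardiologist
    # (0 shared keywords): exact-string matching misses the synonyms and
    # context that a panel chair would catch immediately.

A human assigner reading the application would reverse that ranking without hesitation; that gap between string overlap and actual expertise is exactly what reviewers experienced when they received grants outside their competence.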

After considering a presentation by CIHR leadership about the reforms, Richard Nakamura, the director of the Center for Scientific Review at the US National Institutes of Health, stated, “As we look to the future, we still consider face-to-face reviews the gold standard for NIH reviews.” [1] The Canadian research community considers it the gold standard as well.

It is difficult to evaluate the relative efficacy of different peer review systems. Any process, from rolling dice to the most rigorous review imaginable, results in the distribution of grant funds to eligible scientists. For that reason, perhaps the most important component of a peer review process, one that is both necessary for its success and goes a long way toward ensuring it, is that applicants and reviewers (who are, after all, the same people on different days) buy into it: they believe in its integrity and value. Panels generally earned this trust through their transparency (you knew who was at the table when your grant was discussed) and through feedback that demonstrated your proposal had been carefully considered by experts.

Even beyond issues of community buy-in, trust, and review quality, disbursing funds is not the only function grant review serves in the scientific ecosystem. Peer review panels are sites of mentorship and community development where values and standards are communicated during the shared experience of review and face-to-face interactions. I doubt CIHR management would agree to make complex decisions within their own organisation without ever sitting down to talk in the same room.

Constructive and substantive feedback to applicants (and the opportunity to submit revised proposals in response to feedback — another feature eliminated by CIHR) is especially critical to new investigators. Around the world, scientists cite experience on peer review panels as formative in the development of their expertise and their growth into scientific leaders. In fact, prior to the reforms, CIHR also extolled the value of reviewer feedback and advised early-career researchers to pursue panel experience for the same reasons [2]. The elimination of these practices suggests that some core values have been lost at the agency.

The CIHR Act recognises the role peer review plays in developing a culture of scientific excellence. The Act does not simply require that the agency disburse funds: it requires that the agency employ internationally accepted standards for excellence in peer review and that its funding systems serve “to attract, develop and keep excellent researchers” and “build capacity of the Canadian health research community through the development of researchers and the provision of sustained support for scientific careers in health research.” From its inception until the reforms, CIHR unambiguously met these requirements by building a culture of rigour and excellence centred on a face-to-face peer review process controlled by scientists and modelled on international best practices.

It is clear from social media discussion among participants in the recent Project Grant “pilot” that there are serious problems with the new system, and that it falls short of the standards set by the previous panel-based review system:

Day 1: https://storify.com/hwitteman/cihr-reforms-researchers-responses

Day 2: https://storify.com/hwitteman/cihr-reforms-project-scheme-discussion-day-2

Day 3: https://storify.com/hwitteman/cihr-reforms-project-scheme-discussion-day-3

Despite the best efforts of most reviewers and virtual chairs and the heroic efforts of CIHR staff to support this process, many grants did not receive adequate reviews and many more grants were discussed either very little or not at all in the online chat rooms. Scientists are human beings. Without the culture of high standards and the incentives and peer pressure that come from sitting in a room with your colleagues — all experts in your field — it is clear that review suffers to the point that applicants and conscientious reviewers do not believe that proposals are being fairly evaluated.

These reforms were designed from a narrow point of view that prized efficiency over all other considerations. “Efficiency” that eliminates core components of a scientific culture of excellence may look good on a spreadsheet in the short term, but the long-term impacts are devastating. Health researchers in Canada have been warning since the structure of the reforms was announced several years ago that these issues were likely to arise. Now that their unheeded predictions are being borne out, it is time to listen to the community and return to a peer review system that supports and develops excellence and integrity in Canadian health research funding.

Is panel peer review perfect? Not by a long shot. But it is, as they say, “the worst system except for all others.” That said, if efficiency and flexibility are the principal concerns and we wish to explore review processes without face-to-face meetings, there is plenty of middle ground that could retain the features essential to quality review:

  • Reviewer assignment must be by experts, not algorithms
  • Mandatory synchronous discussion of grants through video conferencing
  • Virtual panels for “apples to apples” comparisons of the same grants by the same reviewers. Because the panels are virtual, they can include a much larger number of reviewers, and coverage of fields can be broader while specialisation remains fine-grained and flexible.
  • A strict requirement for scores to be justified with written reviews and discussion participation
  • A system that allows resubmissions and responses to reviewer feedback

Any changes should be truly piloted on a small scale and rolled out progressively and slowly. Perhaps the worst feature of the implementation was that it was so clearly rushed that technical and procedural issues snowballed and made a bad situation much, much worse. This is not a reflection on CIHR staff, who by all accounts have worked tirelessly to try to make this work, but on leadership that has rammed through a deeply flawed and narrowly conceived vision of grant review while ignoring well-grounded conceptual and technical objections from the research community.

We are talking about stability and trust in the entire Canadian health research enterprise. To improve or troubleshoot a system, you never change everything at once. What’s the rush?

[1] Peer Review Notes, May 2015, NIH Center for Scientific Review: http://public.csr.nih.gov/aboutcsr/NewsAndPublications/PeerReviewNotes/Pages/Peer-Review-Notes-May-2015Part5.aspx

[2] CIHR Guidebook for New Principal Investigators: http://www.cihr-irsc.gc.ca/e/27491.html; and The Art of Writing a CIHR Application: http://www.cihr-irsc.gc.ca/e/45281.html