What a Difference a Year Makes…
On July 15, 2016, around 60 Canadian health researchers convened in Ottawa at the behest of the Minister of Health, Jane Philpott, to hash out a solution to a broad loss of confidence in how the Canadian Institutes of Health Research was adjudicating grant applications. At stake was the “integrity and quality” of the so-called CIHR “reforms”. The severity of the issues with the first “Project Scheme” competition was, if anything, underestimated at the time. The results were published a few days after the meeting and represented an unprecedented nadir in success rates and, more importantly, an egregious loss of quality control in adjudication. Perhaps because of the latter, there has been no published analysis of the competition since (appropriately tagged as #PScream by @LisaPorter2 on Twitter).
A Working Group was then assembled and worked hard over the summer to devise improvements before the next Project Scheme competition in the fall of 2016. In many ways they succeeded, as the quality of reviewing did substantially increase. However, their two-stage approach nearly doubled reviewers’ workload. This wasn’t the fault of the Working Group: they laboured under many restrictions, including the need to produce a plan in time for implementation, and were told in no uncertain terms by CIHR that a return to the gold standard of face-to-face review was impossible. The machinery had been dismantled, the expertise lost, and the future was virtual review. Indeed, CIHR’s leadership persisted in the experiment and commissioned a panel of international experts to come to Ottawa in January (!) to ruminate on how peer review could be optimized. This panel, despite receiving multiple representations of the chaos engendered by the virtual review process, nonetheless recommended that CIHR largely stay the course:
The Panel was unanimous in concluding that a world-class system could be evolved from the important and innovative design principles that lay at the heart of the redesign that was attempted by CIHR. It is desirable to continue to follow this general path not only because of its rationale but because it can be incrementally developed and implemented progressively without great disruption. We would not recommend further reversion to the pre-2012 process that had real and perceived limitations.
The panel’s report was published in February 2017. By the beginning of April, the President and Vice President had left the organization and an interim President and VP had been appointed. Meanwhile, the second Project Scheme competition was in motion using the Working Group’s hybrid approach, in which virtual review was used to eliminate 60% of grant applications, followed by a second round of face-to-face review of the remainder (the applications were arranged in 35 or so “clusters”, an unfortunate term as it turned out). The cost of transporting this subset of reviewers, some of whom had seen only one or two of the original applications, was barely different from that of the old face-to-face panels, yet saving money on panels was part of the rationale for the reforms. Moreover, in its intent to reduce possible conflicts and expand the reviewer base (since four reviewers were now required per grant instead of the two or three in the pre-reform system), more international researchers were involved, with greater travel needs and expenses.
Anticipating that success rates for a competition already behind schedule were likely to be in the single digits, the interim leadership decided to bite the bullet: on May 5th it announced a 10-week delay to the next competition. This had two effects. Firstly, it bumped the funding needs for the next competition into the next fiscal year, releasing more funds for the competition in progress. Secondly, it meant that instead of the planned two open competitions per year, the Project competitions were, in essence, annualised. CIHR has expressed the intent to restore twice-yearly competitions, but this was a setback. The bold action effectively doubled the funds available: as a consequence, 475 grants were awarded along with 121 bridge grants (from 2,885 applications), a 16.5% success rate (not including bridge grants).
And today (July 10, 2017) the other shoe dropped, with a message that CIHR was abandoning virtual review of Project grants and returning to a panel-focussed process. The memorandum was clear: “grant applications will be reviewed by face-to-face panels, with no online or other prior evaluations”. These changes were developed over a short period of intense consultation and input from various stakeholders. While there will undoubtedly be more changes and refinements (and there will apparently still be another Foundation competition this fall), this major change buries the spectre of virtual review at CIHR.
Why did it fail? There are multiple reasons: lack of real-world testing, erroneous assumptions, lack of understanding, overestimation of the integrity and effort of reviewers once relieved of peer pressure, lack of anticipated funding, idealism, ill-placed optimism, and a lack of real consultation and response to criticism. Virtual review can certainly work (it is struggling along in a slightly different form in the Foundation scheme), but it was utterly irresponsible of a federal government agency to experiment with over $1 billion of taxpayer money and the careers of thousands of health researchers. Distributing funding based on the quality of applications received is the primary job of a funding agency, and CIHR failed at it miserably. This mess will take years to recover from, and some researchers have undoubtedly closed up shop as a direct result of this debacle. Others (largely invisible) will have decided this is not a career option for them and switched to other professions.
How might such train wrecks be avoided in future? Firstly, we have learned a lot, and so has CIHR. There is a real cost to the effective evaluation of research grant applications, but it is an essential and desirable cost. High-quality adjudication should never be sacrificed to fund an extra couple of grants, as doing so undermines confidence in them all. Secondly, the governance structure of CIHR enabled undue influence and information control by a small number of people. This flaw was recognized by the Naylor panel on Fundamental Research; hopefully it will soon be corrected. Thirdly, effective peer review involves some level of conflict of interest, but this can be managed. Excluding all conflicts excludes some of the most appropriate reviewers. Fourthly (and relatedly), reviewers are human and will minimize whatever extra work they must do. They are held accountable only by their peers, who keep them honest and diligent. No one wants to look like a lazy, biased pig to someone they know, let alone a room full, but anonymity can enable the worst in people. Fifthly, there was consultation, but it was largely lip service, resulting in only minor changes to the plan. Contrast that with the open-eared road trips of the interim leadership. If agendas are set in stone, they had better be word-perfect.
Lastly, in my opinion, the leadership at CIHR clearly misled others in pursuit of its single-minded agenda. The mantra seemed to be “the end justifies the means”. People within CIHR, including several Scientific Directors, were told to stick rigidly to the party line. Technical concerns raised by experts within the agency were often ignored, yet those experts worked hard to make a badly flawed system work. Some of those who left the agency (because they saw the calamity ahead?) still tried to right the ship. There are many unknown heroes who tried to mitigate the train wreck — who knows how much worse it could have been? We owe them a great deal, as we do the interim leadership, which has done a difficult job remarkably well. I’m told that on the first day on the job, the interim President spent the day shaking hands with each member of CIHR’s staff. Thank you to all of you!
And once CIHR is sorted, please consider doing something about the Common CV next!
