More and more, academia is moving to an incentives model: researchers are increasingly asked to apply for grants to fund their research. Proposals judged to be “excellent”, “novel”, and so on are given lucrative sums, while the rest get nothing at all. The theory behind the incentives model, even if it is often unstated, is that these big prizes will encourage better academic research overall, as researchers compete for them.
Does the incentives model work? No.
There are two main reasons. First, even when a project’s merit is measurable after the fact, it is difficult to discern ahead of time. Second, small sums spread across many projects can spur more research overall than big sums concentrated on a few.
In fact, Daniel Lakens already wrote an excellent blog post about this in 2013, but since the system is not slowing down, I thought I’d gather the available evidence in one place so I can easily refer back to it. [Last update: 2017OCT17.]
Danielle L. Herbert, Adrian G. Barnett, and Nicholas Graves (2013), “Funding: Australia’s grant system wastes time”, Nature 495.
We found that scientists in Australia spent more than five centuries’ worth of time preparing research-grant proposals for consideration by the largest funding scheme of 2012. Because just 20.5% of these applications were successful, the equivalent of some four centuries of effort returned no immediate benefit to researchers and wasted valuable research time. The system needs reforming and alternative funding processes should be investigated.
Jean-Michel Fortin and David J. Currie (2013), “Big Science vs. Little Science: How Scientific Impact Scales with Funding”, PLoS ONE.
Impact was generally a decelerating function of funding. Impact per dollar was therefore lower for large grant-holders. This is inconsistent with the hypothesis that larger grants lead to larger discoveries. Further, the impact of researchers who received increases in funding did not predictably increase. We conclude that scientific impact (as reflected by publications) is only weakly limited by funding. We suggest that funding strategies that target diversity, rather than “excellence”, are likely to prove to be more productive.
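The “decelerating function” point can be made concrete with a toy calculation. The square-root impact function below is a hypothetical stand-in for illustration, not the curve Fortin and Currie actually fit:

```python
# Toy illustration (not from Fortin & Currie): if scientific impact
# grows as a decelerating function of funding -- here, hypothetically,
# the square root -- then many small grants beat one big grant.

def impact(funding_dollars: float) -> float:
    """Hypothetical decelerating impact function: impact = sqrt(funding)."""
    return funding_dollars ** 0.5

budget = 10_000_000  # total budget to allocate

one_big_grant = impact(budget)
ten_small_grants = 10 * impact(budget / 10)

print(f"1 x $10M grant:  total impact {one_big_grant:,.0f}")
print(f"10 x $1M grants: total impact {ten_small_grants:,.0f}")
# Under this toy assumption, the ten small grants yield sqrt(10) = ~3.16x
# the total impact, i.e. impact per dollar is correspondingly higher.
```

Any strictly concave impact function gives the same qualitative result; the square root is chosen only to keep the arithmetic transparent.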
Quirin Schiermeier and Richard Van Noorden (2015), “Germany claims success for elite universities drive”, Nature 525.
But the Excellence Initiative may not be separating the elites from the rest when it comes to the quality of research papers. Nature’s analysis shows that almost one-quarter of articles from Germany’s elites are now in the world’s top 10% by citations — up from one-sixth 12 years ago. Yet it also shows that some other German universities that received much less funding, or no top-up funds, have matched this rise.
Krist Vaesen and Joel Katzav (2017), “How much would each researcher receive if competitive government research funding were distributed equally among researchers?”, PLoS ONE 12(9): e0183967.
Despite its numerous benefits, such egalitarian sharing faces the objection, among others, that it would lead to an unacceptable dilution of resources. The aim of the present paper is to assess this particular objection. […] According to our results, researchers could, on average, maintain current PhD student and Postdoc employment levels, and still have at their disposal a moderate (the U.K.) to considerable (the Netherlands, U.S.) budget for travel and equipment. This suggests that the worry that egalitarian sharing leads to unacceptable dilution of resources is unjustified.
Brian Burgoon, Marieke de Goede, Marlies Glasius, and Eric Schliesser (2017), “Too Big To Innovate?”, The Dutch National Research Agenda in Perspective. (See also Rogier De Langhe and Eric Schliesser (2017), “Evaluating Philosophy as Exploratory Research”, Metaphilosophy 48(3): 227–244.)
We challenge, in particular, the existing bias toward identifying and awarding scholarly niches and national champions with large grants to ever tinier shares of the submitted proposals. We argue that this is wasteful spending and, when scrutinized, based on unrealistic assumptions about the nature of scientific research and the composition of the scientific community. […] The result is that the existing system of funding may have the perverse, if unintended, effect of discouraging originality and innovation.
Shahar Avin, “Science funding is a gamble so let’s give out money by lottery”, Aeon.
Science is expensive, and since we can’t fund every scientist, we need some way of deciding whose research deserves a chance. So, how do we pick? At the moment, expert reviewers spend a lot of time allocating grant money by trying to identify the best work. But the truth is that they’re not very good at it, and that the process is a huge waste of time. It would be better to do away with the search for excellence, and to fund science by lottery.
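Allocation by lottery is mechanically trivial, which is part of its appeal. A minimal sketch, with all names and numbers hypothetical:

```python
import random

def fund_by_lottery(proposals, budget, grant_size, seed=None):
    """Hypothetical sketch of lottery-based allocation: randomly select
    proposals to fund until the budget is exhausted. Every eligible
    proposal has an equal chance; no merit ranking is involved."""
    rng = random.Random(seed)
    n_grants = min(len(proposals), budget // grant_size)
    return rng.sample(proposals, n_grants)

proposals = [f"proposal-{i}" for i in range(100)]
winners = fund_by_lottery(proposals, budget=2_000_000,
                          grant_size=100_000, seed=42)
print(f"{len(winners)} of {len(proposals)} proposals funded")  # 20 of 100
```

Lottery proposals, including Avin’s, typically add a screening step before the draw to weed out clearly unsound applications; the point is that the expensive fine-grained ranking of the remainder is replaced by chance.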
Feel free to send me other empirical or theoretical research that is pertinent!
For references on arguments against meritocracy in academia more generally, consult Liam Kofi Bright’s #NoHeroes Reading List. Its themes are: (1) we are not good at detecting and rewarding merit, and (2) even if we could detect and reward merit better, institutions would not focus on this. For discussion of how the myth of meritocracy produces precarious labor in academia, read Robin Zheng’s “Precarity is a Feminist Issue” (2018).