How not to choose which science is worth funding

We may as well just pick tickets out of a hat

Mark Humphries
The Spike
8 min read · May 9, 2017


Credit: Pixabay.com

Recently, I got a grant rejected. Nothing unusual there. I mention this as a microcosm of the scandalously bonkers system for deciding what to fund in science. And to show that there is a much better system: roll some dice.

We entered one of the three rounds held every year by the UK’s body that funds research in biology (the BBSRC); we asked for a routine amount of money, to cover three years and two researchers. Rejection was always the most likely outcome. In this particular round, there was an unusually high success rate: fully 25% of proposals were funded. (Yes, this is considered high). So we had a 75% chance of not being funded. Duly, we were not funded.

Our feedback revealed we were ranked 30th out of 94 proposals. Squirrelled away in a long sentence at the top of the feedback was the context of this ranking: how many were deemed potentially fundable, and what our overall score meant. Let me quote: “your proposal was ranked internationally-leading, and one of 92 out of the 94 that were potentially fundable.” Or let me paraphrase: we, along with many others in the same round, produced what the funding panel and reviewers agreed was internationally competitive work, but didn’t get close to funding. Worse, 98% of all submitted proposals were worth funding. Ninety-eight percent.

And how did we get to this ranking? Let’s count the effort. We took around two months to write our proposal, with two people working on it. Again, nothing unusual in that amount of time. We got some pilot analyses, tested some ideas for feasibility; wrote and rewrote. Budgeted all the things we needed. Got feedback on the proposal and rewrote again. Filled in the internal administration forms for our university. Filled in the many extra sections, on top of the science bit, that the funding councils ask for (including, but not limited to: the summaries for the reviewers and the public; justifications for each item in the budget; a summary of the expected impact of the work; a plan for how to make that impact…). All for nothing.

Why is this bonkers? Well, let’s start by agreeing on the idea that scientists at universities should be doing scientific research: finding stuff out. Now, using our efforts as a ballpark estimate, let’s tot up the total time taken on this one, solitary round of funding, time that was not spent on research.

Perhaps ours was a bit labour-intensive, so let’s be conservative and say 6 weeks on average to write a complete proposal from scratch. Each proposal is reviewed by between 3 and 6 referees. Let’s call it 4.5 (with apologies to the referee we chopped in half). How long to review? Let’s say four hours for this kind of project grant. Then the panels: the meetings of senior scientists who decide who gets funded and who doesn’t, based on these reviews. Rules differ between funders in the UK, but roughly each proposal gets read by at least two members of the panel. They meet for two days.

(Mortgage applications are for similar amounts of money — £150,000 up to £800,000 is the typical range for this kind of three-year “project” grant. Imagine if every mortgage application had to go through this extraordinary process: no one would ever move house. The UK’s housing bubble would burst. Hmmmm. Wait there while I go call Halifax…)

So if each panel member gives 1 hour to each proposal, and 2 members to each, and 94 proposals, that’s 188 hours of their time (plus 2 days to meet and discuss it all). Besting that by an order of magnitude are the reviewers: 4.5 x 94 = 423 reviews, taking a total of 423 x 4 = 1692 hours. Finally, dwarfing all this, the researchers’ time to produce all these proposals: 94 x 6 = 564 weeks. Just for this specific round of funding. Which happens three times a year.
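
Totting that up in a few lines of Python (a back-of-the-envelope sketch using the figures above, nothing official):

```python
# Time consumed by one round of funding, using the estimates above.
proposals = 94
panel_hours = proposals * 2 * 1      # 2 panel members, ~1 hour each: 188 hours
review_hours = proposals * 4.5 * 4   # ~4.5 reviews x ~4 hours each: 1692 hours
writing_weeks = proposals * 6        # ~6 weeks to write each proposal: 564 weeks

print(panel_hours, review_hours, writing_weeks)  # 188 1692.0 564
```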

With just a 25% chance of funding, we need to submit four proposals to get one success (on average). And because you’re not allowed to submit the same proposal again, that means writing a new one each time. For us, that’s eight months of work to get one success. Eight months of not doing research. Add in the rest of our unsuccessful colleagues from just the same round of funding, and we get roughly 564 months of writing for the 94 of us to each land one success.

Or about 47 years of time. For one round, of one funding body, in one country.
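
The same arithmetic, for the cumulative cost (a sketch; I’m assuming 4-week months and 48-week working years):

```python
# Expected writing time for all 94 groups to each land one grant,
# at a 25% success rate (~4 proposals per success, on average).
proposals_per_success = 1 / 0.25         # 4
weeks = 94 * proposals_per_success * 6   # 6 weeks per proposal: 2256 weeks
print(f"~{weeks / 4:.0f} months, or ~{weeks / 48:.0f} working years")
# ~564 months, or ~47 working years
```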

Most funding competitions have much lower chances of success than 25%. Some UK-based competitions are below 10%. As are some NIH ones in the US. As are some Dutch ones. As are some EU-wide ones.

You’d think this would mean that many researchers have to spend practically every spare moment of their lives writing applications to get funding, just to be sure of getting one success. And you’d be right.

This is an absurd way to run a global business — government-backed research — with an annual turnover in the hundreds of billions. A business that is largely funded by the taxpayer.

It is absurd because writing proposals for research to be done in the future takes up more time than actually doing research right now. I don’t know about you, but personally I’d rather my tax got spent on doing research, not on writing applications.

It is absurd because UK universities, and others world-wide, use making grant applications and getting grant income as evidence that you are a “good” researcher. Yet the chances of getting funding are poor, even for the world’s very best scientists.

It is absurd because people are woefully inconsistent at rating quality. If you give a group of experts a set of proposals, they will rank them in a certain order. Give a different group of experts the same proposals and they will rank them in a different order.

(Here’s one example: the NIPS Consistency Experiment. NIPS is the primary machine-learning conference, attracting 2000+ attendees. It is ferociously difficult to get a paper accepted: NIPS aims to accept only around 20% of submissions. Wondering how well they were judging the quality of what they were accepting, in 2014 the organisers ran an experiment in reviewing. They had 10% of the submissions (166 papers) reviewed by two independent committees, then compared the two lists of accepted papers. The inconsistency was staggering: the two committees disagreed on 57% of the accepted papers. To put that another way: more than half the papers accepted by one committee were not accepted by the other, and vice versa. Getting selected was, in essence, random.)
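
For calibration, here’s a toy simulation (my own sketch, not part of the NIPS analysis; the ~20% acceptance rate is taken from the target quoted above): if two committees each accepted a random fifth of the 166 papers, how often would they disagree on what got in?

```python
import random

# Chance baseline: two committees independently accept a random ~20%
# of 166 papers. How much of one committee's accept list does the
# other committee reject, on average?
random.seed(0)
papers, n_accept, trials = 166, round(166 * 0.20), 10_000

disagreement = 0.0
for _ in range(trials):
    a = set(random.sample(range(papers), n_accept))
    b = set(random.sample(range(papers), n_accept))
    disagreement += len(a - b) / len(a)

print(f"chance baseline: {disagreement / trials:.0%} disagreement")  # ~80%
```

Pure chance gives roughly 80% disagreement on the accepted papers; the real committees managed 57%. Better than rolling dice, but not by much.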

This wild inconsistency means that our ranking is meaningless. So being ranked 30th in one list means we could be ranked 15th in another (yay, money!), and 60th in another (boo, definitely not money). Driving the ranking of all but the outlandishly brilliant and the obviously bag-of-shite proposals is a myriad of small things. Could be the details of the project are not quite clear. Could be the goal is seen as too ambitious. Could be it relies on an untested idea (as noted often, this tends to push science down conservative routes). Could be it uses a currently fashionable technology. Could be the panel member just doesn’t like you.

Yet, just as my experience (again) confirmed, and as the statistics gathered by the UK’s research funding bodies show over and over again, the overwhelming majority of proposals are fundable, are laden with potential to lead to interesting science, useful insights, baby steps on the road to new treatments and cures. If there were the money, we could fund so many wonderful ideas. But we cannot.

Instead we waste scandalous amounts of scientists' time on asking for money to do science.

Money they know they are not going to get, but have to ask for anyway, so they don’t get fired.

Scientist taking a more creative approach to funding their research. Credit: Pixabay

What’s the solution, when we’re bad at consistently ranking quality, the ranking is largely arbitrary, and practically everything is potentially great?

We should choose what to fund at random.

A recent editorial in mBio largely came to the same conclusion. And, as that editorial also outlined, this means two practical things. First, we need peer review only to weed out the crap and the batshit insane. Second, because we don’t need so much arbitrary detail on which to split hairs at a panel meeting, we can dramatically simplify what has to go into a proposal for funding. We need write only what our scientific goal is, what our plans are to get there, and roughly how we’d apportion the money.
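
To make that concrete, here’s a minimal sketch of such a two-stage lottery (the data shapes, function name, and per-lab cap are my illustrative assumptions, not the mBio editorial’s specifics; the cap is discussed further below):

```python
import random

def fund_by_lottery(proposals, budget, max_per_lab=2):
    """Two-stage funding lottery: triage, then pick tickets out of a hat.

    proposals: list of dicts like {"lab": "Smith", "feasible": True}
    budget: number of grants available this round
    max_per_lab: cap to stop any one lab hoarding awards
    """
    # Stage 1: peer review only weeds out the infeasible and the clueless.
    pool = [p for p in proposals if p["feasible"]]

    # Stage 2: draw winners at random, respecting the per-lab cap.
    random.shuffle(pool)
    awards, counts = [], {}
    for p in pool:
        if len(awards) == budget:
            break
        if counts.get(p["lab"], 0) < max_per_lab:
            awards.append(p)
            counts[p["lab"]] = counts.get(p["lab"], 0) + 1
    return awards
```

Every feasible proposal gets the same odds; the panel’s job shrinks to a feasibility check.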

We get back the time to do science. We don’t need to spend two months of effort, on a proposal of over 10,000 words, to make the choice. We don’t need to spend decades of collective research time just so that 100 labs can each get one three-year grant.

We don’t need in-depth reviews of every tiny aspect of the proposal; we just need to know: is the science even remotely feasible? And have the proposers got a clue?

There’s another reason: randomness keeps science healthy. New ideas and projects would get funded, providing variation in approaches and insights. It would mean no bubbles, no herd behaviour, no chasing the latest shiny baubles.

(And note: this does not mean the removal of long-term funding. The grants I’m talking about are judged on future potential. And if future potential is so hard to predict, why bother? But long-term funding is usually more heavily judged on track record: to fund or re-fund a centre or institute based on the performance of the people who work there; to give or extend a fellowship or investigator award based on the performance of the person in question.)

Doesn’t this risk all funding going to a lucky few? That some labs will be pulled out of the hat many times, by chance, and hoard the wealth? No: we simply cap the number of grants any one lab can hold, and spread the wealth. As I write, the NIH have announced they plan to bring in just such a cap. They plan to use a point-based system to stop a few labs from hoarding all the money; they plan to try breaking the Matthew Effect, stopping the rich-get-richer, poor-get-poorer inequality that pervades science as much as it does society at large. And now that even the funding behemoth of the NIH has recognised that the way science is funded is broken, why not take one small step further and ditch the arbitrary rankings?

Because ranking does not matter. Either I get a grant or I don’t. That we were one or one hundred places below the cut-off does not make any difference. As grant funding is already a lottery, why don’t we make it one?

Then scientists can do the science they’re paid for.

Twitter: @markdhumphries
