Looking for research income?

Peter Howley
Jun 22, 2020 · 9 min read

How can Departments boost research income? How can individuals increase their chances of success? How much time and effort should individuals spend pursuing research income?

Most people would agree that success is, to some extent, a crapshoot, but there is likely to be more disagreement about how much luck is actually involved. Those of us who submit papers for peer review recognise that success at a particular journal depends, to varying degrees, on both the quality of the paper and sheer luck. I suspect that while quality (however defined) is important, there is even more luck, or perhaps randomness is a better term, involved when it comes to research grants.

Success often hinges on obtaining positive reviews from 2–3 reviewers, and we know from various studies that, to put it mildly, the inter-rater reliability (degree of consistency) between any two reviewers is limited, and the same is true of wider panels. It also seems likely that journal editors have greater experience in matching submissions to reviewers with relevant subject expertise than research councils do.

How can Departments boost research income?

In answering this, let’s first consider what Departments currently do. Internal peer review of proposals is perhaps the most common mechanism that Universities employ. The rationale is that going through internal peer review will enhance the probability of success of each individual application and, ultimately, the share of the overall funding pie coming to that University. The form of internal peer review varies greatly, from mentoring or coaching of junior colleagues’ applications, to more formal peer review by Departmental colleagues, to formal University-wide panels with the potential for demand management systems.

A reasonable rule of thumb might be that the more rigorous the internal scrutiny process, the greater the likelihood of success. A more pertinent, and more difficult, question is exactly how beneficial any particular approach is. This matters because any review process involves extra costs for submitters (with the potential for disincentive effects) and, of course, for the others involved in the internal peer review process itself.

As a way to look into this issue, I examined the overall percentage success rates of all UK Universities with the major research councils. The UKRI helpfully makes this information available, and the data for 2018–2019, which is the focus of this blog piece, can be found here. The degree of convergence in success rates across Universities was remarkable and suggestive of a largely random process at the peer review stage itself. If, for instance, we consider the 24 Russell Group Universities as a starting point, the overall mean success rate last year came to 28%. This seems a little higher than normal, but that point is not important for the purposes of this piece. What is important is that the success rates of the individual Universities were tightly clustered around the mean, with the vast majority within ±4% of it.

The advantage of looking at Russell Group Universities is that, with the exception of the London School of Economics (53 applications), in each case we have well over a hundred applications in total across all research councils (from 119 for Durham to 520 for UCL). Looking at the overall picture across Universities is much more informative than looking at any Department or Faculty in isolation, given the small numbers of applications involved and the associated variability and uncertainty. The highest success rates belonged to Bristol and Exeter at 34%. Cambridge and Oxford come in at 29% and 30% respectively, but so too do Leeds (29%), Newcastle (28%), Manchester (29%) and Sheffield (30%). These all employ different internal peer review processes, but it doesn’t seem to matter very much: the end result is largely the same, in that they all converge to the same point.
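
This tight clustering is exactly what one would expect if each application were an independent coin flip with the same underlying success probability, regardless of institution. A minimal simulation sketch, assuming a shared 28% success rate and hypothetical application counts spanning the observed range (the counts and seed are my illustrative choices, not the UKRI figures):

```python
import random

random.seed(42)

# Assumption: every application succeeds independently with probability 0.28,
# regardless of which university submitted it.
P_SUCCESS = 0.28

# Hypothetical application counts spanning the observed range (53 to 520).
app_counts = [53, 119, 150, 200, 250, 300, 350, 400, 450, 520]

rates = []
for n in app_counts:
    # Simulate n independent applications and record the success rate.
    wins = sum(random.random() < P_SUCCESS for _ in range(n))
    rates.append(wins / n)

# Even under pure chance, success rates scatter around 28% by only a few
# percentage points -- a spread comparable to the one in the real data.
print([f"{r:.0%}" for r in rates])
```

The point of the sketch is that clustering within a few points of the mean is not, by itself, evidence that internal review is equalising quality; sampling variation alone produces it.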

As an aside, a priori, I expected much greater variation in success rates simply on the basis of reputational effects. These may exist, but based on this dataset there is no evidence that the Oxfords and Cambridges of this world enjoy a substantive advantage, at least in comparison to other Russell Group Universities.

Rather than reflecting how random the peer review process can be, one could argue that this convergence simply shows that the average quality of proposals submitted is comparable across all these institutions, and that internal peer review helps ensure this. This argument does not hold up. Each University pursues its own strategy for internal peer review, and while these Universities all pursue research of a high quality, some are within the top 10 of the Times University rankings and others outside the top 200. In these circumstances, one would expect much wider variation in success rates.

As a further illustrative exercise, I examined the overall success rates of the next 24 (non-Russell Group) Universities with the highest number of applications. The number of applications by Universities in this category ranges from 43 (Bangor) to 149 (Lancaster). As one would expect with much smaller numbers of applications, there is more variation, with overall success rates ranging from 15 to 35 percent. Still, the overall mean is 25%. To the extent that the average quality of a research proposal is higher in Russell Group Universities, this would support the view that the peer review process discriminates between proposals according to objective measures of quality, but, and this is really key, seemingly not by very much. A 3 percentage point difference in mean success rates between the Russell Group and this non-Russell Group set is fairly trivial in the scheme of things. If the top Universities in the UK (many of which are world leading) have a mere 3% higher success rate than non-Russell Group Universities, then this again suggests that success in grant applications is more reflective of a random process than of one that reliably identifies the most innovative proposals.

What lessons, if any, can we learn from this?

Increasing research income in an era of ever-increasing competition is a difficult task, and I don’t envy the people managing this process. As part of efforts to generate a competitive advantage, it is becoming increasingly common, amongst Russell Group Universities at least, for investigators to have to navigate two panels (sometimes more): one at University level (often with multiple rounds of review) and one at research council level, each with its own idiosyncrasies. This places an extra burden not just on applicants, with the potential for very real disincentive effects, but also an administrative burden on others involved in the peer review process. We need to be as cognisant of these costs as we are of any benefits from the internal peer review systems in place.

On the whole, I would still argue that (light-touch) internal peer review may be a good thing, particularly for junior colleagues, but we need to take care to ensure that the process is facilitative and collaborative rather than prescriptive and judgemental. I have experienced both. If Universities are looking to gain a competitive advantage or rapidly change their fortunes in terms of overall success rates, internal peer review is unlikely to be a silver bullet: over the medium to long term, success rates will likely converge fairly closely to the overall average. Once a University is starting from a reasonable baseline (e.g. close to the overall average), it seems unlikely to be able to substantively change its fortunes, as the lack of consistency across reviewers in judging the merits of a particular proposal will always be a barrier. I think this last point is key and often missed or misunderstood.

While internal peer review panels (or any review, for that matter) will often provide interesting new ideas that may be helpful, the assumption that an internal panel or set of reviewers can consistently distinguish between proposals on some objective metric of quality is far too simplistic. In the same way that two reviewers will often assess the same paper or grant submission very differently, the same will be true of two distinct panels. This is all the more evident here because the pool of suitable reviewers at University level for many proposed applications will be constrained, which adds further randomness to the whole process. It is also easy to fool oneself with any uptick in fortunes after the imposition of a new internal peer review system, or any other policy introduced in response to a run of bad luck, when the change in success may be nothing more than regression to the mean.

As an aside, if Universities are seeking a competitive advantage, then strategies aimed at increasing the sheer quantity of proposals submitted will likely be much more successful than measures unduly focused on increasing perceived quality. I am not necessarily saying this is a good thing from an overall societal perspective, as it is of course a zero-sum game. It is instructive to note here that the correlation between the total value of research income and the total number of applications amongst the Russell Group Universities stands at 0.92, which suggests, at least amongst this group, little by way of diminishing marginal returns. This again suggests that anything, such as onerous peer review panels, which hinders people from submitting grant applications is likely to reduce the overall funding pie coming to a particular University.
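
For concreteness, that 0.92 figure is just a Pearson correlation between two per-university columns. A small sketch with made-up numbers (the application counts and income values below are illustrative, not the Russell Group data) shows how a near-proportional income-to-applications relationship yields a coefficient close to 1:

```python
import math

# Hypothetical per-university figures: applications submitted and
# total research income won (arbitrary units). Illustrative only.
applications = [119, 180, 240, 310, 400, 520]
income = [30, 48, 60, 85, 105, 140]

def pearson(xs, ys):
    """Standard Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Income rising roughly in step with volume gives a coefficient near 1,
# i.e. little sign of diminishing returns to submitting more.
print(round(pearson(applications, income), 2))
```

A correlation this high is consistent with income scaling almost linearly with the number of applications, which is the "buy more lottery tickets" logic in numerical form.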

How can people increase their chance of success?

Notwithstanding my earlier comments about the difficulty of obtaining agreement on the quality of a submission, quality will have an impact, so anything you can do to improve your application will help. This might involve sending the proposal to trusted colleagues working in the area for feedback and advice, particularly if you are at an early career stage. To the extent that quality matters, I would speculate that it is principally at the low end: reviewers are much more likely to reach consensus on what constitutes a weak proposal than to reliably discriminate between an excellent, novel proposal and one that is the proverbial run of the mill.

Once you get over a certain baseline threshold of acceptability, success or otherwise is, I would suggest, largely a crapshoot. Winning involves a big slice of luck, but of course the more lottery tickets (applications) you hold, the better off you will be. As an aside, I would make the same point to early career researchers looking to publish papers, particularly in economics, given what seem to be ever-falling acceptance rates. Don’t put all your eggs in one basket. There are also tricks of the trade, which senior colleagues can help with, as writing a proposal for research income is very different from writing a paper for peer review.

Should you apply?

This depends on individual circumstances and career strategy. A very simplified cost-benefit analysis might run as follows: 600 hours to write the proposal, get past the internal peer review systems and, if funded, develop two papers of 4* quality with the extra resources obtained (e.g. PDRA assistance), with the probability of success standing at 25%; versus 1500 hours to do the work yourself without any funding, also leading to two papers of 4* quality. This might be the case for projects that don’t involve data collection, where the main cost is time.
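
One way to make that trade-off concrete is to compare expected hours per pair of 4* papers. The split of the 600 hours below is entirely my assumption (the piece doesn't break it down): suppose roughly half is proposal writing and internal review, sunk on every attempt, and half is paper development once funded.

```python
# Back-of-the-envelope expected-hours comparison. The 600/1500 hours and
# 25% success rate are the blog's illustrative numbers; the 300/300 split
# of the grant route's hours is an assumption of mine.
P_SUCCESS = 0.25
HOURS_PROPOSAL = 300        # assumed: spent per attempt, win or lose
HOURS_PAPERS_FUNDED = 300   # assumed: spent only after an award
HOURS_DIY = 1500            # do the work yourself, no funding

# If each attempt succeeds independently with probability p, the expected
# number of attempts until the first success is 1/p (geometric distribution).
expected_attempts = 1 / P_SUCCESS  # = 4 attempts on average
expected_grant_route = expected_attempts * HOURS_PROPOSAL + HOURS_PAPERS_FUNDED

print(expected_grant_route, HOURS_DIY)  # 1500.0 1500
```

Under these particular assumptions the two routes come out roughly even in expected hours, which is why the decision ends up hinging on the other benefits and costs of holding a grant rather than on time alone.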

Of course, you may need funding to collect data, and there are a multitude of other benefits to obtaining research funding from an overall career development perspective, which will be the subject of a future blog post (and costs too, in terms of grant management). Having said that, the main point remains: it involves a careful evaluation of the pros and cons from an individual perspective, which will partly depend on the incentives in place at your particular University or Department. Having to navigate a ‘rigorous’ and/or time-consuming internal peer review process may, for some, act as a very real disincentive. I will come back to ways in which Departments and Schools can better encourage grant submissions in a future post.

You can find more details about my research, contact details as well as other blog posts here: https://www.peter-howley.com/


Peter Howley

Doing research well is hard. With a focus on early career researchers, Prof Peter Howley (economics & behavioural science) provides some helpful (hopefully!) tips.