The academic research funding system: as it is now

Coalfacer
Jan 8, 2019 · 13 min read


The system for evaluating, funding and applying research has barely evolved for decades. In many respects, the fundamentals applied centuries ago still hold sway today, with only modest tweaks, if any at all.

This paper examines the sources of funding used to pay for academic research, the metrics applied to allocate those funds and the perversities those metrics encourage.

The public purse: grant funding

Academic researchers rely on public grants to fund research. Despite dismal success rates and the very high cost incurred in preparing applications, grants remain a vitally important source of funding for research.

While each grant specifies eligibility and award criteria, the generic references they typically make do little to spotlight compatible and relevant research. The standard assessment criteria generally rely on citation status, or on the broader quantitative performance metrics that were introduced to replace citation as the sole metric. Each of these options is considered below.

The H-index

The h-index is an author-level metric that attempts to measure both the productivity and citation impact of the publications of a scientist or scholar. The index is based on the set of the scientist’s most cited papers and the number of citations that they have received in other publications.
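In concrete terms, a researcher's h-index is the largest h such that h of their papers each have at least h citations. A minimal sketch in Python (the citation counts are invented for illustration):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still supports a larger h
        else:
            break
    return h

# Five papers cited [10, 8, 5, 4, 3] times: four papers have >= 4
# citations each, but not five with >= 5, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```

A consequence visible in the code: the index can never exceed the number of papers published, which is one reason publication count and citation metrics correlate so strongly.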

Frequent publication with high citation occurrence continues to be fundamentally influential in the allocation of the public purse.

In order to have work published, researchers need positive results. That puts pressure on scientists to pick safe topics that will yield a publishable conclusion, or, worse, to misrepresent their research so as to improve the chance that it will be published.

In order to enhance publication status (and so enhance their prospects of success in grant rounds), academics have developed sophisticated (and not so sophisticated) techniques to maintain ranking status in peer-reviewed journals. While many achieve this through sustained, long-term effort, bibliometrics are widely open to gamification. The pressure to publish is frequently linked to the decline in the quality and integrity of publication practices. Meanwhile, research shows a high correlation between the number of papers a researcher has published and the number of citations received, which incentivises researchers to pursue this metric.

The very idea that long-term publication success is a fundamental necessity favours incumbents (persisting on their course) over innovators in fields where paradigm shifts are exciting for science but potentially disruptive to the industries that have been built on the existing principles.

What’s wrong with bibliometric funding criteria?

  • If the h-index were a worthy scoring system, it would follow that the best funded researchers were also the best published. It is demonstrable that financial input is not linked to bibliometric output in any measurable sense.
  • Citation metrics such as h-index, m-index or g-index, where used in isolation, can potentially be misleading when applied to the peer review of publication output, as they do not describe the impact, importance or quality of the publication.
  • Whilst the body of knowledge would benefit from negative results being shared, the system does not reward or facilitate that exchange.
  • The debate over reproducibility, and whether it constitutes a crisis or a call to establish a framework that would empower improvements in publication practices, evidences the extent of the problem with the current system as a measure of impact. The UK's experience with the Research Excellence Framework shows the magnitude of the task if reform is to be shaped through a political prism.
  • The cost of this approach endorses incumbents (Kuhn’s ‘normal science’) at the expense of innovation (Kuhn’s ‘paradigm shifting discovery’).

Why is it so difficult to let go of this metric?

The public purse is fearful of inexact science. Broader impact measures introduce factors that are not directly comparable, particularly where the results of the research may not be available within an election cycle. Hesitancy to accept research risk is indulged, in part, because the science community struggles to justify its pitch for funding to the average taxpayer, and to the politicians responsible for lobbying treasuries on its behalf.

Quantitative performance metrics

The shift to embrace blended impact metrics was made by policy makers as a means to be seen to reward impact, without any real compromise on the need for metrics capable of precise calculation. In the name of reform, many grants now count (easily countable) 'impact factors' in order to use a blended score in assessing funding applications.

For example, Australia’s National Health and Medical Research Council uses the following factors in assessing applications:

  • the number of citations of individual publications
  • success in obtaining peer reviewed grants
  • contribution to translational outcomes such as patents
  • commercial output
  • public policy or implementation of change in practice
  • invitations to conferences
  • mentoring, leadership and speaking engagements
  • numbers and types of prizes and awards
  • contribution to the research community

Publication count, citations, combined citation-publication counts (e.g., h-index), journal impact factors, total research dollars, and total patents now form part of the blended mix of readily countable metrics used in most awards.
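A blended score of this kind reduces, in practice, to a weighted sum of normalised counts. The sketch below is purely hypothetical: the factor names and weights are invented for illustration and are not drawn from any actual funding body's formula.

```python
# Hypothetical weights for illustration only; not any real body's formula.
WEIGHTS = {
    "citations": 0.30,
    "peer_reviewed_grants": 0.20,
    "patents": 0.15,
    "conference_invitations": 0.10,
    "prizes": 0.10,
    "mentoring": 0.15,
}

def blended_score(metrics):
    """Weighted sum of metric values, each pre-normalised to 0..1."""
    return sum(WEIGHTS[name] * metrics.get(name, 0.0) for name in WEIGHTS)

applicant = {"citations": 0.8, "peer_reviewed_grants": 0.6, "patents": 0.2}
print(round(blended_score(applicant), 2))  # -> 0.39
```

The design choice worth noticing is that every input must already be countable: anything that resists being normalised to a number simply drops out of the score.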

What’s wrong with the blended metrics system of funding criteria?

  • Many of the factors routinely used grandfather the attributes they were designed to remove. Ultimately, the well intentioned use of quantitative metrics may create inequities and outcomes worse than the systems they replaced.
  • Patent metrics rarely take account of the financial cost of obtaining or protecting patents. For the technology transfer offices that typically bear responsibility for deciding on protection, and whose budgets are limited, the failure to recognise the economic reality of pursuing these marks of status bears the hallmarks of Galton’s early work (which took no account of class in his assessment of genius).
  • The blended metrics system continues to incentivise grant applications for work that might be funded, rather than work that should be done for basic advancement of knowledge. Researchers are pressured to take incremental, safe steps toward applied outputs, instead of exploring breakthroughs in basic science.

Why is it so difficult to let go of this metric?

The blended approach offers no substantive improvement on efforts to assess research proposals against benchmarks for excellence in a field, innovation or potential. Instead, it merely diversifies the range of countable factors considered in assessing an application.

In order to move away from the safety of the precise calculations and neat charts that quantitative metrics offer, scientists are increasingly organising to lobby those in political power to make funding decisions with cognisance of the real characteristics of research.

Alt-metrics

Whilst politicians grapple with ways to relax their reliance on a precise accounting system, others are focused on broadening those calculations so that they reflect human influences (such as parenting absences — which have the natural effect of reducing a publication count over a recent look back period), and to incentivise moves between academia and industry (to redeploy the oversupply of PhDs).

The science community is looking for creative ways to address the need to demonstrate impact on a fundamental level. These efforts include:

  • formal initiatives, such as the San Francisco Declaration on Research Assessment, which recognises the need to improve the ways in which the outputs of scientific research are evaluated, and challenges the research assessment practices currently in place, especially the journal impact factor.
  • practical tools, like Impactstory and Depsy, which provide wider attribution of research contributions, and the Transparency Index, which could be included in the quantitative metrics count to hold publishers accountable for publishing poor research and, by doing so, offset some of the credit afforded to research shared via that channel.
  • the recognition of new proxies, such as the use of social media to share research in a form that can be measured as an indicator of success in the diffusion of research. For the funding assessment bodies considering these metrics, they serve as a proxy for willingness to engage outside academic circles, a willingness that is taken to signal a fit for commercial partnerships.

The new normal is proving difficult to embrace for those who exist in it.

In the context of the wider landscape

  • Demand exceeds supply to such an extent that it is difficult to incentivise researchers to prepare applications. Questions as to whether the value of research progress exceeds the cost of running the grant process are being asked.
  • The funding system is nominated by the research sector as the key structural issue and the area requiring the most urgent policy reform, ahead of issues arising from the fundamental uncertainty around the sustainability of the research workforce.
  • Even for successful grant applicants, this source of funding is insufficient to meet the cost of research programs.
  • Evaluating the success of publicly funded research is an important part of the cycle. It is not being done with any accountability so the results of such evaluations do not inform future decision making or risk allocation approaches.
  • The funding bodies are conservative to the point of timidity. Many researchers identify a need to be most of the way to solving a problem before it becomes eligible for grant support, and a pressure to undertake research that will produce good statistics rather than research that asks the questions that matter.

Even scientists with good ideas and a history of progress must now spend many, many hours applying for grants. Researchers have to apply for an average of six NIH grants to be awarded a single one.

When you apply for federal grants at a place like the National Institutes of Health the game is that you propose to do what you’ve just done. Everybody knows that, though they won’t say it.

At the National Institutes of Health, if you haven’t completed two thirds of your research, you’re probably not going to get a grant, because everything is so competitive and so cautious.

Government research is powerfully conservative. I’ve been an NIH researcher for decades, and to get an NIH grant today you essentially have to already have solved the problem in question.


Incrementalism has taken hold of the academic world, and it is a symptom of the way money is allocated. Because of budget issues, he said, the government tends to back projects that are sure things. But experiments where the outcome is already so predictable aren’t really that interesting.

Where research costs are not covered by a grant, or an application is not successful, other funding sources are sought, including philanthropy, industry, and impact and venture investors.

Where research is conducted in a university, education budgets are also repurposed. In Australia, it is estimated that 20% of research funding is sourced from surpluses ‘found’ in education budgets.

Endowments

In 2015, US endowments were valued at US$547 billion. Prestigious universities use endowments to support research programs. For Harvard, the endowment is made up of 13,000 funds and remains its largest source of revenue. Even with the endowment, Harvard still funds ⅔ of its budget with grant funding, gifts and student fees.

There are few institutions in a similarly privileged position. Most university endowments are modest and have governance challenges.

The balancing act for endowment managers is to fund the operating budget, supporting the purpose for which the endowment was created with a stable and predictable distribution, while maintaining the long-term value of endowment assets, after accounting for inflation, in perpetuity. The annual payout is around 5%, being the minimum amount required to satisfy US tax law.
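The arithmetic behind that balancing act is simple to sketch. The figures below are assumptions for illustration (a hypothetical $1B endowment and 2% inflation), not any institution's actual numbers:

```python
endowment = 1_000_000_000  # hypothetical endowment value, in dollars
payout_rate = 0.05         # the ~5% annual distribution noted above
inflation = 0.02           # assumed long-run inflation rate

annual_payout = endowment * payout_rate
# To preserve real value in perpetuity, investment returns must cover
# both the distribution and inflation.
required_nominal_return = payout_rate + inflation

print(f"annual payout: ${annual_payout:,.0f}")            # -> $50,000,000
print(f"required return: {required_nominal_return:.0%}")  # -> 7%
```

Under these assumptions, any year the portfolio returns less than about 7% nominal, the endowment shrinks in real terms, which helps explain why managers default to distributing only the minimum.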

While the 5% is a valuable resource for those with access to it, this capital can do more. The Ford Foundation offers a blueprint for the 95% being applied toward impact and mission led research allocations to further the purpose of the endowment whilst maintaining the capital base.

The debate around taxing these endowments continues to focus on the value of the tax shield rather than the transformative potential of this capital if it could be invested in vision aligned goals in research and education.

Philanthropy

Philanthropy represents 16% of the funding for UK university research.

Governments and multi-national institutions alike are increasingly encouraging philanthropic investment in research.

The transfer of decision control from central governments to the stewards of philanthropic investment committees is shifting the shape of the research landscape in a material way. The rules for deciding what’s important are becoming the domain of those with the ability to write cheques. Whilst government and industry research funding allocations come under regular analysis, debate and scrutiny, the impact of science philanthropy is comparatively overshadowed. Perhaps this is because the metrics used to allocate government funding (quantitative metrics in fields of national priority) and industry funding (research with potential for application) are more readily measurable, relative to a measure of impact against a public good objective.

An example of philanthropic success can be seen in the National Multiple Sclerosis Society’s focus on identifying, prioritising and shepherding translatable discoveries, which has played a pivotal role in bringing MS drugs to market. The Michael J Fox Foundation has supported 51 clinical trials in the Parkinson’s disease field. By supporting innovative pilot studies, investing in proof-of-principle/early-failure studies in drug discovery, educating patient communities, and developing centralised registries, these trailblazers have set the stage for smaller nonprofits to follow suit, strategically placing their funds where they are needed most.

The upside of philanthropic engagement in research is that its tolerance for political risk, and relative lack of red tape, means that it can support science that is ineligible for, or unsuccessful in, public grant rounds. It may also be available to support niche areas that fall outside national priority lists and receive no public grant support, as well as global challenges that need industry engagement but where the commercial returns on investment do not incentivise participation by other funding groups.

Optimal outcomes can be observed where there is an alignment of interests between those driving the investment and those undertaking the research, which brings patient capital and good governance to a research program.

Challenges in this field are around sustainability. There are no clear governance standards that signpost vision and values alignment. Governance questions around control of the research agenda, and around commitment to a research program, are being asked with greater sophistication by participants from all segments of the economy. Information asymmetry puts the funding committees at a disadvantage, relative to other groups, when it comes to scouting for research to back. As networks strengthen, these funders stand to leverage their capital against the assets of others in the ecosystem, improving investment decisions and capturing returns.

Philanthropists, who often have generated their wealth through corporate success, do not behave like government donors. When speaking of the Bill and Melinda Gates Foundation grant system:

It is not like an academic grant, where you are given the money and if one avenue doesn’t work you change direction or, if you find something a little bit more interesting, you do that.

Transparency about research governance, and about the commitments made by each group involved, is critical to identifying alignment between objectives and incentives. The scale of philanthropic investment in research is escalating the need to advance the discussion about how to standardise impact measurements and governance protocols so as to sustain this activity.

Indicators are that the majority of funding goes to the institutions topping the league tables, and is primarily given to support research in biological science. As the market matures, impact metrics should influence allocations to better identify where the capital is going to be most effective.

Industry

Private funders are in a unique position to fill funding gaps in the academic research pipeline. As competition for public funding blows out of all proportion, researchers have increased their efforts in courting industry partners.

Where this works, it is vision driven capitalism operating for mutual benefit. However, the field is fraught with governance issues that need to be addressed in a transparent, robust form.

Fast-growing, forward-looking companies realise that research is key to advancement. When it comes to research excellence, links with academic research teams are recognised for their potential to give industry partners a competitive advantage. For many businesses, collaboration also represents an opportunity to access facilities, data, materials and intellectual property. Where this works, the customers are repeat collaborators. Access to a pool of talented graduates for potential recruitment, the development of new techniques or processes that could enhance business efficiency, and de-risking investment in new areas of research are motivating factors commonly identified by industry as reasons for academic research collaboration. Collaboration also extends a firm’s network, enabling it to obtain a wider range of insights, unconstrained by the company paradigm.

Pharmaceutical and biotechnology for-profit companies are responsible for ~60% of research funding in that field, in the US. These industries invest in basic discovery but focus the bulk of their efforts on transitioning their own discoveries and those funded by federal governments into applied medicines.

There is mutual benefit to be realised, if the issues can be addressed in a meaningful way. Although funding is critically important, the substance of the issues runs deeper.

The costs of setting up collaborative programs, misaligned objectives and incentives, the difficulty of connecting with the right person, and the basis for allocating responsibility, risk and reward make this a field with narrow participation.

Typically, policy responses to these issues are fractured. Tax incentives, matched government funding and discrete transaction interventions do not meaningfully move toward a substantive solution.

The collaboration tax can be seen by academics as the imposition of tight milestones, managed timelines and go/no-go criteria. The restriction of the liberty to change scope and pursue side findings represents a shift. The reporting requirements increase the workload. For this to be a benefit to academics, the rewards need to justify the effort.

The risk of (actual or perceived) bias embedded in corporate sponsored research is strong. Little has been done to formalise governance protocols to protect against it or adopt relational contracting principles in approaching transaction structuring.

Studies of personal relationships between executives at leading universities and the sources of industry funding that flowed to their research teams show an increased convergence between the research fields of a university and the science fields of the corporations to which trustees are connected. There is evidence that the number of university trustees connected to science-based corporations positively influences the amount of R&D funding a university receives.

The academic research sector is ripe for revitalisation. It has strong assets and a motivated community. The challenges in the funding system are solvable. In our view, it starts with governance. From there, we can align assets with values to better design a sustainable research economy.

Sign up to Coalfacer to dive deeper into these issues, and receive our insights into how the research funding economy can better support research translation.


Coalfacer

create, fund and translate academic research • academic engagement at the coalface of industry