Open knowledge and knowledge exchange: balancing stakeholder requirements

Alick Deacon
Open Knowledge in HE
10 min read · Sep 3, 2020

UK universities occupy an odd place: not fully public sector, yet definitely not private. Government grants account for just 21.1% of total income (2017–2018), with 63% of that supporting research and 37% teaching. Higher education institutions (HEIs) therefore need to make up the rest of their income through tuition fees, endowments and private investments. This mixture of funding sources means HEIs are often pulled in two different directions when it comes to accountability and openness. Research, in particular, suffers from this problem. For research programmes funded through government schemes (usually through UK Research & Innovation, UKRI, and its constituent research councils), there are requirements to consider and report on the wider non-academic impacts throughout the entire project timeline. Up until March 2020, academics had to produce ‘pathways to impact’ statements in their initial research proposals before funding decisions were even made; at the other end of the process, they must report on the outcomes, outputs and impacts of the work for years after the project has ended. Even research findings that are purely academic are expected to be published in open access journals, enabling anyone to see what public money has funded.

Privately funded research, on the other hand, is often subject to non-disclosure agreements with industry partners, or other restrictions in legal agreements that prevent the open publication or other dissemination of commercially sensitive and proprietary knowledge and results. These projects are often classed as ‘knowledge exchange’, whereby academic expertise is translated to applications, and industrial business knowledge is used to commercialise research outputs into new products and services.

It is clear, therefore, that HEIs cannot instigate a blanket ‘open’ policy for all research — they rely on both public and private funding, each of which comes with different requirements. What makes the situation more complex, however, is that a significant proportion of research is co-funded by both public and private sources. Data from the Higher Education Statistics Agency (HESA), presented by Universities UK, show that in 2017–2018, 31.2% of knowledge exchange income (totalling £4.5 billion across UK HEIs) originated from public and third sector organisations. A further 36.1% came from ‘other’ sources, which includes collaborative research involving public funding. So while knowledge exchange by definition includes non-academic partners, many of the activities still rely on government grants.
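
As a rough, back-of-the-envelope illustration of what those proportions mean in cash terms, here is a minimal sketch using only the figures quoted above; the residual share is lumped into a single "remaining sources" bucket purely for convenience, not as an official HESA category.

```python
# Back-of-the-envelope split of 2017-18 UK knowledge exchange income,
# using the HESA / Universities UK figures quoted above.
total_ke_income_bn = 4.5  # total KE income across UK HEIs, in £ billions

shares = {
    "public and third sector organisations": 0.312,
    "'other' sources (incl. collaborative research with public funding)": 0.361,
}
# Whatever is left is treated here as one "remaining sources" bucket;
# this grouping is an illustrative assumption, not an official category.
shares["remaining sources"] = 1 - sum(shares.values())

for source, share in shares.items():
    print(f"{source}: {share:.1%} of £{total_ke_income_bn}bn "
          f"≈ £{share * total_ke_income_bn:.2f}bn")
```

On those figures, roughly £3 billion of the £4.5 billion either comes directly from public and third sector sources or sits in the ambiguous ‘other’ category.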

The fact that more than one third of knowledge exchange activity is classed in the somewhat vague ‘other’ group by HESA exposes another problem with the lack of openness around this type of activity. I have already discussed this in brief in the Medium post The dangers of assuming data are ‘open’ when using them to measure performance and success, where I noted that it is virtually impossible to measure an institution’s knowledge exchange metrics (for example, the number of external partners, the number of academic staff engaged, or the amount of KE income by source) without a dedicated role to do so, preferably at the discipline level. This person needs to understand the details of projects and the definitions of knowledge exchange in order to classify projects correctly. While my own bias in this opinion may be self-evident from the fact that I hold such a role, I believe the statistics I present support this view.

A sector-wide issue

While having such a role helps an individual HEI or department collect more detailed (and, hopefully by extension, more accurate) statistics, there remains the sector-wide problem of comparison between different HEIs, or even between different disciplines within a single university. This is a problem of standardisation: there is no agreed set of definitions of what constitutes knowledge exchange, a fact acknowledged in HESA’s own HE-BCI reporting.

This issue becomes even more pressing in light of the Knowledge Exchange Framework (KEF). According to the overview on the Research England website, the aims of the KEF are as follows:

“to increase efficiency and effectiveness in the use of public funding for knowledge exchange (KE) and to further a culture of continuous improvement in universities. It will allow universities to better understand and improve their own performance, as well as provide businesses and other users with more information to help them access the world-class knowledge and expertise embedded in English HEPs.”

In other words, the KEF is an exercise in benchmarking knowledge exchange activity in different HEIs against one another, providing transparency over use of public funding.

The framework itself has been under development for many years; it was first proposed in October 2017 by the then Universities Minister, Jo Johnson, and included in the Industrial Strategy white paper published in November 2017. Work since has included a consultation with HEIs to determine the most appropriate definitions of knowledge exchange and the best metrics for assessing these activities.

In January 2020, Research England published a report detailing the decisions on the first iteration of the KEF. This includes a list of data sources for the metrics under consideration, the vast majority of which are drawn from existing HE-BCI data tables. What is worrying from an ‘open’ point of view is that these data purport to be open, and yet even a cursory look at the tables available via HESA reveals alarming inconsistencies and gaps in the provision.

For example, while some statistics are provided on an individual HEI basis, others are grouped together on a regional basis. This means there can be no direct comparison between, for example, income from business and community interactions and income from collaborative research involving public funding. The former is amalgamated data for each UK country (England, Northern Ireland, Scotland, Wales), while the latter is given for each individual institution. As already mentioned, HESA themselves acknowledge that data for many metrics are subjective, as institutions may interpret the definition of terms differently.

Some of these potential discrepancies are acknowledged in the January 2020 KEF report:

27. The ‘in-kind’ contribution to collaborative research will be excluded from the first iteration of the KEF in response to concerns raised over variation in practice in the recording of in-kind contributions. This will be revisited for future iterations of the KEF.

The KEF website also acknowledges that the process of defining metrics is still evolving:

The major review of the HE-BCI survey announced by HESA is likely to provide new metrics related to knowledge exchange which could feed into future iterations of the KEF. We will continue to work closely with HESA on the review.

Addressing a national issue

As I discussed in my previous post, to be truly open the data should adhere to the FAIR principles: findable, accessible, interoperable and reusable. While HESA data generally do well on the first two of these, the HE-BCI data currently fall short on interoperability and reusability. In large part this is down to inconsistencies in definitions and the way data are self-assessed and reported by institutions themselves, with no independent verification. One of the most significant hurdles to overcome is that of confidentiality regarding industrial collaborations.

There are simple solutions to this. Universities nationwide could agree on a consistent definition of what constitutes a collaboration: does it require a legal agreement, an investment of funding by the partner organisation, a commitment of time by employees from both partners, or any combination of these things? These collaborations could then be broken down into sub-definitions for the type of engagement (consultancy, secondment, CPD and training, contract research, etc.). That would allow universities to tackle the single greatest problem in measuring the activity: collecting reliable and accurate data.

As I noted in my previous post, the act of labelling activity as ‘knowledge exchange’ often falls to research support staff, whose area of expertise is not in understanding what the term means. This is not to disparage people in those roles; they should not necessarily be the ones upon whom the burden rests. What matters is that someone is officially tasked with the job, and that they understand and adhere to the same set of definitions and principles, whatever department and institution they are part of. The data do not need to name industrial partners, or attribute financial transactions to them. Information on levels of engagement on a sector-by-sector basis would no doubt provide interesting and useful comparators, as would a breakdown by company size. But the most crucial thing is that the data are complete and trustworthy, not the current piecemeal representation that can be woefully misleading of the true picture.
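
To make the idea of shared definitions more concrete, here is a minimal sketch of what a common, partner-anonymous collaboration record might look like. The field names, categories and qualifying test are my own illustrative assumptions, not an agreed sector standard.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative engagement types, loosely following the sub-definitions
# suggested above. These names are assumptions, not an agreed standard.
class EngagementType(Enum):
    CONSULTANCY = "consultancy"
    SECONDMENT = "secondment"
    CPD_AND_TRAINING = "CPD and training"
    CONTRACT_RESEARCH = "contract research"
    COLLABORATIVE_RESEARCH = "collaborative research"

@dataclass
class KECollaboration:
    """One knowledge exchange collaboration, recorded without naming the partner."""
    engagement_type: EngagementType
    has_legal_agreement: bool      # is there a signed agreement?
    partner_funding_gbp: float     # cash contribution from the partner
    staff_time_committed: bool     # time committed by employees of both partners?
    partner_sector: str            # e.g. an industry sector code, not the company name
    partner_size: str              # e.g. "SME" or "large"

    def qualifies(self) -> bool:
        """A possible shared test for 'does this count as a collaboration?':
        any combination of agreement, partner funding or committed time."""
        return (self.has_legal_agreement
                or self.partner_funding_gbp > 0
                or self.staff_time_committed)
```

Recording engagements against a shared structure like this, rather than naming partners or itemising transactions, would allow sector-level and company-size aggregation while keeping commercially sensitive detail private.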

Risks: how does behaviour change in response to measurement?

Another risk facing the KEF is that it could fall into the same trap as its more established cousin, the Research Excellence Framework (REF). The REF aims to assess HEIs’ research capabilities, and is a significant determinant of research income through the ‘quality-related’ (QR) funding stream. It assesses research quality according to three main measures: outputs (typically publications in peer-reviewed journals), impacts (the wider socio-economic changes resulting from specific research projects), and environment (how conducive an HEI’s set-up is to producing high-quality research and researchers). The first full REF assessment came in 2014, though this was itself a rebranding and expansion of the previous Research Assessment Exercise (RAE). HEIs are currently in the final stages of preparing submissions for the next assessment in 2021.

There is already evidence of institutions gaming the system to boost their results from the REF. One could argue that if an assessment system were purely objective this would not be possible, and results would accurately reflect research quality regardless of attempts by the submitting institution to manipulate figures and narratives. However, there are ample degrees of freedom in how an organisation chooses which data to submit, and this flexibility has changed between the 2014 and 2021 exercises. For example, in 2014 not every member of research staff had to be submitted, so an institution could decide to submit only those it was confident had sufficient high-quality outputs or impact cases to boost its overall REF score. The trade-off, however, is that one of the headline measures published in the REF results is the overall research ‘power’, which uses the number of staff submitted as a multiplier: the higher the number of staff returned, the higher the score.

In 2021, the definition of research staff has been tightened, and all staff falling within the definition must be returned. This is expected to increase the number of full-time equivalent (FTE) staff submitted by around 43%. While this goes some way to preventing the same gaming as in 2014, it does not prevent HEIs taking more severe action against returning under-performing academics. For example, they could move those staff to teaching-only contracts, or remove them from the payroll altogether (i.e. sack them!). Even without such drastic measures, the mechanism for measuring outputs is still open to manipulation. As the sheer volume of publications over a REF assessment period (six to seven years) can be very high, an institution is required to submit just a selection of what it considers to be the best. The REF guidance document for 2020 includes the following stipulations for this selection:

205. Submissions must include a set number of items of research output, equal to 2.5 times the combined FTE of Category A submitted staff included in the submission.

207. The submitted pool of outputs should include:
a. A minimum of one output for each Category A submitted staff member […]
b. […] A maximum of five outputs may be attributed to an individual staff member […]

Many institutions have a high number of academics in many ‘Units of Assessment’ (which roughly align to discipline areas), and so have a great deal of flexibility regarding which publications to submit. This becomes even more complex when you consider that many publications are co-authored by several academics in a single research group, and so could be submitted for any one of those individuals. However, smaller institutions have less flexibility in this decision-making process.
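
A minimal sketch of the arithmetic behind those stipulations may help show why unit size matters. The 2.5× multiplier and the per-person floor and cap are as quoted above; the staff numbers are made up purely for illustration.

```python
def required_output_pool(combined_fte: float) -> float:
    """Outputs a unit must submit: 2.5 x the combined FTE of
    Category A submitted staff (REF guidance, para. 205)."""
    return 2.5 * combined_fte

def attribution_bounds(n_staff: int) -> tuple[int, int]:
    """Minimum and maximum outputs attributable across the unit's staff,
    given the floor of 1 and cap of 5 per person (para. 207)."""
    return n_staff * 1, n_staff * 5

# Hypothetical large unit: 40 staff at 1.0 FTE each
large_pool = required_output_pool(40.0)       # 100 outputs required
large_lo, large_hi = attribution_bounds(40)   # 40 to 200 attributable

# Hypothetical small unit: 4 staff at 1.0 FTE each
small_pool = required_output_pool(4.0)        # 10 outputs required
small_lo, small_hi = attribution_bounds(4)    # 4 to 20 attributable

print(f"Large unit: needs {large_pool:.0f} outputs, "
      f"can attribute between {large_lo} and {large_hi}")
print(f"Small unit: needs {small_pool:.0f} outputs, "
      f"can attribute between {small_lo} and {small_hi}")
```

The large unit can draw its 100 required outputs from up to 200 attributable items, and co-authored papers can be attributed to whichever author best suits the submission; the small unit has far less room to be selective.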

My point here is that even assessments that are largely metrics-based are open to manipulation by those under scrutiny, and so the existence of the KEF itself does not provide reassurance that the data on those metrics will become any more open. Perhaps it is my background as a physicist that leads me to conclude that this is somewhat analogous to the observer effect, in which the observation or measurement of something changes the phenomenon under observation.

Was creating the KEF the wrong kind of government intervention?

In conclusion, the KEF may be a step in the right direction for increasing the accountability of knowledge exchange activities between HEIs and their non-academic partners. But it is no silver bullet for the problems identified above. The metrics identified in the first iteration report are appropriate to its objectives; however, more work is needed to ensure they are defined and collected in a standardised way across different disciplines and different institutions.

Finally, to enhance accountability to the public, HEIs should be bolder in their approach to private industry. For too long, universities have taken the view that they need the investment of private funders more than those partners need the services, facilities and expertise that universities can offer. A shift in culture is required such that industry accepts that, by collaborating with organisations in receipt of public funding, their collaborative research is subject to the same scrutiny and openness as any other university work. Others, too, have discussed the need for businesses to recognise that they are the beneficiaries of publicly funded research whenever they collaborate with universities. This need not mean a loss of intellectual property rights or commercial advantage: one only has to look at high profile cases such as Apple vs Samsung to see that companies are already adept at rigorously defending their IP. What it does require is further government intervention in a form that private industry understands. Research and development (R&D) tax credits are one example already used to boost company participation in schemes such as Knowledge Exchange Partnerships. This model should be expanded further, so that companies become accustomed to it and see a genuine financial benefit both in engaging with universities and in the results of such engagement being open to all.
