Published in Wellcome Data

How the R number took over our lives — and what we can learn from it

This is a guest post for Wellcome Data by Gavin Freeguard, freelance consultant, based on interviews and research for a long read on the birth of the Covid-19 R number.

The emergence of the Omicron variant of Covid-19 has, no doubt, brought back many memories (and anxieties) from earlier in the pandemic — of our crash courses in armchair epidemiology and desire for the latest data, lockdowns (and the lifting of lockdowns), of cancelled Christmases.

But one thing it hasn’t brought back is our national obsession with the reproduction number, R, which dominated our lives for so long. It may still be ritually released and reported, but it has not taken over our political and personal conversations in the way it did towards the start of the pandemic. That makes it a good time to assess it and learn lessons from what it tells us about our pandemic response in general. This post summarises a longer read on the birth of the Covid-19 R number, which will be published in early 2022.

A slide from the Government’s press conference on 11 May 2020. Available under the Open Government Licence v3.0.

Where R comes from

R is a useful tool for understanding how a disease is spreading. It tells us how many people, on average, a single person with a disease will pass it on to. An R of 1 means each infected person passes it on, on average, to one other person: the disease is stable in the population. An R below 1 means the disease is in decline. An R above 1 means it is growing: with an R of 2, each infected person on average passes it to two other people, those two pass it to four, and so the doubling continues with each generation of infection.
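The arithmetic above can be sketched in a few lines of code. This is a deliberately simplified illustration (not any epidemiological model used in practice): it just applies R repeatedly to an assumed starting number of infections, generation by generation.

```python
# Hypothetical illustration only: how the expected number of new infections
# evolves generation by generation for a given R, starting from an assumed
# 100 infections. Real epidemic models are far more complex than this.
def infections_by_generation(r: float, initial: int = 100, generations: int = 5) -> list[int]:
    """Return the expected new infections in each generation under a fixed R."""
    counts = [initial]
    for _ in range(generations):
        # Each infected person passes the disease on to r others, on average.
        counts.append(round(counts[-1] * r))
    return counts

print(infections_by_generation(2.0))  # doubling each generation
print(infections_by_generation(1.0))  # stable
print(infections_by_generation(0.7))  # declining
```

Even this toy version shows why the difference between R just below and just above 1 matters so much: one curve shrinks towards zero while the other grows without limit.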

Behind this simple number is a huge amount of complexity, a colourful past, and a great deal of work. It has its origins in malaria: from a surgeon moving a water tank to stop mosquitoes bothering him, through divisive debates about the place of mathematics in studying disease, to being called z0 before becoming the more familiar R of today.

The number reported each week is the result of several groups of scientists building different models — representations of reality that try to understand the spread of disease and the effects of our attempts to fight it — since R cannot be directly measured. Going into those models are datasets including deaths, cases, hospitalisations, population data, symptoms reported via the NHS 111 online service, tests from donated blood, Google Mobility data about where your device is (and therefore where you are), swab tests, school attendance, surveys about our social lives and much else besides. Each model spits out its own estimate of R. Those estimates are then combined and analysed, discussed and debated by scientists, who come to a consensus view on what R is likely to be. Once signed off, that R number is published.
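The combination step can be sketched crudely as follows. This is an assumption-laden simplification: the model names and numbers are invented, and the real process involves statistical combination methods and expert judgement, not the naive pooling shown here. The published R is a range rather than a single value, which this sketch mimics by taking the span of the individual intervals.

```python
# Hypothetical sketch of combining R estimates from several modelling groups
# into a consensus range. All names and values are invented for illustration;
# the real consensus process is statistical and deliberative, not this simple.
model_estimates = [
    {"group": "model_a", "low": 0.8, "central": 0.9, "high": 1.0},
    {"group": "model_b", "low": 0.9, "central": 1.0, "high": 1.2},
    {"group": "model_c", "low": 0.7, "central": 0.9, "high": 1.1},
]

# Naive consensus: the full span covered by the individual intervals.
consensus_low = min(e["low"] for e in model_estimates)
consensus_high = max(e["high"] for e in model_estimates)

print(f"Consensus R range: {consensus_low:.1f} to {consensus_high:.1f}")
```

The point of the sketch is the shape of the process, not the method: many models, many estimates, one published range.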

After nearly two years of doing that, what lessons should we learn about what worked well — and what didn’t?

Too much weight was put on a single number

Rather than being the breakout statistical star of the epidemic, R should have been part of an ensemble cast. It is useful to have a simple, easily digestible number to help people start to understand what might be happening — but, as the University of Edinburgh’s Mark Woolhouse told parliament, ‘The focus on a single R… has been a distraction.’ R is an average, so it can hide variation between different settings (such as community spread versus care homes) and different groups (for example, different ethnicities); it is also a composite, inheriting the problems of all the data going into it. Other numbers, such as the growth rate (the change in cases from one day to the next) or information about the spread of Covid-19 in particular places (such as care homes), should have been more clearly communicated alongside it.
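For readers unfamiliar with the growth rate mentioned above, a minimal sketch follows. It uses the article's simplified description (proportional change in cases from one day to the next); the officially published growth rate is defined differently, as an exponential rate estimated from modelled data.

```python
# Illustrative only: the growth rate as the proportional change in new cases
# between consecutive days, per the article's simplified description. The
# official growth rate is an exponential rate estimated from models.
def daily_growth_rate(cases_today: int, cases_yesterday: int) -> float:
    """Proportional change in reported cases between two consecutive days."""
    return (cases_today - cases_yesterday) / cases_yesterday

print(f"{daily_growth_rate(1100, 1000):+.0%}")  # cases rising
print(f"{daily_growth_rate(900, 1000):+.0%}")   # cases falling
```

Unlike R, which describes spread per generation of infection, the growth rate speaks directly in units people experience: how much worse, or better, today is than yesterday.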

In their speeches and strategies, the government appeared to make R largely responsible for decisions about imposing and lifting restrictions. The government’s five tests included lowering R; the country’s five alert levels were set by it. Nothing would be done that would allow R back over one, the Prime Minister told the country in April 2020 — ‘keeping the R down is going to be absolutely vital to our recovery’. The Times noted in January 2021 that ‘No other leading nation hinged policies directly on an epidemiological statistic.’

We need greater transparency about how data is used in political decision-making

That said, for all the weight government put on R in its public and parliamentary pronouncements, it is unclear exactly how it was used in assessing its five alert levels, five tests, five indicators, four tiers, three tiers, three steps and much else besides in our attempts to live with and lift lockdown. Parliament’s Public Administration and Constitutional Affairs Select Committee said it ‘struggled to establish who the Government sees as accountable for the data underpinning decisions on Covid-19’ and criticised its lack of transparency about how data was being used.

Transparency was an asset to scientists throughout the pandemic — GitHub repositories, summaries, academic papers, even coding and calculation packages all helped inform and communicate the pandemic response — although there could have been greater openness, especially at the start.

Both science and politics need to think about their relationship with certainty and uncertainty

Politicians and policymakers often demand certainty, where science is about weighing up the evidence and its limitations. The scientific publications about R on the government website, GOV.UK, were full of nuance, ranges and limitations; the currency of ministerial communication is usually concision and clarity, even when the science — about a rapidly emerging new disease — was uncertain. Politicians need to become more comfortable with uncertainty.

But on the flipside, scientific structures and incentives — such as the long time it takes to publish articles — also slowed down the release of vital information, especially at the start of the pandemic.

We need to think more about where the data comes from, and invest in it

In the UK and elsewhere, reams of pandemic planning documents and several simulations discussed the importance of good quality data being rapidly available. But few properly thought about what infrastructure would be needed to make that a reality. Across government in the UK, emails, scraps of paper, reading things out from PDF documents and inconsistent information written in Microsoft Word documents were what passed for data infrastructure, rather than a usable, modern system.

As a recent select committee report put it, ‘For a country with a world-class expertise in data analysis, to face the biggest health crisis in a hundred years with virtually no data to analyse was an almost unimaginable setback’. While the industry and ingenuity of public servants in fixing this was impressive, it should not have been necessary. This is particularly true of a National Health Service which, in theory at least, is less fragmented than many others internationally. Several scientists I spoke to felt the UK wasn’t making enough of these advantages, and that nobody had made the case to the public about the insights that could be gathered from the collective use of their data. But the public are conspicuously missing from many discussions about how data could be used, as the controversy over the General Practice Data for Planning and Research (GPDPR) project showed.

The stories behind R

R may be an abstract number, but it has had a real impact on people’s lives. It dictated our movements — even if it wasn’t always clear exactly how — and dominated our conversations. The birth of the UK’s Covid-19 R number is an anthology of different stories: of the strengths and weaknesses of the scientific method and the interactions of science and politics, of the public thirst for information, of the trade-offs and nuances in distilling a hugely complex set of processes into one ‘simple’ number. There are good stories — of world-leading mathematical modelling and of rapid improvement — but also bad: the fact such improvement was necessary in the first place, the consequences of that (not having critical data at government’s fingertips at crucial moments) and some of the political decisions informed by R.

R may be abstract; one number that isn’t is the more than 160,000 deaths from Covid-19 in the UK and the impact on those they leave behind.

Focusing on the R number risks giving it its own personality, its own life and drive. But R is what we choose to make of it: the challenges in creating and communicating it, and taking action based on it, are the result of human decisions.

Gavin Freeguard

Freelance, gavinfreeguard.com. All things (usually government) data. Dataviz etc newsletter at twitter.com/WarningGraphicC.