Coronavirus: the key numbers we must find out

How many people got Covid-19? How many died? We just don’t know. And we’re not even trying. Let’s start fixing this.

David Bessis
15 min read · Mar 26, 2020

Every prediction you read about the size and duration of the coronavirus pandemic is based on epidemiological modelling. The underlying math is pretty simple. You don’t need to understand it, you just need to know this: once you set the values of a few basic parameters, predicting the timeline and magnitude of a pandemic can be done with a few lines of code.

In other words, if you want to build an interactive widget like this well-designed Covid-19 calculator, the hard part is the web interface and graphics, not the pandemic computations:

Gabriel Goh’s online pandemic calculator: pick your parameters and predict the impact

Simple math projects an appearance of control and predictability, but there’s a catch:

The model is worthless unless you get the core parameters right.

Some parameters have credible data-backed estimates, notably when they can be assessed by following a small number of patients: the length of the incubation period, the length of the hospital stay, the time from the end of incubation to death (for those who die) or recovery (for those who survive).

But for other parameters, we only have wild guesses. For example, as shocking as this may be, we simply don’t know how deadly Covid-19 really is.

The Case Fatality Rate (CFR) is defined as the proportion of diagnosed cases that end in death. It looks like a clean definition. It is not. Just look at the situation as of 2020-03-23:

  • in Germany, there had been 22,673 cases, for 86 deaths; CFR = 0.38%;
  • in Italy, there had been 63,927 cases, for 6,077 deaths; CFR = 9.51%.
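The arithmetic behind these figures is a single division per country; a minimal sketch using the numbers above:

```python
# Naive CFR: reported deaths divided by reported cases (figures as of 2020-03-23).
cases = {"Germany": (22_673, 86), "Italy": (63_927, 6_077)}

for country, (confirmed, deaths) in cases.items():
    print(f"{country}: CFR = {deaths / confirmed:.2%}")
# Germany: CFR = 0.38%
# Italy: CFR = 9.51%
```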

Does this mean that the virus is 25 times deadlier in Italy than in Germany? Of course not. It just means that the metric is broken. Among many explanations for the discrepancy, the most obvious one is that Germany has a broad testing policy, while Italy is overwhelmed by the epidemic and only tests the most serious cases: this artificially inflates Italy’s CFR.

Three issues put CFR estimates off-track:

  1. Under-diagnosis of non-fatal cases. This factor is reassuring and it is much talked about. Once we factor in the mild and asymptomatic cases that remain below the radar, the measured mortality rate will inevitably go down. Clearly Italy isn’t diagnosing all its Covid-19 cases. Even the most active testing countries such as Germany and South Korea are probably missing a good fraction of their mild cases.
  2. Under-diagnosis of fatal cases. This factor is very scary, and I’m surprised that it hasn’t been more widely debated. For Italy, China, Spain, Iran, France and the US (the six countries with the highest tolls as of today), there are compelling reasons to believe that official death counts massively underestimate the reality. We’ll explain why.
  3. Lag between diagnosis and death. If it takes two weeks from diagnosis to death, then we should divide the number of deaths as of today by the number of cases two weeks ago.
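Point 3 is mechanical once daily time series are available. A sketch of the lag adjustment on a simulated outbreak (all numbers are hypothetical: the 14-day diagnosis-to-death delay and the 2% true CFR are assumptions for illustration only):

```python
# Lag-adjusted CFR: deaths to date divided by cases diagnosed `lag` days earlier.
def lag_adjusted_cfr(daily_cases, daily_deaths, lag):
    total_deaths = sum(daily_deaths)
    lagged_cases = sum(daily_cases[:len(daily_cases) - lag])
    return total_deaths / lagged_cases

LAG = 14          # hypothetical diagnosis-to-death delay, in days
TRUE_CFR = 0.02   # hypothetical ground truth, for the simulation only

# Simulated outbreak: cases grow ~26% per day; each death occurs exactly
# LAG days after diagnosis, so deaths[t] mirrors cases[t - LAG].
cases = [100 * 1.26 ** t for t in range(30)]
deaths = [TRUE_CFR * cases[t - LAG] if t >= LAG else 0.0 for t in range(30)]

naive = sum(deaths) / sum(cases)                 # underestimates during growth
adjusted = lag_adjusted_cfr(cases, deaths, LAG)  # recovers the true 2%
print(f"naive CFR: {naive:.2%}, lag-adjusted CFR: {adjusted:.2%}")
# naive CFR: 0.08%, lag-adjusted CFR: 2.00%
```

While the epidemic grows exponentially, the naive ratio divides today’s deaths by a much larger, more recent case count, which is why it can understate the true CFR by an order of magnitude.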

Under-diagnosis of non-fatal cases, especially of mild cases, is a big issue because it isn’t quantified at all. There are positive scenarios where this factor is massive: some people predict that, once we account for all undiagnosed cases, we’ll end up with a real CFR around 0.5%, close to what is currently measured in Germany. In these dark times, everyone is hoping for the most optimistic scenarios, but we must admit that we lack the hard data to back them. There are reasons (notably the lag between diagnosis and death) to believe that Germany’s CFR is artificially low and will rise. South Korea is the country with the most pervasive testing and the most advanced societal response to the crisis: right now its measured CFR is at 1.42% and rising.

Under-diagnosis of fatal cases is possibly an even bigger issue, not just from a public health perspective, but also from a public trust perspective. There are ways to fix this. If governments don’t act quickly, it will become a major public scandal.

The core message of this article is:

We don’t know how many people have been infected by Covid-19.

We don’t know how many people have been killed by Covid-19.

We’re not even trying to find out.

Yet there are simple ways for us to get a clearer picture.

As the world has entered confinement on an unprecedented scale and for an unknown duration, with social and economic costs that we cannot even conceptualize, properly calibrating the core metrics of the pandemic is becoming a scientific emergency.

Wishful thinking feeds complacency (an extreme version of which is the “it’s just the flu” delusion). But irrational gloom and doom can also inflict unnecessary harm on societies. Right now, the level of uncertainty on the core metrics of the pandemic creates the perfect conditions for conspiracy theories, unhealthy politicization and bad public decision-making.

Western democracies should also be mindful of how the world will look back at their response. A big lesson from this crisis is that China underdelivered in terms of transparency but impressed with its decisiveness and ability to execute. By contrast, Western democracies squandered their two-month observation advantage and failed to properly execute on initial containment and mitigation. If they also underdeliver in terms of transparency, they may simply lose their credibility and relevance.

1. China’s numbers are impossible

Covid-19 deaths in China (source: Wikipedia, retrieved on 2020-03-24.)

In just a few months, China has produced massive scientific knowledge on Covid-19 that will benefit the rest of the world. But as far as epidemiology is concerned, there is one big problem: China’s numbers just don’t add up.

On 23 January, China put Wuhan in lockdown. In the following days, dozens of videos reached Western social media: crowded hospitals, dead bodies in the streets, body bags stacked in hospital corridors and inside vans. Early in February, there were many signs of stress on the funeral services in Wuhan. Crematoriums were working 24/7 and asking for extra body bags.

Let us put the numbers in perspective. Wuhan has about 10 million inhabitants and its metropolitan area is twice as big. Applying China’s average mortality rate, you’d expect about 200 deaths per day in Wuhan and 400 deaths per day in the metropolitan area. Of course there are day-to-day fluctuations, and funeral services are designed to cope with more than that.
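The baseline is back-of-the-envelope arithmetic, assuming a crude mortality rate for China of roughly 7.3 deaths per 1,000 people per year (an approximate public figure, used here only to check orders of magnitude):

```python
# Expected baseline deaths per day, absent any epidemic.
CRUDE_MORTALITY = 7.3 / 1_000   # approximate deaths per person per year in China

for name, population in [("Wuhan", 10_000_000), ("metro area", 20_000_000)]:
    daily = population * CRUDE_MORTALITY / 365
    print(f"{name}: ~{daily:.0f} expected deaths/day")
# Wuhan: ~200 expected deaths/day
# metro area: ~400 expected deaths/day
```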

  1. It is not credible that around 50 deaths per day could cause that much stress on funeral services.
  2. It is not credible that the dozens of videos of dead bodies in Wuhan’s hospitals and streets in January could represent more than a small fraction of the actual death toll.
  3. It is not credible that Fang Bin and Chen Qiushi could have observed what they documented in their January videos just by walking around a city of 10 million inhabitants experiencing fewer than 50 deaths per day.
  4. It is not credible that China could have taken such drastic action from January 23rd based on what were the official metrics of the epidemic back then.

China’s censorship is nothing new and I am not expecting their official numbers to be corrected anytime soon.

But my concern isn’t censorship. My concern is the poor methodology (and, in all likelihood, the intentionally poor methodology) that is behind China’s impossible numbers.

2. Deaths ‘caused by’ vs deaths ‘attributed to’

In fact, all countries are basically applying the same flawed methodology. A consequence is that we are massively under-reporting Covid-19 deaths, and we’re also maintaining an artificial bias in our understanding of the disease and its health impacts across distinct age groups.

What countries are reporting are deaths that are attributed to Covid-19.

A death is attributed to the virus when a patient dies after having been diagnosed with Covid-19, or when a post-mortem diagnosis is performed.

Not all deaths caused by Covid-19 are attributed to Covid-19.

If there is no doctor around to make a diagnosis, or no test is available, or the test is a false negative, then a Covid-19 death will not be registered as such.

Looking at the January videos from China, it is easy to understand why this approach is flawed: when hospitals are overwhelmed and cities are in lockdown, people do not reach hospitals; they die undiagnosed.

The death toll isn’t just underestimated, it is also biased: the victims who die in hospitals and are included in the stats may have different characteristics than the victims who die at home or on the sidewalk.

This phenomenon is called selection bias and it is pervasive in all the Covid-19 scientific literature. Both the scientific community and policy makers must keep in mind that everything we think we know about Covid-19 is possibly distorted by selection bias.

By design, all published clinical studies on Covid-19 are based on the patients that doctors have been able to follow, not the patients that they couldn’t follow.

Many early videos from China (and later from Iran and other countries) showed dead people on sidewalks, shopping bags in hand, suggesting sudden death possibly by cardiac arrest or neurological damage. Sudden deaths are likely to be missing from hospital stats. What if Covid-19 deaths in younger patients were much more abrupt than in older patients, as some reports have suggested? In that case, selection bias would create the illusion of a lower death rate in younger people than it really is.

3. Western democracies must open-source their raw mortality data

By contrast with powerful China, isolated Iran was an easy country to name and shame. Like China’s, its Covid-19 death count is impossible. Satellite pictures of fresh mass graves provided a much-publicized smoking gun.

But the death counts in Italy, Spain, France and the United States are equally doubtful. Like China, these countries report coronavirus deaths based on attribution: patients have to be diagnosed to be included in the count.

Wherever the pandemic is causing disruption to health systems, attribution-based death counts will be incomplete and unreliable.

There is an alternate approach to sizing the pandemic death toll and it should be implemented without delay. (The attribution-based approach should be continued and there are ways to improve its coverage. Despite its shortcomings, it has value. The two approaches will complement each other.)

The alternate approach is called incrementality. In digital marketing, incrementality studies are regularly used as complements to attribution-based measurements when trying to assess the impact of marketing campaigns. It would be shocking to hold public health to lower standards than those used in digital marketing.

Let me explain how the incrementality approach works by looking at the situation in Italy.

In 2019, about 647,000 people died in Italy, an average of 1,773 per day. On March 21, Italy reported 793 deaths attributed to Covid-19. Assuming that all deaths caused by Covid-19 were reported, you’d expect to see a bump of nearly 50% in Italy’s overall death statistics on that day, compared to last year. If we look at the data, we will see a bump that is clear and sizable.

How do you get an estimate of the true number of Covid-19 deaths? Well, look at the difference in total deaths in Italy between 2020-03-21 and 2019-03-21. This difference is a reasonably accurate estimate of the actual death toll of Covid-19 in Italy on 2020-03-21.
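The incrementality estimate is literally a subtraction on raw daily death counts; a sketch with hypothetical series (these are not actual Italian figures):

```python
# Excess deaths: total deaths this year minus the same-day baseline last year.
def excess_deaths(deaths_2020, deaths_2019):
    return [d20 - d19 for d20, d19 in zip(deaths_2020, deaths_2019)]

# Hypothetical daily totals for three days in March (illustration only).
deaths_2019 = [1770, 1780, 1775]
deaths_2020 = [2500, 2610, 2700]

for day, excess in zip(["03-19", "03-20", "03-21"],
                       excess_deaths(deaths_2020, deaths_2019)):
    print(f"2020-{day}: ~{excess} excess deaths")
```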

In Lombardy, which accounts for two thirds of reported Covid-19 deaths but only 15% of Italy’s total population, you’d expect to see a bump of 200% in total death statistics compared to last year, just based on attributed cases. Sadly, it is likely that the actual bump is much bigger than 200%.

This data is available to Italian authorities and it is too important to be kept secret.

The incrementality approach does have a few shortcomings, but this isn’t an excuse for not sharing the data:

  • By contrast with the attribution approach, the incrementality approach doesn’t tell you who died because of Covid-19: it just tells you how many people died.
  • For the analysis to be robust, you need the measured effect (the coronavirus death toll) to be greater than the natural random fluctuations occurring in the data. Many factors such as the weather, the seasonal flu epidemic, economics, and even social events such as high-stakes soccer games, will cause fluctuations (aka ‘statistical noise’) in the day-to-day death count. Unfortunately, there is zero doubt that in Lombardy (and in a growing number of regions in Italy, Spain, France, the US and many other countries) the signal-to-noise ratio is high enough for the analysis to be robust.
  • The incrementality approach will measure the net impact of the pandemic and all its side-effects. As health systems are collapsing, people will die for other reasons, e.g. because they cannot have a needed surgery or because they’re not resuscitated fast enough after a heart attack. These extra deaths will be counted in. Conversely, the confinement will reduce the number of traffic accidents, reducing the measured increment. But it could increase the number of suicides. All these effects will be blended together in the measurements. Yet it might be possible to separate the different factors based on death certificate data and cross-calibration across regions. In any case, the net impact of the pandemic is a meaningful metric that must be tracked.
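The robustness condition in the second point can be checked directly: compare the measured excess to the standard deviation of the baseline’s day-to-day fluctuations (all numbers below are hypothetical):

```python
import statistics

# Signal-to-noise check: is today's excess large relative to normal fluctuations?
baseline = [1740, 1810, 1765, 1790, 1755, 1820, 1770]  # hypothetical 2019 daily deaths
observed = 2600                                        # hypothetical 2020 daily total

noise = statistics.stdev(baseline)                     # 'statistical noise', 1 sd
excess = observed - statistics.mean(baseline)          # the measured signal
print(f"excess = {excess:.0f}, noise = {noise:.0f}, signal/noise = {excess / noise:.0f}")
# excess = 821, noise = 29, signal/noise = 28
```

When the excess is dozens of standard deviations above the baseline, as in this sketch, no plausible confounder (weather, flu, soccer) can explain it away.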

Of course everything must be done to improve the quality and reach of the existing attribution-based approach, but it is impossible to seriously track this pandemic without also monitoring the most complete and most readily-available data: the raw mortality data.

Italy, Spain, France, the United States and all significantly impacted democracies should immediately open-source their day-to-day raw mortality metrics, in real-time, with maximum geographic and demographic granularity. They should share this data for 2020, going back at least to 2019 and ideally to prior years, so that statistical noise can be estimated. Governments should also share any available ‘cause of death’ statistics.

My proposal may sound provocative, but I really think that this is their best option.

The only downside of transparency is that knowing the true mortality could scare the population. But honestly what do we have to lose? People are already in lockdown, the economy has already stopped. It may sound cynical, but telling the truth, especially if it is scary, might actually help with properly enforcing the necessary confinement.

In any case, if this data isn’t released now, the true death toll will eventually be known, when future incrementality studies are published. Sooner or later, demographic data will have to be published. After the European heatwave in 2003, demographic studies demonstrated that the impact had been bigger than initially recognized by authorities. The delay in acknowledging the truth proved very damaging for governments.

By not sharing this information in real-time, governments are delaying the scientific community’s ability to properly model this pandemic. They are delaying their own ability to make the right decisions, and they are opening the door for all sorts of conspiracy theories.

The stakes are too high. The disconnect between official tallies and what people see on social networks is too obvious. Compared to China, Western democracies have underdelivered on preparedness and decisiveness. They must beat China on transparency, or they will lose credibility.

4. Fixing the denominator: how to track mild and asymptomatic cases?

With current estimates of Case Fatality Rate, both the numerator (number of deaths) and the denominator (number of cases) are off-track. At this point we don’t know by how much they are off-track.

Fixing the numerator is urgent. As discussed, this involves actively looking for the bad news contained in the raw mortality records.

But it is equally urgent to fix the denominator. There might be some good news hiding there and we must actively look for it.

Even with South Korea’s active testing policy, it is likely that many positive cases are missing from its stats. What if only one third of cases are detected and the true CFR in South Korea is below 0.5%? What if Italy’s horrible death count is explained by a massive epidemic penetration within its population, with an actual CFR that remains low? If a reasonably durable immunity is acquired after exposure and recovery (which remains to be confirmed), that could mean that the natural peak of the epidemic wouldn’t be that far off.

Whether or not we’ll get good news, we must look for the answer.

Knowing the denominator and its time dynamics is also the only way to properly calibrate other key parameters of the pandemic, such as the propagation coefficient R0, which measures how many secondary infections result from a single infected patient.

The coefficient R0 isn’t a constant. It depends on many factors, such as cultural habits, urban geography and containment levels. The epidemic will grow exponentially as long as R0>1. Confinement aims at enforcing R0<1. How effective are the different flavours of containments, in different countries, different regions, different cities, at different moments? How much time does it take to assess that R0 is really below 1? And if it is substantially below 1, does that mean that we can relax the confinement? By how much?
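The threshold role of R0 can be seen in a toy generation model: each infected person causes R0 new infections one serial interval later (a deliberately simplified sketch that ignores immunity and the depletion of susceptibles):

```python
# Toy model: infections multiply by R0 at each generation.
def project(initial_cases, r0, generations):
    cases = [float(initial_cases)]
    for _ in range(generations):
        cases.append(cases[-1] * r0)
    return cases

print(project(1000, 2.5, 4))  # R0 > 1: exponential growth
print(project(1000, 0.7, 4))  # R0 < 1: decline under effective confinement
```

Four generations at R0 = 2.5 multiply cases almost forty-fold, while R0 = 0.7 shrinks them to a quarter, which is why even small errors in estimating R0 translate into wildly different projections.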

Unless we wait for a cure or a vaccine, the path out of confinement will require tight calibration of R0 and/or certitudes on where we stand on a potential trajectory toward herd immunity.

Right now we’re in the dark for the core metrics that will define when and how the current confinement will end.

How can we better track the dynamics of the total number of cases?

Scaling testing is clearly part of the solution. However, unless there is a plan to test everyone, we have to be careful of selection bias in how testing is done, especially as the selection bias may not be constant over time and regions. As more testing becomes available, selection biases will evolve, and it will become harder to understand the real dynamics.

One approach to prevent selection bias would be to test random samples of the population, on a daily basis. This isn’t very practical to execute at scale, especially at a time when people are under confinement.
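For reference, the statistics of random sampling are well understood; a sketch of the prevalence estimate and its normal-approximation 95% confidence interval, on a hypothetical survey:

```python
import math

def prevalence_ci(positives, sample_size, z=1.96):
    """Point estimate and normal-approximation 95% CI for prevalence."""
    p = positives / sample_size
    margin = z * math.sqrt(p * (1 - p) / sample_size)
    return p, max(0.0, p - margin), p + margin

# Hypothetical daily sample: 30 positives out of 1,000 randomly tested people.
p, lo, hi = prevalence_ci(30, 1_000)
print(f"estimated prevalence: {p:.1%} (95% CI: {lo:.1%} to {hi:.1%})")
# estimated prevalence: 3.0% (95% CI: 1.9% to 4.1%)
```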

Self-diagnosis offers more scalable approaches. Covid-19 has a number of symptoms, some of which are distinctive and easy to recognize.

Anosmia, the scientific name for losing the sense of smell, seems quite prevalent among positive patients. While people googling “anosmia” may just have read the recent press coverage on this, people googling “I can’t smell anything” are likely to experience the actual symptom. The following Google Trends analysis of search queries in the US shows a sharp increase from Mar 8–14:

This high signal-to-noise ratio suggests that anosmia is a specific symptom: wherever the pandemic is active, the majority of anosmia cases are likely to be caused by Covid-19.

Only a fraction of Covid-19 patients will experience anosmia. But if you know this proportion, you can recover an estimate for the number of people with Covid-19 from the number of people with anosmia. Assuming that the proportion is stable over time, a large-scale day-to-day self-diagnosis randomized survey would provide a cheap and scalable way to track the pandemic.
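The scaling arithmetic is simple. Assuming, purely for illustration, that 30% of Covid-19 patients experience anosmia, a survey-based estimate would look like this:

```python
def estimated_cases(symptom_reports, survey_size, population, symptom_rate=0.30):
    """Scale self-reported symptom prevalence up to disease prevalence.

    symptom_rate is the assumed fraction of Covid-19 patients with the
    symptom (hypothetical here; it would have to be measured clinically).
    """
    symptom_prevalence = symptom_reports / survey_size
    return population * symptom_prevalence / symptom_rate

# Hypothetical survey: 90 of 10,000 respondents report losing their sense of smell.
print(f"~{estimated_cases(90, 10_000, 60_000_000):,.0f} estimated cases nationwide")
# ~1,800,000 estimated cases nationwide
```

The estimate is only as good as the assumed symptom rate, which is why the proportion would need to be calibrated against confirmed cases and re-checked over time.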

[Apr 6 EDIT: my recommendation isn’t to use Google Trends to estimate the spread of the pandemic. Google Trends can detect hotspots but it is biased and cannot be an accurate epidemiology tool. Large-scale day-to-day self-diagnosis randomized surveys are statistically rigorous tools that can provide unique insights into core epidemiology parameters such as R0. This is detailed in a follow-up article which also provides practical implementation details.]

This is just an idea. Of course there are challenges: the virus may mutate, it may have different symptoms across different populations. Yet it is easy to deploy and can certainly be refined to be made more complete and robust.

What matters is to build protocols that are unbiased and consistent over time. We must immediately start collecting such data, as widely and as deeply as possible. Otherwise we will never be able to understand the true geographic and demographic dynamics of the pandemic.

This isn’t about optimism or pessimism, this is about learning the most basic facts about our new reality

How many people will die? How long will the confinement last? How deep will the economic damage be? Will we all lose our jobs? Will my friends and my family be harmed? Will I be harmed? What will ‘the world after’ look like?

In just a couple of weeks, these questions have become day-to-day preoccupations for billions of people. Never before have so many people faced so much uncertainty, on so many dimensions, so deeply and so abruptly.

In several conversations over the past few days, I’ve been asked whether I was optimistic or pessimistic. What do I think? What is going to happen next?

One thing is sure: optimism and pessimism are equally irrelevant today. We just don’t know enough.

The battle is raging and there is no time for posturing. Nobody cares about whether you think it’s ‘just the flu’ or ‘zombie apocalypse.’

Unless we quickly learn the most basic facts about our new reality, the only certainty is that we’ll make bad decisions that will inflict unnecessary harm on a lot of people.

Follow-up articles are now available.

David Bessis

Mathematician & tech founder. CEO of Tinyclues. Proved theorems in algebra & geometry. Helping marketers learn from their customer data.