The problems of a ‘bad test’ for COVID-19 antibodies

Alex Freeman, Winton Centre
Apr 10, 2020

Everyone around the world is waiting for a COVID-19 antibody test that can finally allow those who have immunity (have developed antibodies to the virus) to leave lockdown and get on with work — especially key workers on various front lines.

But whilst governments are under intense pressure to ‘get on with it’ and buy test kits, there is a very good reason why ‘a bad test is worse than no test’. It comes down to what happens when a small inaccuracy is scaled up across a large number of people being tested.

A test divides people into those in whom it detects antibodies (who get a ‘positive result’) and those in whom it doesn’t detect antibodies (who get a ‘negative result’).

That means there are two ways a test can be wrong: it can miss people who actually have the antibodies (a false negative), or it can tell people they have the antibodies when they don’t (a false positive).

Tests need to state how often they get each of these right: the proportion of people with antibodies who are correctly given a positive result is called the sensitivity, and the proportion of people without antibodies who are correctly given a negative result is called the specificity.
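As a rough illustration of what those two numbers mean, here is a minimal Python sketch. The counts are made up purely for illustration: the 98% specificity matches the 2% false positive rate in the scenario below, while the 80% sensitivity is an assumption, not a figure from the article.

```python
# Sensitivity and specificity from the four possible outcomes of a test.
# All counts below are hypothetical, chosen only to illustrate the definitions.

true_positives = 40    # have antibodies, correctly told 'positive'
false_negatives = 10   # have antibodies, wrongly told 'negative' (missed)
true_negatives = 931   # no antibodies, correctly told 'negative'
false_positives = 19   # no antibodies, wrongly told 'positive'

# Sensitivity: of those who really have antibodies, what fraction does the test catch?
sensitivity = true_positives / (true_positives + false_negatives)

# Specificity: of those who don't have antibodies, what fraction does the test correctly clear?
specificity = true_negatives / (true_negatives + false_positives)

print(f"sensitivity: {sensitivity:.0%}")  # 80%
print(f"specificity: {specificity:.0%}")  # 98%
```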

So let’s see how these affect the results when testing a large number of people.

Imagine we test 1000 people (the front line staff at a hospital, say), and imagine 5% of them have actually had the virus. Our friends at graphic company Luna9 have helped us put together this graphic:

This is a scenario using realistic figures for the potential specificity and sensitivity, as well as the prevalence of the virus at the moment. You can see the problem: even a small false positive rate (2%), applied to a large number of people (the vast majority, who haven’t had the virus, on the right-hand side of the diagram), means that almost a third of the people told they have had the virus, and hence might be safe, actually haven’t had it.
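To see where ‘almost a third’ comes from, here is a rough reconstruction of that scenario in Python. The 1,000 people, the 5% prevalence and the 2% false positive rate (i.e. 98% specificity) come from the article; the 80% sensitivity is an assumed figure for illustration, since the exact value behind the graphic isn’t stated in the text.

```python
# Rough reconstruction of the scenario: 1000 people, 5% of whom have had the virus.
population = 1000
prevalence = 0.05      # 5% have actually had the virus (from the article)
sensitivity = 0.80     # assumed: fraction of true cases the test detects
specificity = 0.98     # i.e. a 2% false positive rate (from the article)

have_antibodies = population * prevalence            # 50 people
no_antibodies = population - have_antibodies         # 950 people

true_positives = have_antibodies * sensitivity       # 40 correctly told 'positive'
false_positives = no_antibodies * (1 - specificity)  # 19 wrongly told 'positive'

all_positives = true_positives + false_positives     # 59 positive results in total
share_false = false_positives / all_positives        # roughly a third

print(f"{all_positives:.0f} positive results, of which {false_positives:.0f} "
      f"({share_false:.0%}) are people who don't actually have antibodies")
```

The exact share depends on the test’s true sensitivity, but the pattern holds for any realistic value: the false positives come from the large group without antibodies, so they make up a sizeable slice of all the positive results.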

Of course, testing large numbers of the general population is really important to help us know roughly what proportion of people might already have had the virus, and even tests with these error rates would be able to give us those ballpark figures. But decisions about individual people shouldn’t be made from their test results, because at these levels of sensitivity and specificity that would put a lot of people at risk.

It is also important to note that if the percentage of people who have antibodies is a lot higher than the 5% illustrated here then that also changes the picture.

Fewer people down the right-hand side of the diagram and more down the left swing the balance in favour of more ‘correct’ results in the group of those told they have the antibodies (and hence potentially leaving lockdown). So confining testing to those who have already had symptoms, and hence are likely to have antibodies, would help, as the sketch below shows.
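A quick way to see this effect is to rerun the same arithmetic at different prevalences (again using the assumed 80% sensitivity and the 98% specificity from the scenario above):

```python
# How the share of false positives among positive results falls as prevalence rises.
sensitivity, specificity = 0.80, 0.98   # same assumed figures as above

for prevalence in (0.05, 0.20, 0.50):
    true_pos = prevalence * sensitivity            # per person tested
    false_pos = (1 - prevalence) * (1 - specificity)
    share_false = false_pos / (true_pos + false_pos)
    print(f"prevalence {prevalence:.0%}: {share_false:.0%} "
          f"of positive results are false positives")
```

With these assumed figures, roughly a third of the positive results are misleading at 5% prevalence; at 20% prevalence that drops to under a tenth, and at 50% to a few percent.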

In fact, it’s possible that some wider sample testing might show that a higher proportion than 5% of the population have the antibodies, putting us in that situation where the false positive rate is less problematic anyway. Until we try it we won’t know. But we must beware of taking a ‘bad test’ and using the results to determine who gets to leave lockdown and who doesn’t, because the implications could be bigger than they seem at first glance.


Winton Centre for Risk and Evidence Communication, University of Cambridge