There is a lot of junk out there
You’ve probably heard that antibody testing is key to reopening businesses and schools. Individuals who test positive are likely to be immune to COVID infection, and can safely return to work. Cities and counties with high rates of positive tests are similarly protected and less vulnerable to a second outbreak of the disease.
You probably also learned a new bit of jargon, that tests which detect anti-COVID antibodies in the blood are called “serology” tests. And it’s likely that you’ve read that many of these tests are junk, particularly the point-of-care rapid tests. What you haven’t learned is what separates good tests from bad ones and how to tell the difference. That’s because most health-care reporters call doctors for insight into health-care issues, which usually makes sense. But few doctors know anything about how diagnostic tests actually work. That’s not really their job.
But developing diagnostic tests has been my job in the past, and now it is my job to explain how they work — and how they don’t.
Not all errors are created equal
Few tests are 100% accurate. Some level of test error is to be expected. Obviously, fewer errors are better than more errors. But the type of error is also important.
Categorical tests (ones that give yes/no answers) have two types of error: false-positives and false-negatives. A test which says someone has anti-COVID antibodies when they don’t is returning a false-positive result. A test which says someone has no antibodies when they do is returning a false-negative. Sensitive tests have few false-negatives. Specific tests have few false-positives.
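These two definitions reduce to simple arithmetic on a validation panel. A minimal sketch in Python, using hypothetical panel numbers (100 known positives, 100 known negatives) purely for illustration:

```python
# Sensitivity and specificity from validation-panel counts.
# The panel numbers below are hypothetical, for illustration only.

def sensitivity(true_pos, false_neg):
    """Fraction of truly antibody-positive samples the test flags positive."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of truly antibody-negative samples the test reads negative."""
    return true_neg / (true_neg + false_pos)

# Suppose the test catches 95 of 100 known positives (5 false-negatives)
# and wrongly flags 2 of 100 known negatives (2 false-positives):
print(sensitivity(true_pos=95, false_neg=5))   # 0.95
print(specificity(true_neg=98, false_pos=2))   # 0.98
```

A sensitive test keeps the denominator's false-negative term small; a specific test keeps the false-positive term small.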
Sensitivity and specificity are a bit like a teeter-totter: you push one up and the other goes down. Test developers have to decide where to split the difference. You have to decide if the choice they made is appropriate for your testing goals.
If you want to know whether it is safe (or rather, safer) to go back to work or re-open a business or a city, you want a high-specificity test (few false-positives). You don’t want people to think they are immune when they are not.
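The reason false-positives are so dangerous here is that when few people actually have antibodies, even a small false-positive rate can swamp the true positives. A quick Bayes' rule sketch, with made-up prevalence and test numbers chosen only to illustrate the effect:

```python
# Probability that a positive result means real antibodies (Bayes' rule).
# All numbers are hypothetical, for illustration only.

def positive_predictive_value(sens, spec, prevalence):
    """Fraction of positive results that are true positives."""
    true_pos = sens * prevalence
    false_pos = (1 - spec) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# If 5% of the population actually has antibodies:
print(positive_predictive_value(sens=0.95, spec=0.95, prevalence=0.05))
# ~0.50 -- a positive result is a coin flip
print(positive_predictive_value(sens=0.95, spec=0.99, prevalence=0.05))
# ~0.83 -- better, but still far from certain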
If you are running a vaccine trial, you want a high-sensitivity test (few false-negatives). You don’t want to vaccinate people who are already immune. This will make your vaccine look more effective than it really is (and also puts people at risk for a vaccine-induced immune over-reaction).
Speed is the enemy of accuracy
Market research is a bit of a joke to diagnostic test developers. Any market survey will tell you that what customers want is a test that is 100% accurate, gives instant results, and costs nothing. Good tests deliver one of those, excellent tests deliver two, and no tests deliver three.
Just as you have to balance the trade-off between sensitivity and specificity, you will have to balance speed vs accuracy (let’s leave cost out of the discussion for now).
The gold standard for serology testing is the ELISA test, shown in the schematic below.
The advantages of ELISAs are that they can be very sensitive and specific, and they can give quantitative results. They can test one sample for several different parameters, such as antibody type, and for reaction against different COVID strains.
But ELISAs are slow and cumbersome. They usually require a blood draw, rather than a fingerstick, and must be run in a lab. Although they can be automated and multiplexed, it usually takes 4+ hours to get a result. We don’t have enough ELISA workstations and lab techs to run all the tests we need to reopen the country. Not even close.
That leaves the job to rapid tests. There are many rapid test architectures, but the most mature and well-developed is the Lateral Flow Immunoassay. These are sometimes called dipstick tests, and you are probably familiar with them as Drug Of Abuse tests or home pregnancy tests. Their architecture is sketched below.
LFIs are simple and fast. You put a drop or two of blood on a port, set it aside for 10–30 minutes and then look in the read window for a bar which signals a positive result (or its absence for a negative result). They cost about a dollar to make, can be made by the millions, and can be used by pretty much anyone, anywhere.
All that speed, convenience, and cheapness comes with issues. The big issues with LFIs are setting the cutoff, and the precision around that cutoff. In diagnostics, “cutoff” is defined as the level of analyte (the thing you are testing for) which gives a 50% positive rate (or negative rate, if you are a glass half-empty person).
Controlling the cutoff can be challenging. However high or low it ends up being largely determines your false-positive and false-negative rate. A high-cutoff test will suffer from false-negatives and a low-cutoff test will suffer from false-positives.
What is even harder to control is the imprecision around those cutoffs. The term of art here is the “Zone of Imprecision”. This is the range of analyte concentration that gives anywhere from 5% to 95% positives. Below that range, the test is reliably negative and above that range it is reliably positive.
Zones of imprecision for LFIs typically cover a 10-fold range of analyte. That’s a lot of uncertainty. The usual way of dealing with it is to move the cutoff to the low range of the expected analyte levels (resulting in a high-sensitivity/low-specificity test), or to the high range (resulting in a low-sensitivity/high-specificity test). You will have to pick your poison.
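One common way to picture this is to model the test's response as a logistic curve in log-analyte concentration. The sketch below is my own toy model, not any particular test's data: the steepness is chosen so that the 5%-to-95% positive band spans a 10-fold analyte range, as described above, and the cutoff value of 100 is arbitrary.

```python
import math

def positive_rate(analyte, cutoff, fold_range=10.0):
    """Fraction of tests reading positive at a given analyte level.

    Toy logistic model: the rate is 50% at the cutoff, and the
    5%-to-95% zone of imprecision spans `fold_range` of analyte.
    """
    # Steepness such that the positive rate runs 5% -> 95%
    # across a `fold_range` span centered on the cutoff.
    k = 2 * math.log(0.95 / 0.05) / math.log(fold_range)
    return 1 / (1 + (cutoff / analyte) ** k)

# At the cutoff, results are a coin flip; a sample 10x below the cutoff
# is reliably negative and one 10x above is reliably positive.
print(positive_rate(analyte=100, cutoff=100))   # 0.5
print(positive_rate(analyte=10, cutoff=100))    # well under 5%
print(positive_rate(analyte=1000, cutoff=100))  # well over 95%
```

Sliding `cutoff` down pushes more borderline samples into the "positive" side of the curve (more false-positives, fewer false-negatives); sliding it up does the reverse. That is the poison-picking in code form.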
You are on your own for now
Normally the FDA expects infectious disease tests to show at least 95% sensitivity and specificity before giving marketing approval. But these are hardly normal times. Stung by criticism that they delayed approval of PCR tests for active infections, they whipsawed in the other direction and gave emergency approval to some pretty crappy serology tests. That’s not a criticism, just reality — they are in a no-win situation.
The upshot is that you will have to decide what you need from testing: speed, sensitivity or specificity. You are not going to get all three. I hope this summary gives you the tools to make an informed decision. Good luck.
Disclosure: I was the CSO at MicroPhage, Inc, which developed the first rapid test to detect Staph aureus bloodstream infections. I currently consult for companies developing in-vitro diagnostic tests, including some which are developing COVID-related tests.