Alexander MacInnis
Sep 25, 2020

Why Rushing Release of a Coronavirus Vaccine Could Make Things Worse


Americans seem to have an intuitive understanding of the safety question. Here's what "safe" really means.


You’ve probably had a lot of conversations like this one.

You: “How are you and your family holding up with the COVID pandemic?”

They: “It stinks. It causes so many problems! I can’t wait for it to be over. As soon as we get a vaccine, it should get much better quickly. I think we’ll have one by the end of the year.”

You: “If we have one soon, will you take it?”

They: “Heck no! I’m going to wait and see how it affects everyone else.”

Is that overly selfish or an understandable tendency to watch out for yourself and your family first?

Consider the national polls from the Kaiser Family Foundation[1], CBS[2], and ABC News / IPSOS[3]. Kaiser’s poll, published September 10, 2020, found that 54% of Americans would not accept a coronavirus vaccine approved before the November election, roughly consistent across all political affiliations, and that 62% are worried that political pressure may lead to approval of a vaccine without making sure it is both safe and effective. A CBS News poll released September 6 found that 65% of voters say that if a vaccine were announced as soon as this year, their first thought would be that it was rushed through without enough testing; only 21% would get one as soon as possible, 58% would wait to see what happens to others, and 21% would never get one. An ABC News / IPSOS poll released September 20 found that 64% would be likely to take a “safe and effective” vaccine. But 69% did not trust President Trump to confirm the safety and effectiveness of a vaccine (53% had no confidence at all), and 52% did not trust Joe Biden or pharmaceutical companies to do so either.

Perhaps many people intuitively understand something that is almost never mentioned in the press, or even in most vaccine clinical trial protocols, yet is fundamental to biostatistics: unless you have enough test subjects and take the time to observe any serious adverse events that may occur, you cannot know how safe a vaccine is.

Think about what “safe and effective” really means. These are two different things. “Effective” in a clinical trial means that the study found statistical evidence that the vaccine reduced the incidence of COVID compared to a placebo. Thousands of volunteer test subjects are randomly assigned to get either the vaccine or a placebo, 50% to each. Over time some subjects naturally get COVID. If the number of new COVID cases is significantly lower among vaccinated subjects than among those who got the placebo, appropriately adjusted for time, that’s a signal that the vaccine is effective. The trials are generally looking for a vaccine effectiveness of 60%.
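As a simplified sketch of the “effective” signal, here is the basic arithmetic with made-up case counts, equal arm sizes, and no person-time adjustment (real trials use more careful statistics than this):

```python
# Vaccine efficacy = 1 - (attack rate in vaccine arm / attack rate in placebo arm).
# All counts below are hypothetical, chosen to illustrate the 60% target.
placebo_n, placebo_cases = 15_000, 100   # placebo arm
vaccine_n, vaccine_cases = 15_000, 40    # vaccine arm

efficacy = 1 - (vaccine_cases / vaccine_n) / (placebo_cases / placebo_n)
print(f"observed vaccine efficacy: {efficacy:.0%}")  # -> 60%
```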

But, there is no signal for safety. Safety is not an event that happens — it’s a finding that bad events did not happen. Let’s focus on what the trials call serious adverse events (SAEs) and medically attended adverse events (MAAEs), not the usual minor effects you expect from vaccines. Let’s refer to them together as SAEs for brevity. How can you know that a vaccine did not cause SAEs? The published coronavirus vaccine clinical trial protocols (so far, Moderna[4], Pfizer[5], AstraZeneca[6], and Johnson & Johnson[7]) are designed to look for and report all adverse events, including SAEs. If they don’t detect any SAEs when they have followed a certain number of subjects for a certain length of time, what exactly does that tell you?

To answer that, we need to use something called statistical power. Statistical power is the probability of detecting an effect if there is one. The coronavirus trials are designed with a specific power to detect vaccine effectiveness, but the first three published protocols make no mention of the power to detect SAEs. The Johnson & Johnson protocol, released September 23, 2020, laudably does have some information on this, although not calling it power.

To determine the power of a trial to find SAEs, you need to assume some rate of SAEs that you are looking for. What would be a rate that you would be concerned about? One in 1,000? 1 in 10,000? 1 in 100,000? 1 in 1,000,000? In the US, the novel coronavirus has already killed over 200,000 people out of a population of 331 million, currently about 1 in 1,626. And that is with only 2.1% of the population having gotten COVID-19 so far, so the number is likely to get much worse. 2.9% of the cases so far have died, about 1 in 34. Many more have long-term serious health effects of COVID. We expect a vaccine to be much safer than the disease it protects against because we give vaccines to very large numbers of healthy people. What rate of serious adverse effects do you think would be acceptable for a coronavirus vaccine? The answer has major implications for how long it would take to detect such an event rate if it exists — or indeed whether it is possible at all before widespread adoption of the vaccine.

Next, how good a job do we expect a clinical trial to do of finding at least one SAE if the vaccine has some assumed SAE event rate? If the vaccine actually does have whatever SAE rate we assume, then not finding any SAEs would be a false negative. We generally hold clinical trials and other medical studies to a standard of a false positive probability of 5%. A false negative probability of 5% corresponds to a power of 95%. What does it take to get that much power?

Let’s assume that we are interested in a rate of 1 SAE per 10,000 vaccinated subjects. If a vaccine has that rate, we want to know about it before releasing the vaccine to the public, whether via FDA approval or an emergency use authorization. If it has a greater rate, it becomes even more important to know as soon as possible. If it has a lower SAE rate, we are willing to accept some risk that we might not observe any SAEs in the trial. Let’s also assume for now that finding even one SAE in a trial is sufficient to conclude that the vaccine has a safety issue. This may be optimistic; some trial protocols specify looking for at least 5 serious adverse events in the vaccinated group vs. the placebo group.

Here’s where statistics come in. If a vaccine has an SAE rate of 1 in 10,000, which is the same as 0.01%, and you have some number of vaccinated trial subjects, then you can calculate the probability of seeing at least one SAE event. The probability comes from what’s called a binomial distribution. It’s not quite as simple as expecting 1 SAE in 10,000 subjects if the rate is 1 in 10,000; there’s a good chance of not seeing any SAEs. Think of it like rolling a die with 10,000 sides, one of which is a “1”, and you toss it many thousands of times, looking for a single instance of “1”. You can easily calculate all the numbers here yourself, using a free online calculator, such as this one[8]. If the SAE rate is 1 in 10,000 and you have 10,000 vaccinated subjects, the chance of getting at least one SAE is 63.2%; this is the power. The chance of not seeing any SAEs, a false negative, is 36.8%. Is that good enough?
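The calculation in this paragraph takes only a few lines of Python. The 1-in-10,000 rate is the illustrative assumption from above, not a claim about any actual vaccine:

```python
# Power to observe at least one SAE among n vaccinated subjects,
# assuming a true SAE rate p: P(X >= 1) = 1 - (1 - p)**n (binomial).
p = 1 / 10_000   # assumed SAE rate of 1 in 10,000
n = 10_000       # number of vaccinated subjects

power = 1 - (1 - p) ** n
print(f"power to see at least one SAE: {power:.1%}")  # -> 63.2%
print(f"false negative probability:    {1 - power:.1%}")  # -> 36.8%
```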

But, you might say, most of the clinical trials published so far plan to have 30,000 subjects. How much does that help? Subjects are randomized 50-50, so with 30,000 subjects, 15,000 are vaccinated. Plugging 15,000 into the calculation above, the power to observe at least one SAE goes up to 77.7%. Better; is it good enough?

If you have 60,000 randomized subjects with 30,000 being vaccinated, the power goes up to 95%. That sounds good. Coincidentally, this is what the Johnson & Johnson trial protocol says, too, although it does not use the word “power” for this statistic. See Table 5 in the J&J protocol.
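Running the same calculation in reverse shows roughly how many vaccinated subjects are needed for 95% power under the assumed 1-in-10,000 rate — a sketch, not a substitute for the trial protocols’ own statistics:

```python
import math

# Smallest n with P(at least one SAE) >= 95%, assuming a true SAE rate p
# of 1 in 10,000: solve 1 - (1 - p)**n >= 0.95 for n.
p = 1 / 10_000
target_power = 0.95

n = math.ceil(math.log(1 - target_power) / math.log(1 - p))
print(n)  # just under 30,000 vaccinated subjects
```

That matches the article’s figure of 60,000 randomized subjects, half of whom receive the vaccine.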

But hold on. These numbers of subjects assume the trials all run to completion before a decision is made. Early trial termination, or early approval or authorization, means deciding before the planned number of subjects get the vaccine and are followed long enough. The only subjects we can count toward the power to find an SAE are those who actually received both shots (or one, for the one-shot J&J vaccine) and were followed long enough afterward to see any SAEs that might occur. In the published vaccine protocols, subjects are followed for either two years or the entire duration of the study to look for SAEs, and for six months after vaccination to look for medically attended adverse events. Clearly, if you want to make a safety decision quickly, you can’t wait two years for adverse events. For purposes of a decision about approval or authorization, it might be reasonable to wait 28 days; that number is a guess chosen for convenience, and we may need to wait much longer to see whether the vaccine induces enhanced respiratory disease (ERD — the published protocols have more information on this). Those 28 days, or whatever waiting period we choose, start after the second shot for two-shot vaccines.

Just as important, clinical trials recruit subjects over time. When a trial plans to have, say, 30,000 subjects, not all are enrolled in the study on the first day; it can take weeks or months to recruit, enroll, and vaccinate them. And the better a job the manufacturers do of recruiting a diverse set of subjects representative of the population, the longer recruitment takes. Add up the time it takes to recruit subjects into the trial, the time it takes to give them two shots (or in one case, one) spaced weeks apart according to the specific protocol, and the 28-day wait to see any SAEs, and the subjects who count toward safety power must have been vaccinated long before the date you want to make a decision. To make an authorization before the November election, the vaccinated subjects you can count in testing for SAEs are those who were enrolled much earlier, perhaps two months earlier. Very roughly, a trial that planned to have 30,000 subjects might have enrolled perhaps 15,000 subjects, 7,500 of whom received the vaccine, in time to look for serious safety issues on such a timeline.

What, then, is the power to find at least one SAE, assuming they would occur at a rate of 1 per 10,000, when we can use only 7,500 subjects? From the online calculator, the power is only about 53%. That is, if the hypothetical adverse event rate is true, there are only slightly better than even odds of seeing even one SAE before authorization.

Keep in mind, the examples above assume that finding only one serious adverse event is enough to conclude that there is a safety issue with the vaccine being tested. That might not be realistic. If we were to require 2 SAEs before concluding there is a problem, then the power to find them goes way down. With 7,500 subjects available to look for SAEs, the power to find 2 SAEs is only 17%! With 15,000 vaccinated subjects and enough time to look for SAEs, the power is 44%. With 30,000 vaccinated subjects, the power is 80%. Better, but still not 95%. If we were to require finding, say, 5 SAEs before concluding there is a problem (see the vaccine protocols), the power to detect becomes close to zero if we assume a rate of 1 in 10,000. But if we assume a much greater SAE rate of 1 in 1,000, then the power is close to 100%. The more common a problem is, the easier it is to find, and vice versa.
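The lower powers quoted for requiring 2 (or 5) SAEs come from the binomial tail probability. A short sketch, again under the illustrative assumption of an SAE rate of 1 in 10,000:

```python
from math import comb

def power_at_least_k(n, k, p=1/10_000):
    """Probability of observing at least k SAEs among n subjects,
    if the true SAE rate is p (binomial tail probability)."""
    # 1 minus the probability of seeing fewer than k events
    return 1 - sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k))

for n in (7_500, 15_000, 30_000):
    print(f"n={n:>6}: power to find 2 SAEs = {power_at_least_k(n, 2):.0%}")
# prints 17%, 44%, and 80% for the three sample sizes
```

Raising `k` to 5 with the same function reproduces the near-zero power mentioned above.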

Importantly, none of this implies that any of the coronavirus vaccines actually do have any particular rate of SAEs. We cannot know until we do thorough safety testing, with high power.

So — what happens if the FDA approves or authorizes a coronavirus vaccine very quickly, and the power to detect SAEs is low… and only later do we find out that there is a substantial rate of serious adverse events? How would the public react? If a released vaccine actually does have an SAE rate of 1 in 10,000, when 150 million people have had the vaccine, we’d expect about 15,000 people to have serious adverse reactions. About 1,000 people would have serious reactions after only 10 million people had been vaccinated. Then what? Think of the majority of the population who wanted to wait and see how it affects other people — would they conclude it was worth it or not? Even if the actual SAE rate were 1 in 10,000, it would still be lower risk than getting a serious case of COVID. But many people, perhaps a majority of the population, might decide not to accept the vaccine. After that, another safer vaccine might be released. But would the public trust it? Any coronavirus vaccine might have an extremely low SAE rate or none at all. The problem is we can’t know until we study it properly. The different coronavirus vaccines under development have different levels of risk based on their technologies and histories.

If the scenario above were to occur — a substantial number of serious adverse events in the first wave of people to get the vaccine — that would lead to prolonging the pain of the pandemic. The story would go like this: first, rush to approve a vaccine without giving the trials enough time to have a high power to detect potentially serious reactions. Then, many people wait to see how the vaccine affects others. Then if problems start to appear, much of the population may decide not to accept the vaccine. They might not only reject the first vaccine but possibly also any subsequent, better vaccine. All the while, the novel coronavirus rages on, and we are all no better off mid-2021 than we are right now. The right thing to do is take the time to get it right the first time, rather than rushing a coronavirus vaccine.

1. Kaiser Family Foundation Health Tracking Poll, September 2020: Top Issues in 2020 Election, The Role of Misinformation, and Views on a Potential Coronavirus Vaccine.

2. “Voters skeptical about potential COVID-19 vaccine and say that one this year would be rushed,” CBS News poll, September 2, 2020.

3. ABC News / IPSOS poll, September 20, 2020.

4. Moderna coronavirus study protocol mRNA-1273-P301.

5. Pfizer coronavirus study protocol.

6. AstraZeneca coronavirus study protocol.

7. Johnson & Johnson coronavirus vaccine trial protocol.

8. Stat Trek Binomial Probability Calculator.

The Startup

Medium's largest active publication, followed by +773K people. Follow to join our community.

Alexander MacInnis

Written by

Epidemiologist and former engineer. Particular interest in autism. I see epidemiology as the science of seeking true answers to medical questions.

