Is there an intelligible “anti-vaxx” position?

Gautam Tejas Ganeshan
Jun 23 · 69 min read

[UPDATE: This article has been retweeted by the co-founder of Wikipedia, by an Ivy League university press, by the scientist with the relevant TED talk, and by the author of a NYT bestselling book on depression. For more on what the response has been in general, see “Updates and Responses”.]

I am not an “anti-vaxxer”. This is *not* my pet issue.

I am not against the promotion of vaccination as one element of a good overall public health policy. I am not against their sensible use, especially considering the variety of circumstances people find themselves in worldwide. I do not think that they “don’t work.”

Do pesticides “work”? Sure — they do what they say they’re gonna do, more or less. Does that, in itself, justify using them? Maybe. It’s a matter of tradeoffs, priorities, and context.

I’m seeing a *lot* of *anti* “anti-vaxx” sentiment lately — particularly online.

I also noticed that recently, in short succession, all of the following prominent online entities publicly changed their policies on “anti-vaxx” content: Amazon, Facebook, YouTube, Twitter, Pinterest, GoFundMe. (As well as more recently the Huffington Post, and even MailChimp.)

They claimed to be “cracking-down” on the spread of harmful misinformation — all within a few weeks of each other, as I recall. What they did was remove, downregulate, or demonetize content; “blacklist” search terms (i.e. block *any* results for such terms as “vaccine”); and even ban books (!). Online opinion on this — and even reporting — tended to be laudatory and untroubled, as far as I could tell, conveying a sense of “Finally…”, and downplaying accusations of censorship or bias.

In trying to figure out what’s going on in all this, I’m struck by certain important angles on the topic that deserve more sober attention than they are getting. So I thought I’d add a few pieces to the puzzle here.

Caustic criticism of the “anti-vaxx” position is usually of a straw man — i.e. an artificially weak version, easy to ridicule, that doesn’t capture what’s really going on.

My sense is that this is either out of unfamiliarity — i.e. ignorance that there even *is* (or *could be*) an intelligible, articulable position (instead of just “misinformation”); or else the vehemence is a deliberate rhetorical move, justified by self-assurance of having the moral high ground. In some cases the result is that folks apparently feel just fine laying down highly mean-spirited comments.

Whereas the presence of vitriol in a discussion does not serve any public health need that I can see…

Are we strong enough to “steel-man” the “anti-vaxx” position, and see what it says at its best, before dismissing it?

Let’s see.

(Note: The first four points here are discussions of principle, roughly speaking, and thereafter the paper switches gears and drills down to consider things in detail.)

1.

Even if you’d prefer there weren’t, or are sure that there *shouldn’t be*, there nonetheless *is* some kind of continuously unfolding public debate here.

Vaccination is in the news because of a seemingly intractable minority of folks who for their own reasons — some spurious & insupportable, others worth understanding — buck what is widely taken to be the given knowledge of mainstream science and public health policy.

Lately there have been measles outbreaks in the USA, resulting in alarm, widespread censure of what has lately been labeled “vaccine hesitancy” by the WHO, and — perhaps most consequentially — highly contested legislation.

In his signing statement for bill SB 277, which ended personal belief exemptions in that state and was enacted into law in 2015, former California governor Jerry Brown writes that the law:

“…has occasioned widespread interest and controversy — with both proponents and opponents expressing their positions with eloquence and sincerity.”

“Eloquence and sincerity”?

Is that true, in your experience?

If so, you might not know it from how the subject is typically discussed online!

I think there’s room for *even more* “eloquence and sincerity”…

But anyway, my first point is that there *is* controversy, clearly — and as I already said, this is true regardless of how sure *you* are that there *shouldn’t be*.

So… Why?

Nothing but a confrontation between ignorance and intelligence?

I wonder whether that amounts to the same thing as saying:

“If only everyone knew as much as I do, there wouldn’t be a debate.”

Hmm.

2.

We might note right out of the gate that simply the removal of an option for the exercise of “personal belief” *in itself* ought to put this issue on the map for civil libertarians. (The ACLU submitted a letter to the CA senate saying that it “regrets to inform you of our concerns re SB 277.”)

It won’t do to compel injections in a free society, any more than to compel sterilization or compel euthanasia, regardless of anybody’s views about public health.

This would clearly constitute “tyranny of the majority.”

If a domain were not specified, what would you say about removing *in general* — and *by legislation* — an option to exercise “personal belief”?

It doesn’t take outlier creativity to come up with an alternative, analogous scenario — one in which *you* would not want to be coerced into receiving an injection yourself (let alone dozens of them), predicated upon a prevailing view held by society, justified and enforced with howsoever much certainty and consensus, but one that *you*, for whatever reason, had cause to question or simply be conservative about. (And by the way, consensus is not the driving force behind good science. Neither is certainty, for that matter.)

A right to bodily integrity has got to be up there on the list of essential freedoms — and it cannot be denied that vaccination interrupts the body’s integrity. It pierces the skin.

It is undoubtedly a medical intervention. So it has got to remain voluntary. This is a bright line.

The patient *must* have the right to allow or disallow this (i.e. “informed consent”), or else we become a society that does not treat self-ownership as an inviolable premise, and that does not recognize the individual’s sovereignty over himself or herself.

“My Body, My Choice” — a compelling slogan, some would say, when it comes to a woman’s right to choose not to give birth.

But how about when it comes to a person’s right to choose not to receive injections? In that case nothing but an irresponsible taking matters into one’s own hands?

And so, what instead — “Your body, Our choice”?

Forcing injections is an extremist position.

The moral incontrovertibility of this was definitively established in the Nuremberg Code.

3.

Children are obviously a special case.

The family is the appropriate locus of medical decision-making.

Parents and legal guardians are obliged to (and should, I believe, feel honored and humbled to) make consequential decisions for their children. This is not news, and deciding about which medical interventions are appropriate is not essentially different from many other decisions required over the course of a child’s life — in that all of them affect not only the family but also others & the society at large. (Think of behavior, which can veer into antisocial pathology just as surely as the body can be a vector of pathogenesis.)

To claim that intervention by the state — or even by private medical professionals — should take precedence over the family’s natural jurisdiction here is a dramatic and worrisome proposition.

It is relevant to consider that family social workers typically do everything they can to keep children in their own homes, to encourage kinship care, to support family preservation, and in general to value “family-like settings” — all in the knowledge that foster care by the state is *not* preferable to these, but is an option of last resort.

I do not doubt that the majority of people who choose not to vaccinate themselves or their children do it in good faith and for reasons that are compelling to themselves. It is not easy to make decisions regarding the health of one’s child. The stakes are high, and as a parent, one is constantly put in mind of one’s fallibility.

Consequently, I am suspicious of blanket aspersions directed at *all* such people — that they are *all* willfully ignorant, stubborn against science, “holier-than-thou.”

Some are, surely. They’re the straw men — but they’re real enough, as a minority of the minority. They’re indeed highly ignorant — and they produce regrettable, easily mockable statements in support of their resistance.

What about the rest?

Maybe they’re refraining from comment, cowed into silence because of high ambient hatred — as a consequence of, and in proportion to, the presumptive conviction that their “personal beliefs” are retrograde, dangerous, low-resolution, and no better than the worst examples. Many simply have no vocation for battling this, and prefer to keep their heads down, for the sake of their families. The issue inevitably involves their children.

And so, few are pushing back against the groupthink and ad hominem — which are quite beside the point of any science or policy question. There is a growing, uncontested virtue-signaling “circlejerk” (in the vernacular…) — i.e. “I’m so upstanding because I’m willing to post that I think people who willfully endanger children are stupid & backwards.”

The points being scored here are *too easy*, and the temptation to score them — the more vehemently the better — is leading to a downward spiral of low-quality conversation, abetted by a culture of self-certainty and accusation. That is — many people are self-censoring on this topic, depriving others of the possibility of productive dialectic. With no one intelligently representing the alternative view, a chasm is opened, into which inferior versions (straw men — real but not representative) of the position are providing cannon fodder for online mobs of ideologues.

It is made to seem as if these parents assume that their families are simply above disease, which, if it were true, would be an indictable position, no doubt.

Certainly among these parents many are misinformed. No doubt.

But not everyone is marching in lockstep with any prominent and self-appointed spokespeople for a “movement” against vaccinations, and accordingly not everyone should be tarred with the same brush nor associated with fringe claims.

Not everyone who chooses not to vaccinate is an “anti-vaxx” activist.

Most are well-intentioned parents who see themselves as unremarkable, and as totally undeserving of accusations of neglect or abuse.

In California, where SB 277 was passed limiting exemptions on the basis of personal belief, the penal code nonetheless provides that: “An informed and appropriate medical decision made by parent or guardian after consultation with a physician or physicians who have examined the minor does not constitute neglect.” (Section 11165.2)

Note that the penal code is distinct from the health and safety code, where the changes from SB 277 took effect, these changes concerning conditions for admission to school, *not* conditions for violating criminal law. As observed earlier, crossing the line to determine that declining an injection is penalizable *as such* would be unprecedented in a free society. Hence the law “does not apply to a pupil in a home-based private school” — a necessary corollary. Although, what the ACLU expressed “concerns” about was that “equal access to education” should not be “limited or denied” on the basis of medical status.

Previously, presumably in recognition *both* that creed (“beliefs”) is not a reason for the denial of “access”, and that “informed consent”, like all consent, implies a right *not* to consent (or else any “consent” given is vacuous) — the code, until 2015, explicitly stated (rather than leaving unaddressed): “Immunization of a person shall not be required for admission to a school … if the parent or guardian or adult … files with the governing authority a letter or affidavit … on the basis that they are contrary to his or her beliefs.”

Regarding beliefs of any kind — by now, it ought to be a commonplace that extremist versions can be such per-versions of their own mainstreams as to be unrecognizable even to people who would outwardly seem to be in the same camp.

A more charitable version of the same accusation is that people are being misled by phony science and demagoguery. From where I sit, these are *way* overstated as causal factors. I think they account for a proportion of the phenomenon — but are *not* responsible for the central thrust of it.

Obviously I can’t speak for every crazy facebook post, many of which are totally indefensible.

But if you think that the *main thing* going on is scientifically illiterate people being duped — then, I submit that what’s *really* going on is that *you* have not managed to encounter the position intelligently stated.

Don’t think that’s possible?

Think it’s nothing but idiots all the way down?

I disagree. And the principle of charity does too, suggesting instead that we try a little harder to figure out what’s going on. Even an interest in human psychology would seem to move us in the direction of curious engagement, and not just contempt. Are we really content to say that the parsimonious explanation for why a lot of people disagree about something seemingly settled, and quite fundamental, is that it’s nothing but a widespread delusion? We get it, and they don’t, and that’s that? That seems a bit too pat, too comfortable. At the very least, adopting such an attitude ought not be a source of pride, in my view.

A *very* recent article in JAMA (Jun. 2019) asks for “civility in vaccine policy discourse” and asserts that:

“Civil skepticism in public discussions about vaccine policy can lead to productive discussion. The science of vaccinology, like all science, has uncertainties; applying science in policy entails value judgments, and people can disagree on the implications of scientific evidence. Skepticism reminds all individuals that intellectual humility is important and reinforces the value of democratic debate and transparent procedure.”

It ought to give would-be-accusers pause that a substantial proportion of folks who don’t vaccinate are well-educated. For this contingent, at least, “anti-science” is not the name of the game — quite the contrary.

There seems to be a particularly wide gulf of understanding on this point, leading to raised hackles — the same hackles that get raised for so-called climate-change-deniers and evolution-deniers. Here’s the thing — there are intelligent versions of those conversations too! But they are equally verboten, beyond the pale, etc.

Obviously the sides are often talking past each other in all of these cases, so that some of what is categorized as “denial” is in fact rather only questioning *priorities* and *context* — regarding both of which, differences of perspective can be fruitful and lead to better decisions. Undoubtedly we need that questioning in every domain of public conversation. Short of that, we risk inflating our own presumptions directly into beliefs and actions that bump around unwieldily, uncorrected and uncorrectable, like rigid dirigibles toppling the pieces of the board game.

Anyway, many of the well-meaning laypeople who express bewilderment at the supposed scientific backwardness of “anti-vaxxers,” when pressed to enunciate the source of their own convictions, simply drag out the shibboleths of the medical establishment, as if that were enough, as if they themselves were able representatives of scientific opinion, and as if adducing facts and figures as interpreted by themselves were a debate-ending bolt of insight. (Tu quoque? Mea culpa.)

Sometimes it happens that they are even less well-informed than those they accuse.

What’s the result?

Appeal to authority, as often as not (a fallacy, need I remind you…), and failing the rhetorical effectiveness of even that, a recourse to vehemence.

The best defense is a good offense, I suppose?

Once again, good science does not proceed by consensus, certainty, or authority, much less by indignant accusation.

4.

A word about the possibility of health “experts.”

Of all concerns requiring consideration on a human scale (as opposed to, say, an industrial scale), health of the human body tops the list.

Addressing problems of health by recourse to specialization is a double-edged sword — on the one hand, the marvels of modern medicine, and on the other hand, its disturbing failures.

If the goal is health, why is the food in hospitals notoriously bad?

This is cited only as a signature example.

Ceding jurisdiction over the body to “experts”, while enabling feats of their expertise, *also* dangerously subjects the matter to grave distortions, and offenses against common sense. Here we have the danger of, in Neil Postman’s words, the “specialist as ignoramus”.

In agriculture (by way of non-sequitur comparison), these same distortions have often yielded official recommendations that devastate rural communities, and in economics they have resulted in the dubious allowance of banks that can be “too big to fail.” Both of these problems of scale have been sanctioned by “experts” with remote control.

As they do elsewhere, these excesses often appear in the healthcare industry as “scientism” — i.e. science’s own carefully modest claims being exaggerated through hubris, or through lack of any other mode of justifying medical decisions — that is, taking scientific findings to stand for injunctions, or otherwise overstating their scope.

It is apparent that there are also commercial and political pressures at work in medicine and among its regulators — and unfortunately these mean that much of what “studies show” is tainted by being to some extent paid propaganda. Just look at the history of about-faces among official medical recommendations — nutrition, for example. Food pyramid? Grain lobby. (TIME Mag. 2016)

Furthermore, there are particular domains of care where modern medicine shows its inadequacy especially. Some of these come to light at the beginning and end of life, where a culture of fearing pain in childbirth has created an epidemic of overmedicated deliveries, and where the old are suffered to undergo heroic efforts at extending their lives at the cost of their dignity. Here there is too much intervention as a result of what many have come to recognize as an overemphasized disallowance of discomfort.

This preamble about “healthcare” in general sets the stage for some of my more substantial commentary on vaccines in particular.

Here we go:

5.

An otherwise healthy person is likely to survive measles with lifelong immunity.

The vaccine does not offer lifelong immunity — rather, receiving it entails a schedule of dependence on boosters throughout life.

The risk of mortality from measles — *upon symptomatic infection* — is 0.2%.

And if you want to tip the scales, nutrition helps a lot, especially vitamin A.

So, for 499 out of 500 people, the logic of vaccinating (considered on the individual level alone, for now) trades off in favor of the intervention-justifying philosophy articulated above, as against the desire for maximizing immunity in-itself. Because in fact the immunity conferred by the vaccine is not of the same quality as that conferred by going through the ailment, by the measure of duration at least, if not in other ways as well. It is certainly plausible that natural immunity from the “real” illness has other points of advantage (so to speak) — besides only lasting longer — in comparison with vaccine-induced immunity. (Some specific claims of this sort are explored later.)

And the justifying philosophy, again, is that of interpreting all sickness as suffering to be avoided by medical procedure, instead of as the temporary experience of an organism designed to be mostly resilient when cared for properly, though also ultimately mortal and capable of suffering, to be sure. (And by “mostly”, I mean 499 out of 500.) (Other health consequences of measles — such as pneumonia & encephalitis — are considered below.)

Meanwhile, even on its own terms (or perhaps *especially* on its own terms), there seems to be a set of intractable, chronic conditions for which modern medicine has no cure nor even a satisfactory explanation — allergies, asthma, autoimmune disorders, let alone ADHD and autism.

I will not enter the fray regarding any alleged connection between vaccines and autism except to say that debunking that single concern does almost nothing to change the view of a well-informed person who is evaluating the risks of vaccinating.

Not only is this evident as a matter of social fact …

(You’ve noticed that, have you? Have you explained it to yourself satisfactorily? Is your explanation something other than “Whereas what I am is *sure*, what the other guy is is *stubborn*”?)

… but it also stands to reason, given the actual complex views of sincere people who are concerned about vaccines, and not the version that has been reduced to comical single-issue simplicity and obvious error.

Some (certainly not all) “anti-vaxxers” don’t budge when you lay out your facts *not* because they weren’t aware of them, but rather because — unbeknownst to you — your facts *don’t* in fact contradict their rationale.

Rather, your facts (for instance “Vaccines don’t cause autism”) contradict *your straw man of their rationale* (which might be, in your mind, for example “I’m against vaccines because they cause autism”).

So you think that’s that, game over, worldview punctured — but when it doesn’t in fact “change their view”, then what? Well, at that point at least you’ve gotta admit that there’s more going on than *you* accounted for!

For now, in the big picture, I take it that there will always be some folks who buck the mainstream of health policy, for whatever their reasons may be, and history shows that occasionally they are right (margarine, anyone?).

A reasonable public health strategy needs to be able to accommodate them.

(Tonsillectomy is another example of a recommendation that has been subject to fluctuating official confidence over time, as to when it is indicated, and as to its long term health benefits — which were updated in 2019 to “unclear” by the American Academy of Otolaryngology (ENT) in children with fewer than 7 recurrent throat infections within the past year, with “watchful waiting” now recommended instead. For many, circumcision would fit a similar bill — a medical procedure for which clinical justification increasingly appears to be the rationalization of a decision made for essentially non-medical reasons.)

6.

Regarding “herd immunity” — the basic ethical position is indisputable: our actions are not only about ourselves (even though our jurisdiction to act does end there, when it comes to who calls the shots, so to speak). We are certainly obliged to act at least considerately of others, and hopefully also generously when we can.

So, the question then shifts to inquiring what method best conveys herd immunity. I am struck by the following, which is paraphrased from the official recommendation of an American city recently confronted with a (minor) measles outbreak:

“We recommend all healthy individuals receive measles vaccine boosters, except those born before 1957.”

Why?

Here’s the CDC:

“The majority of people born before 1957 are likely to have been infected naturally and therefore are presumed to be protected against measles, mumps, and rubella.”

Hmm.

What is the bearing of this sentence on a consideration of what *truly* produces herd immunity?

Furthermore, why do I say “(minor)”?

Regarding the recent measles outbreaks in the USA:

Without denying that experiencing the disease itself takes a toll, have we stopped to assess the negative sequelae, in terms of morbidity and mortality, from these outbreaks?

For example, let’s take the famous Disneyland outbreak in 2014. The CDC reports that it resulted in a total of 147 cases — and *no* deaths. They don’t report on complications, but given the background rate of pneumonia (for instance) from measles infection of 6%, we should expect maybe 10 people to have experienced pneumonia — which, as a complication of measles, is responsible for the *only* (single) death by measles in the USA in the past decade. This is current as of July 2019, and includes all subsequent outbreaks to date (CDC). The death was in 2015, prior to which the previous confirmed measles death was in 2003. And, towards a case history: “The woman’s measles was undetected and confirmed only through an autopsy, according to the Washington State Department of Health…She was at the hospital at the same time as a patient who later developed a rash and was diagnosed with measles.” (USA Today, 2015)
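To make that back-of-envelope arithmetic explicit, here is a minimal sketch in Python, using only the figures just cited (147 cases, and a ~6% background rate of pneumonia as a complication of measles):

```
# Back-of-envelope check of the Disneyland outbreak figures cited above.
# Assumes 147 confirmed cases and a ~6% rate of pneumonia as a complication.
cases = 147
pneumonia_rate = 0.06

expected_pneumonia = cases * pneumonia_rate
print(f"Expected pneumonia complications: ~{expected_pneumonia:.1f}")
# -> ~8.8, i.e. "maybe 10 people"
```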

So, outbreak? Yes. Dangerous? *Yes*, but to a quite moderate extent overall — perhaps not quite meriting panic. And by the way, were some vaccinated people infected anyway? Yes. So, did their vaccinations “work”? Sorta — maybe the disease was attenuated. Or maybe they forgot about their boosters — which, if they were born after 1957, they are now purportedly dependent on.

And, by the way, if you go back not 20 but 60 years, i.e. prior to widespread vaccination in the USA, mortality from measles was roughly 1/1000 cases (2X more favorable than the concurrent worldwide average of 1/500 — why that difference in worldwide outcomes? what factor explains it? wealth? or? more on this later.), resulting in 500 deaths annually from 500K cases. A source of real harm for many, and a public health challenge to respond to. But a crisis?

By comparison, 70,000 people *died* from drug overdose in 2017 in the USA (out of how many *cases* of addiction? and, with what role played by overprescription — that is, iatrogenesis?), 45,000 by suicide, and 35,000 in car accidents. And none from measles. And if there were *no* measles vaccine as of 2017, the number would presumably be around 1000, accounting for a rough doubling in population since 1957.
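The rough arithmetic behind the last two paragraphs can be laid out as a minimal sketch. The only inputs are the figures quoted above — 500 deaths from 500K annual cases circa the late 1950s, and an assumed rough doubling of the US population since 1957:

```
# Rough arithmetic behind the pre-vaccine-era measles figures quoted above.
annual_cases_circa_1957 = 500_000
annual_deaths_circa_1957 = 500

case_fatality = annual_deaths_circa_1957 / annual_cases_circa_1957
print(f"Case fatality, USA circa 1957: {case_fatality}")   # 0.001, i.e. ~1/1000

# Counterfactual: no measles vaccine, population roughly doubled since 1957,
# everything else (implausibly) held equal.
population_growth_factor = 2
counterfactual_annual_deaths = annual_deaths_circa_1957 * population_growth_factor
print(f"Counterfactual annual deaths today: ~{counterfactual_annual_deaths}")  # ~1000
```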

For another point of reference — during the year 2016, the CDC reported 86 cases of measles in the USA, of which 26% occurred in vaccinated individuals. Hmm. Did their vaccinations “work”? Again, maybe they were behind on their boosters. As some people will be, inevitably. Whereas the threshold for “herd immunity” from vaccination for measles is 95%+.

One wonders with just what tools of enforcement such a level of compliance is expected to be maintained, and in what other domains of policy success against a dimension of societal fragility is based upon 95% compliance. Certainly fewer than 95% obey the speed limit, and fewer than 95% pay all of their taxes — despite both of these being *illegal*.

Whereas to make *illegal* the declining of a medical procedure would be an abuse of power, as we have noted — that is to say, to make it illegal to decide to persist in your default biological state without agreeing to a body modification. And, being as it thus has to remain a recommendation rather than a mandate, let us relevantly observe that *way* fewer than 95% follow any of the government’s *other* health guidelines…

Whatever the reason, it should not be surprising that one upshot of a statistic like this — that of 86 measles cases, 26% occurred in vaccinated individuals — is to produce, in the minds of many, uncertainty regarding the correlation between vaccination status and actual immunity — if in fact, among a rather low annual incidence of disease, fully a quarter of the folks who got it were vaccinated anyway.

The salience of this statistic survives in alternate ways of framing it, by the way — for example by segmenting the population by vaccination status, and expressing those affected as a proportion of their respective segment. This would show that a lower proportion of vaccinated individuals were affected, which can be construed as a fact in support of vaccination.

However, no such recontextualization alters the observation that vaccination status does not definitively determine immunity, and not just at the statistical margins. In fact, one reason offered in support of very high rates of vaccination is precisely that vaccines are *not* invariably effective — in fact they have predictable *rates* of being effective (which are, by definition, the complement of their rates of being *ineffective*), rates that are difficult to predict at the level of the individual, particularly over time (necessitating “boosters”), and that are part of why shots are initially given in a series to “provide another opportunity to develop immunity” (CDC “General Best Practice Guidelines for Immunization”). Taken together, these mean that to retain levels of vaccine-mediated individual immunity sufficient to conduce to vaccine-mediated herd immunity, a society has to in effect “overshoot” what the base percentage would be under (unrealizable) conditions of certainty regarding each individual’s immune status.
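To illustrate the reframing described just above, here is a minimal sketch. The 86 cases and the 26% share are the CDC figures quoted earlier; the population size and the MMR coverage rate used here are stand-in assumptions for illustration only, not figures from this article:

```
# Illustration of re-segmenting the "26% of 86 cases were vaccinated" statistic.
# The coverage rate and population size below are illustrative assumptions.
total_cases = 86
share_vaccinated_cases = 0.26

population = 320_000_000      # assumed, for illustration
coverage = 0.91               # assumed vaccination coverage, for illustration

vaccinated_cases = total_cases * share_vaccinated_cases       # ~22
unvaccinated_cases = total_cases - vaccinated_cases            # ~64

vaccinated_pop = population * coverage
unvaccinated_pop = population - vaccinated_pop

rate_vaccinated = vaccinated_cases / vaccinated_pop
rate_unvaccinated = unvaccinated_cases / unvaccinated_pop

print(f"Attack rate, vaccinated:   {rate_vaccinated:.2e} per person")
print(f"Attack rate, unvaccinated: {rate_unvaccinated:.2e} per person")
# The per-person rate comes out far lower in the vaccinated segment -- the
# "recontextualization" referred to above -- even though a quarter of the
# (very few) absolute cases occurred in vaccinated individuals.
```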

Consider: In 2019, there was a mumps outbreak at Temple University, where “the vast majority of students involved had been immunized” (NPR), but “the vaccine’s power can wane over time.” There have been similar outbreaks in recent years at Harvard, as well as routinely in prisons.

Hmm — the vaccine’s “power” “waned” already by college age?

Regarding whooping cough — a “large new study” by Kaiser Permanente (June 2019) states that “waning effectiveness between doses was a significant contributor to recent outbreaks” and that “most pertussis cases were in fully vaccinated children.”

Again — “waned” already in *children*?

A chickenpox outbreak in an elementary school in 2001 was the subject of a study by the American Academy of Pediatrics — a school at which 97% of the students *without a prior history of chickenpox* were vaccinated. The study asserts in conclusion that: “Students vaccinated >5 years before the outbreak were at risk for breakthrough disease. Booster vaccination may deserve additional consideration.”

But this was an *elementary* school. And the effectiveness of their childhood vaccines had *already* diminished? Surely a vaccine given in infancy that fails by the time of childhood to prevent what is well-known as a childhood illness can be called into question.

Instead, what is implied by the AAP is that it is now considering recommending booster vaccination *every five years* to prevent “breakthrough disease”.

If this is what is meant by “herd immunity”, it seems rather short-lived and susceptible.

Note: The outbreak affected *none* of the students who had had “wild” chickenpox, who were roughly 200 of the 400+ students at the school.

A more troubling, and more rare, complication of measles is encephalitis. Evidently it occurs in 1/10,000 to 1/300,000 cases. Here’s the CDC again: “While rare, this disorder almost always happens in patients with weakened immune systems… In one case, the measles vaccine strain was identified as the cause.”

Once again, this 1/n figure, however small, is not *simply* a matter of chance, although chance plays a role (a genetic component to the strength of your immune system, for example). And because chance plays a role, let nobody give themselves full credit for their good health. But many other things *also* play a role — other than chance & luck — and can help dispose one to be in the large ((n-1)/n) rather than in the small 1/n.

There is another important ethical question at play here, brought to light by that last sentence: “In one case, the measles vaccine strain was identified as the cause.”

And that is: suppose there is a background risk, let’s say the risk of disease exposure, symptomatic infection, and sequelae from *not* vaccinating. And this is as against a positive risk from vaccinating — of adverse effects, at whatever rate and severity.

In addition to the “numerate” questions of risk assessment, there is also the underlying ethical question of *committing* positive harm (or the risk of harm) versus *continuing*, without action, to be exposed to harm (or the risk of harm) by “omission”, so to speak.

It is worth observing that medical errors are the 3rd leading cause of death in the USA. This was established in 2016 by an expansive meta-study by the Johns Hopkins University School of Medicine, prompting an open letter in May of that year to the director of the CDC, urging among other things an update of the industry-standard death-certificate form adopted in 1949, which did not allow reporting medical error as the cause of death (which it was for 9.7% of all deaths in the nation in 2013). A national survey by the Kaiser Family Foundation found that “Americans continue to underestimate the number of people who die each year in hospitals from preventable medical errors.”

The harm here is “iatrogenic”: caused by doctors, in apparent direct contradiction of the Hippocratic oath.

This is an ethical problem worth considering, and I won’t consider it in depth here, beyond emphasizing that it is *not the same* as the question of weighing the risks by examining differential rates, etc. (For more on this, readers may consult Ivan Illich’s differentiation of “clinical iatrogenesis” — i.e. direct harm to individuals — from “social iatrogenesis” — i.e. cost to society of inefficient systems, overmedicalization, politicization of healthcare, etc. — and finally “cultural iatrogenesis” — a large, subtle critique, not easily summarized here.)

For our purposes, an example of the latter (examining differential rates, etc.) would be that 1/10,000,000 contract tetanus annually, versus 1/100,000 who contract brachial neuritis from the tetanus vaccine.

(From the CDC “Pink Book”, 13th Ed., 2015 — a.k.a. “Epidemiology and Prevention of Vaccine-Preventable Diseases”: “Between 18 and 37 cases of tetanus were reported annually in the United States between 2009 and 2012 (average 29 cases per year).” [35 / 350M = 1 / 10M] “…the available evidence favors a causal relationship between tetanus toxoid and brachial neuritis in the 1 month after immunization at a rate of 0.5 to 1 case per 100,000 toxoid recipients.”)

The numerical burden thus placed on the vaccine to prevent cases of tetanus annually, in a population of 350 million like the USA, is 3500 — that is, in order for the prophylactic benefit of the vaccine to *break even* with its iatrogenic harm, let alone surpass it by a noteworthy margin.

However, the number of tetanus cases in the USA has never been anywhere near 3500 — and had been declining for 40 years *before* the introduction of widespread vaccination.

The number of cases in the first year of tetanus being a nationally reportable disease — 1947 — was between 500 and 600, had been falling since 1900, and continued falling afterwards. That year (1947) marked the beginning of widespread vaccination.

So, accounting again for a rough doubling of population, and under the implausible assumption that vaccination accounts for *all* of the reduction in incidence *since 1947* (despite accounting for *none* of it before) — under these assumptions, and supposing there had never been *any* vaccination for tetanus ’til date: we would face an incidence of tetanus in the USA in 2019 of perhaps up to 1200. (Confounding factors in the case of this disease in particular suggest that this is a gross overestimate — considering for one thing dramatic urbanization over the same period, versus the habitat of the bacterium Clostridium tetani in manure-rich soil, so that city-dwellers face reduced risks of exposure.)

In any case — there are nowhere near 3500 potential tetanus cases annually for the vaccine to prevent today. And there never have been. So, how is its prophylactic potential justified against its iatrogenic creation of risk — at much higher rates?

Of course, if forced to choose, one may well choose brachial neuritis over tetanus (as the lesser of two evils, perhaps), in which case even “breaking even” as described here would confer a public health advantage — i.e. “replacing” one with the other. But there would certainly still be the adequate basis for an individual hesitation regarding receiving the vaccine, i.e. “How likely am I to be one of the theoretical [and counterfactually overstated] 3500 who gets pierced by a rose thorn, or undergoes a non-sterile surgery, and am saved from tetanus through having been vaccinated? The answer seems to be 0.00001 [3.5K / 350M]. And, even as it is thus unlikely, what further steps could I take — like cleaning dirty wounds promptly and well, and taking some measures to avoid them, including avoiding intravenous drug use — to dispose myself against severe symptoms, steps that have *no* positive risk of brachial neuritis, as opposed to voluntarily adopting a risk of 1/100,000?”
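For completeness, here is the tetanus arithmetic from the preceding paragraphs as a minimal sketch. The rates are the CDC “Pink Book” figures quoted above, and the 350 million population is the same round number used in the text; nothing else is being estimated:

```
# Sketch of the tetanus cost/benefit arithmetic laid out above.
population = 350_000_000

# Quoted figures: ~29-35 tetanus cases/year (~1 per 10M people); brachial
# neuritis attributed to tetanus toxoid at 0.5-1 per 100,000 recipients.
annual_tetanus_cases = 35                      # rounded up, as in the text
neuritis_rate_per_recipient = 1 / 100_000      # upper bound quoted

# If the whole population were vaccinated, expected iatrogenic neuritis cases:
break_even_cases = population * neuritis_rate_per_recipient
print(f"Tetanus cases the vaccine must prevent to 'break even': {break_even_cases:.0f}")
# -> 3500, versus an observed ~29-35 cases per year in total.

# Individual odds of being among a (counterfactually overstated) 3500 prevented cases:
individual_odds = break_even_cases / population
print(f"Individual odds: {individual_odds:.0e}")   # ~1e-05, i.e. 0.00001
```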

My point is that this is an intelligible cost/benefit analysis, rather than nothing but reactionary recalcitrance. (Tetanus, by the way, is not communicable — so there is no relevant consideration here of contributing to “herd immunity”.)

It is also worth noting here that the “omission” side is actually not quite “not-doing,” as opposed to a “positive” side of “commission” — but rather, it amounts to its own positive approach — a recognition of, deference to, and reliance on complex, distributed, evolved, biological processes — such as immunity is. The immune system is no joke — and it may do it a disservice to summarize it as amounting to a defense army, more or less, amenable to being made more or less capable by simple, predictable tweaking. There might be much more going on than meets the eye — or has met the eye so far. This immune system is the thing in you that recognizes alterity, proactively maintains the self, and has to remain well-integrated with homeostatic function (and even “homeorhetic” function — that is, the maintenance not only of a status, but of a trajectory of development). It seems to me we understand a lot about this, but not as much as we *don’t* understand (cf. “intellectual humility is important,” quoted above from the Journal of the American Medical Association.) We’re certainly *not* on top of auto-immune dysfunction, for one thing, and it’s plausible to me in that case that what is at fault might be something paradigmatic.

For instance, consider the “old friends hypothesis,” which says that we have co-evolved with our environments, relying on them to supply our microbiomes from without — not only with required “probiotics”, but even with salutary pathogenic challenges that serve to calibrate our immune system, especially early in life. In other words, just as neoteny (a long childhood) has been to our adaptive advantage, in that it allows us to engage in extensive learning and acculturation, thus pulling behaviors out of hardware and into software — in much the same way, our immune system might have “outsourced” its programming to its environment, and we come primed to “learn” how to assert ourselves with reference to the particular demands of our time and place.

Now obviously vaccination is an attempt to reverse-engineer, and piggy-back on, this very process. And, it is no doubt to some extent successful at this — that is, to some extent it “works” in mimicking the natural creation of specific immunity. Although, not as durably, as we have seen. But even so, if such mimicking does indeed reduce death, if it reduces suffering, even if it reduces discomfort, then there is still much on the scale in its favor.

But on the other hand, by the same token, the process it piggy-backs on is much more multi-variate than simply knocking certain particular diseases off the list of threats — that is, it is more than the creation of a set of particular immunities to a set of particular pathogens. It is instead the field of an entire agent/arena relation, with a deep evolutionary history and complexity.

So then the question is — to what extent is our intervention in this arena disruptive of the process as a whole? To what extent do we trust ourselves to understand ourselves well enough to engineer ourselves? Are we sure we know what we’re doing?

I affirm that we certainly know something rather than nothing, and consequently that some action is justified, rather than no action.

But what is the threshold of such knowledge beyond which we can be *certain* of avoiding the “cobra effect” — i.e. the law of unintended consequences? That is to say, at what stage does the “precautionary principle” *no longer* apply? (A principle which has, by the way, been elevated to law in the EU.)

In comparison to intervening to create immunity, do we consider ourselves capable of intervening to create old growth forests? Or is it more a matter of getting out of the way? What about soil health? Our attempts to boost the latter by the application of feats of the laboratory have had mixed results — cf. debate about the “green revolution” (it reduced starvation, but depleted fertility, introduced toxicity, indebted small-scale farmers to large-scale agribusiness, and arguably contributed to a very much ongoing epidemic of suicides by farmers in debt in India, which they typically carry out by ingesting the very pesticides for which they are indebted.)

For an example of unintended consequences closer to our topic, consider the much maligned “adjuvants” in vaccines — these are the toxins in the vial that “anti-vaxxers” love to hate, like aluminum. Well, it *is* toxic, so why not just take it out? The reason is that its very toxicity is its function. It irritates the immune system into mounting a response where it “shouldn’t” — that is, a response to the otherwise harmless, inactivated pathogen. That’s how vaccines work 101.

The question is whether in so doing it sweeps up anything else into the immune system’s sights that shouldn’t be — anything *else* that might be unusually present in the blood stream at the time, anything else harmless and not deserving of an immune response on its own, but which the immune system may be “tricked” into labeling as a threat in the presence of the adjuvant — something you ate, or some uncommon bodily compound presenting at higher levels by coincidence, say as a result of inflammation or injury. Here is an “unintended consequence” — an unnecessarily triggered “allergy” or “autoimmunity” — and a reasonable question is how a dramatic increase in the frequency of *occasions* of inoculation (such as there has been) may relate to the likelihood of such coincidences. The relationship may not be linear, but cumulative — you may end up “catching” a few more unnecessary things into the “irritants” category each time, increasing the chances that in “learning” to fend off this and that dead virus you simultaneously happen to “learn” to fend off something you’d really rather not — like your own cartilage, pancreas, or peanuts.

“Type 1a diabetes develops because the body mistakenly identifies insulin-producing cells [in the pancreas] as being foreign, or ‘non-self’ … What starts the autoimmune destruction is unknown, but it may be due to environmental factors.” (UCSF Med. Center, 2019) This is “juvenile diabetes.” It is “non-obesity-related”, and its incidence increased “significantly” by about 2% per year between 2002–2012. (New England Journal of Medicine, 2017)

My point is not to attempt to do full justice to these phenomena, obviously. (For that, look for a TED talk called “How vaccines train the immune system in ways no one expected.”)

My point is, rather — have you considered this particular point before?

Some have, some haven’t, surely.

So, next question:

Is considering it, or expressing it, or being introduced to it, something that is “misinformation”, to be “cracked-down” on by big tech companies, for the sake of public health — and the cracking-down to be celebrated? Or is it rather simply the good-faith sharing of a framework of analysis, the banning of which looks troublingly close to preconceived bias? What criteria would a panel at Pinterest need to meet in order to demonstrate itself competent to remove as “polluted” (their term) this very discussion? Is there another word for that than censorship?

Let’s move on to consider the chicken pox protocol from when many of us were children — allowing it to run its course, with mostly no harmful sequelae, and then immunity for life. Whereas now vaccination is the norm, and the vaccine does not provide immunity for life, only a probable attenuation of disease if it is contracted in adulthood (i.e. long after the last booster).

So instead, and again — now it is arguable that we have a *more* fragile situation of herd immunity, not a less fragile one. And this time not compared to 1957, but compared to the ’80s or so.

Consider this:

The World Health Organization recommends routine vaccination for chicken pox *only if* a country can keep more than 80% of people vaccinated. If only 20% to 80% of people are vaccinated, what may happen is that more people end up getting the disease at an older age, meaning outcomes overall would *worsen* in comparison with not vaccinating at all (shingles afflicts adults more severely than chicken pox does children.)

In other words, total buy-in or else.

Seems like a pretty good deal for whoever stands to profit from making the vaccine, no?

It’s worth being aware of the incentives, at least.

And it’s worth wondering too whether the near-complete dependency in the 80%+ model creates a population with its own, new sort of vulnerability to disease — vulnerable to changes in an inflection point likely to be in the control of the small handful of actors — what some people call “big pharma.”

Lest you are under the impression that criticism of “big pharma” is solely the province of chatter on facebook groups:

Doctors Without Borders / Médecins Sans Frontières (MSF)

(…that is, the Nobel Peace Prize-winning, redoubtable international medical humanitarian organization …)

Anyway, MSF *rejects* donations of vaccines from “pharmaceutical giants” (their words) on the grounds that:

“Donations of medical products, such as vaccines and drugs, may appear to be good ‘quick fixes,’ but they are not the answer to increasingly high vaccine prices charged by pharmaceutical giants like Pfizer and GSK.”

Once again: how does this bear on a consideration of what *truly* produces lasting herd immunity? (Hint: it’s not, as they say, “quick fixes” — especially not if they’re costly and patented.)

… And to consider an end-case:

Should we continue vaccinating for smallpox?

The CDC no longer recommends routine vaccination.

Why not?

If vaccination is nothing but an augmentation of health, why not continue to recommend it in this case?

The reason is because the benefits no longer outweigh the costs at the population level, and there *are* costs.

Risks, and also just plain old costs.

So that’s what happens at the level of populations — but what about at the level of individuals?

What if you as an individual have a very low risk of encountering a disease, much as the population as a whole now has a very low risk of encountering smallpox?

Could it ever be rational for you to adopt with respect to yourself the approach that the society has now adopted as a whole with respect to smallpox — i.e. now that it’s well-contained, we don’t need to voluntarily take on the intervention, with its concomitant risk & cost?

I am not entering an answer to this question. I acknowledge that a countervailing interpretation of this is as a free-rider problem — especially in proportion as vaccination is taken to be single-factor causal of the decreased risk of encountering the disease. What I *am* saying is that we have here an example of a scenario in which there *is* a rationale, that *has* been authoritatively sanctioned, for *stopping* a vaccination protocol that used to be the done thing, because of changing circumstances.

As an aside, notice that we would no longer consider it rational to build a moat around a mansion or a rampart around a city. (How about a wall around a nation??) This is because the sort of threat that this (costly) intervention once hedged against has been 1. attenuated (partially diminished due to other factors of progress), and 2. obviated (no longer regarded as the “best practices” way of addressing the problem, however “clear and present” the problem may or may not remain).

The proposition that, although rates of many “vaccine-preventable” diseases began their modern declines in advance of vaccination for them, it is nonetheless current high levels of vaccination that are *now* chiefly responsible for their not *remaining* salient threats — is difficult to evaluate.

As we’ve noted, vaccines *do* do what they say they do, which is mimic the body’s natural creation of specific immunity, to a degree. So — that a recently vaccinated population will show lower rates of disease in comparison to a “control” is not in question — and the non-controversial demonstration of this forms the conclusion of many people’s thinking about the matter. QED, they think.

But I am by no means the first to note that history is *not* such a controlled experiment. Human progress is the most confounded of all phenomena. So, the above proposition requires some care to be investigated empirically at the whole-population level — and particularly to determine *overall* consequences for human health and flourishing, which is not an empirical question alone, but necessitates a value judgement somewhere in there, at some stage of the analysis. Which outcome is truly *better*? You may say “That’s easy — not having suffering is better than having suffering.”

As a low-resolution summary statement, I would not contest this, although neither would I rush to affirm it as a moral slogan, nor take it to be a sort of “Occam’s razor” for slicing through all ethical dilemmas. (It contradicts “No pain, no gain” for instance.) To consider this in depth would be a significant digression — however much I might prefer that edifying topic to this! I would simply note here that our philosophical and religious traditions remind us variously that our mortal condition entails some degree of suffering no matter what you do (as well as entailing death, by definition), that this inevitable suffering, under the right relation of our outlook to it, can function to purify us, and that the very desire to totally escape suffering is, rightly seen, its deepest cause.

The point here is that this cannot be the only tool in the box. Notice that *strictly* on its own, this simple slogan would recommend not even showing up for the injection! Obviously there needs to be a counterbalancing consideration of the long term, of net benefits over time, of “delayed gratification.” (sorta. not exactly “gratification”… “delayed / eventual robustness”?)

Here is the straightforward porting of this slogan into this topic, where it is equally deceptively satisfying:

“Not having suffering is better than having suffering.” ->

“Not having disease is better than having disease.”

Again, looks good on its face. (I would certainly not want to have to defend the alternative by recourse solely to short sound bites, such as in a “presidential debate”.)

But consider: Are we to say, as a corollary of the latter:

“Whatsoever results in fewer symptoms at the time, do it.”

? I don’t think this quite does it, in part for the very same reasons given in the small philosophical digression above.

But even practically speaking — for example, to continue in evaluating the “overall consequences for human health and flourishing” I brought up earlier: Let’s say we were to remove vaccines from the equation suddenly today. This would probably have a near term result of a higher incidence of disease, as we have observed.

But then — what about the kids who got chickenpox (i.e. experienced a “near term higher incidence of disease”), versus the kids who were prevented from getting chickenpox by early vaccination that “protected” them in infancy, but wore off by school age (“whatsoever results in fewer symptoms at the time, do it” [i.e. early vaccination = *very little* chance of chicken pox as an infant, instead of *some* chance then and *some* chance now])? Who was “healthier” at the end of it — in the sense of more “eventual robustness” — *as well as* in the sense of not having attained their robustness at the cost of unconscionable risk? (I address the question of particularly vulnerable individuals below.)

For now, I note that the proposition being considered here — that removing vaccination *now* would lead to big picture bad outcomes *now and later* — is, where it is accepted, the source of much hand-wringing on this subject — i.e. premonitions about “anti-vaxxers” causing “the return of vaccine-preventable disease.” But as we are seeing, the devil is in the details, and as a result of attaching different weights to different devils, people of good faith might disagree about what to do.

Once again — my point is about the *intelligibility* of different perspectives.

Perhaps this is the place to address a certain type of retort commonly made once this fact (that good faith disagreement is possible) is understood and its ramification taken seriously — that in an open society people will differ in their health decisions. To this inevitability, some adopt the stance that to opt out of vaccination may be within one’s rights, but that it would be a reasonable way for society to respond to deny admission to public accommodations, or for private entities to enact policies of treating individuals differently according to vaccination status. Besides whatever highly questionable means may be proposed to try and enforce this (show me your papers?) — there is also the issue, picked up on by the ACLU as mentioned earlier, of the inherent unconstitutionality of providing different levels of social service to individuals of different medical status — with the prominent example of public schooling, although not limited to that.

Sometimes there is an attempt to justify this by an allegation that those who are not vaccinated amount to “walking petri dishes” — i.e. are more than usually prone to transmit communicable disease. This rests on a few fairly fundamental misunderstandings. Most vaccine-preventable diseases are communicable only during an active phase, and most people are not infectious most of the rest of the time. If any person, vaccinated or otherwise, contracts measles, they will be equally contagious during this phase, and will be equally not contagious otherwise — which is to say, generally. Furthermore, for uncommon diseases, pathogen shedding in the immediate wake of “live” vaccination may constitute a significant portion of one’s risk of encountering the disease in public. Which of course turns the so-called “petri dish” logic on its head — although such a way of describing a human being is facile in any orientation.

Live polio vaccines are no longer used in the USA, since 2000, as a result of their occasionally causing “vaccine-associated paralytic poliomyelitis” (WHO, 2016). However, live oral polio vaccine (OPV) was still used internationally, where the benefits were thought to outweigh the costs.

But in a polio outbreak in Nigeria in 2016, consisting of four closely associated individuals, both “wild” and “vaccine-derived” polio strains were detected among people in “close contact”. This was after “more than two years without the detection of wild polio in Nigeria” (WHO, “Disease Outbreak News”, 2016), as well as after “mass-scale immunizations” with OPV, which is “easier and less expensive to administer … The problem is, OPV … on extremely rare occasions … can mutate and actually cause the disease … The viral samples found in the affected northern states were this vaccine-derived form of the virus.” (TIME Mag. 2018)

The latest on this comes from a July 2019 article in Science Magazine, one of the world’s foremost academic journals, quoting Mark Pallansch — CDC Director of the Division of Viral Diseases:

“We have now created more new emergences of the virus than we have stopped,” Pallansch says.

This realization clearly calls for a policy update. The article goes on to describe the WHO’s latest position: “To prevent outbreaks of vaccine-derived virus, WHO has declared that once the wild virus is gone, countries must stop all use of OPV … Last year, vaccine-derived viruses paralyzed 105 children worldwide; the wild virus just 33.”

The WHO has called attention to “vaccine hesitancy”, listing it as a “threat to global health” in 2019. What are we to say of the “hesitancy” of an individual towards a vaccine which last year was associated with 3X as many cases of paralysis worldwide as the wild pathogen it is intended to protect against? If such an individual were to cite “I’ll take my chances” as a rationale for their hesitancy, this would certainly be an *intelligible* position.

A study published by The Lancet in 2017 found that DTP vaccination was associated with a 5X higher infant mortality rate "in an Urban African community" (Guinea-Bissau) in the early 1980s.

That is quite the finding. The Lancet, its retraction of an infamous article on this topic notwithstanding, remains one of the world’s most highly regarded general medical journals.

The study goes on to state: “All currently available evidence suggests that DTP vaccine may kill more children from other causes than it saves from diphtheria, tetanus or pertussis. Though a vaccine protects children against the target disease it may simultaneously increase susceptibility to unrelated infections.” And, “Differences in background factors did not explain the effect … DTP vaccinations were associated with increased infant mortality even though there was no vaccine-induced herd immunity.”

As a thought-provoking related finding elsewhere, "We have observed DTP to be associated with increased female mortality in several studies, and no published study has refuted these gender-specific effects of vaccines." (International Journal of Epidemiology, Oxford University Press, 2004) (The female immune system may be more complex than, or at least somehow different from, the male's, due to the need to accommodate another body within itself.)

Compare these statements from the CDC and the WHO, and those published by The Lancet and the Oxford University Press, to a Jun. 21 2019 tweet by the Huffington Post referring to "the unfounded opinion that vaccines pose a public health risk", tweeted in justification of content removal.

That the benefits of vaccination outweigh the costs may well be the case without the public health risks being “unfounded opinion”. The HuffPo tweet goes on to say that “Allowing them to remain does a disservice to our readers that outweighs any value as part of the public record.”

One wonders whether such a policy would “allow” these articles in peer-reviewed journals, describing certain ways in which “vaccines pose a public health risk”, to “remain” as part of the public record.

The tweet’s latter assertion is the more credible the more one doubts the ability of “readers” to encounter various “opinions”, and to decide for themselves among them. Any “disservice” only occurs to readers who are too credulous. Are there such readers? Yes. So, editorial discretion has its place. However, dear reader, it is not my feeling that “allowing” *this* article to remain would be a “disservice” to *you*.

Regarding all of which, it’s worth repeating the earlier quote from the Journal of the American Medical Association — published the *same day* as the HuffPo tweet (June 21st 2019):

“Civil skepticism in public discussions about vaccine policy can lead to productive discussion. The science of vaccinology, like all science, has uncertainties; applying science in policy entails value judgments, and people can disagree on the implications of scientific evidence. Skepticism reminds all individuals that intellectual humility is important and reinforces the value of democratic debate and transparent procedure.”

How do “transparent”, “democratic debate”, “people can disagree”, and “intellectual humility” compare with “allowing them to remain does a disservice … that outweighs any value as part of the public record”?

MailChimp, for its part, in a June 13th 2019 statement on policy changes, explains that it “cannot allow these individuals” to “spread harmful messages” — and furthermore says that “We trust the world’s leading health authorities, like the CDC, WHO, and the AAP.”

But note that every one of these sources is cited in this article as well — about which it is relevant to remember, as the same JAMA quote reminds us, that “People can disagree on the implications of scientific evidence”.

In other words — evidence is one thing, implications are another thing.

Are the “world’s leading health authorities” authorities on *evidence*, on *implications*, or are they thought to be comprehensive authorities on *both*?

Twitter’s approach, meanwhile, as described in a May 10 2019 company blog post, has been to guard against what it calls the “artificial” amplification of “non-credible commentary” — and to “prompt” and “direct” individuals to a “credible public health resource” (such as the US Dept. of HHS, one of whose reports is cited elsewhere below.) And Facebook is “exploring ways to give users more context about vaccines from ‘expert organizations.’ ” (Wired Mag. 3.7.19)

The dirigisme here is clear — "prompt" and "direct". What remains to think about is whether any attempt at "social engineering" (i.e. targeting a shift in public opinion) is the right move in the long term — that is, whether one can be sure enough in advance of what the message *should* be (which here largely takes the form of deferring to health experts) to feel justified in putting one's thumb on the scales, because one perspective "outweighs" the other — or *ought* to. This should require a high threshold, in my view — a threshold which many feel is met here. It is taken as read that any *other* way of seeing the matter — other than the way promulgated by "expert organizations" — is *more* than wrong: it is "harmful", and to be removed.

Facts can be *wrong*, but it’s not immediately clear how “commentary” can be “non-credible” — unless this is taken to simply mean “not authoritative.” Is the commentary in *this* article “credible” or “non-credible”? How could either tag be applied other than by reading the article? Could it be decided a priori — perhaps by a robot or an algorithm — simply by detecting the presence of keywords, regardless of what *in particular* is said about them? (Do you think an AI would summarize this article well?) Or what about simply filtering by the institutional affiliation of the author — if none, as in my case, then flag it as “non-credible”?

If whoever is charged with determining "credibility" or "non-credibility" is "allowed" to read the article, but "our readers" are *not* "allowed" to read the article — well, doesn't that describe *either* editorial discretion *or* the activity of a censor? To decide which of these it is, the relevant consideration is: do online content providers exercise editorial discretion *generally* (i.e. are there any low-quality videos on YouTube? are there any bad tweets still up? does Amazon sell any old books — "part of the public record" — that make assertions now known to be incorrect?) — or only *narrowly*, with respect to certain topics where there is a social engineering objective — i.e. there's one side of a debate that people *should* be on? If it's *narrowly*, then it looks more like bowdlerization than content curation.

There are a few more wide-frame questions here: Is it not thought to be worthwhile to comment without being an authority? Or to be exposed to such commentary? It’s true that comment sections online are often disappointing. And: Is it not allowed to express an opinion if it’s wrong? Who is to decide *whether* an opinion is wrong? Ask yourself if you hold any opinion contrary to what is recommended by an expert organization. Do expert organizations agree with *each other* about everything? If not, which one is the “non-credible” one? (An interesting example of this involving the National Academy of Sciences and the National Institutes of Health is presented in point #8 below.)

For now, here’s one last thought on “expert organizations”: There can be no doubt that the CDC, WHO, and AAP contain experts. And as well that they contain extraordinarily humane and knowledgeable individuals. But that is *not* to say that expertise in these fields contains no “uncertainties”, nor that drawing out the “implications” of scientific evidence is necessarily best carried out by a clerisy of any kind. If we are interested in the best insights, we should want to cast a wide net — to “crowdsource” them, in case there are good ones somewhere that have escaped attention, diamonds-in-the-rough.

Returning to the “petri dish” logic: What is the underlying thought process there? Is the allegation this: that it is not *at all* possible to be healthy without vaccinating? That, unless you do, your body is a risk to my body? That the native condition of your physical self is that it is a threat? That until you’ve undergone some upgrades, as the rest of us have, we’re better off without you? If so, I see this *not* simply as a natural, inevitable concomitant of confidence in modern medicine, but rather as *also* a signal of a lack of trust in those background conditions of “health” that underwrite any possibility of positive intervention — the health that is cognate with “heal” and with “whole”, and that is a gift given not only in hospitals.

I wonder not only about our inability to acknowledge our own complexity, as discussed earlier, but also (and following on from that), about a dwindling of the *reverence* that such an acknowledgement would suggest — reverence that would be an expression of our capacity to be awed in the face of whatever we don’t *fully* understand and whose wholeness we don’t consider ourselves to be *fully* responsible for creating — including (and especially) ourselves. Albert Schweitzer, the 20th century physician and polymath, recipient of the 1952 Nobel Peace Prize, famously summarized the philosophy of his life’s work as “Reverence for Life.” He wrote about this extensively, and in one place he elaborates: “The consequence of it is that [we] come to realize [our] dependence upon events quite beyond [our] control… Our dependence upon events is not absolute; it is qualified by our spiritual freedom.”

"Spiritual freedom" certainly strikes the ear as more noble and expansive than our all-too-worldly right to "informed consent" — although its recognition may be precisely what makes *any* of our rights, well, *intelligible*. In "The Triumph of the Therapeutic," Philip Rieff (Susan Sontag's ex-husband, incidentally) suggests that the hospital has succeeded the cathedral and the parliament as the arch-institution of our culture — the one in which participation now confirms an essential membership. It may be that few any longer care whether you go to confession — it is no longer thought to affect the collective salvation of the community, one way or the other. Bringing shame on a family is likewise an increasingly obsolete category, as is recusing oneself from "beating the bounds" of a hometown as a collective apotropaic ritual. To abstain from voting is still generally frowned upon, even as voter apathy is widely acknowledged and in some measure commonly understood. At the very least, we are less impressed by political expertise than by medical expertise. But to opt out of your "check-up" — now *there's* the thing that's going to get us in trouble! *There's* the unorthodoxy / skepticism / individualism that is going to put us all in jeopardy! *There's* the field in which there is still such a thing as heresy — dangerous if believed, its very expression (online) not to be countenanced, lest it find unfortunate influence, and people end up having wrong thoughts, imperiling us all!

And by “us” is of course no longer meant our souls nor our political selves, but our material bodies and their material fate. Which are of course things surely worth attending to in themselves, it must be said again! My point is only to briefly reveal and point up some of the interesting psychological forces that may be at play. These may additionally account for the all-or-nothing, black-and-white portrayal of positions on this subject as either “anti-vaxx” or “pro-vax” — as if it is a litmus-test question of being “one of them,” as if labeling is a productive mode of relating to people, and as if there is no middle ground.

OK here’s another thing:

There appears to be some connection between pertussis (whooping cough) and asthma. Some evidence suggests that contracting, and recovering from, “wild” pertussis puts you at a *lower* risk subsequently for asthma and other lung ailments. Other evidence suggests that delaying pertussis vaccination reduces the risk of childhood asthma.

Yes, pertussis is dangerous for those under a year old, and I am not claiming otherwise. But under conditions of general good health, it is nonetheless more likely to be manageable than to be life-threatening even for infants, although very demanding. (Infant <1yr mortality from pertussis in the USA is 0.5% — meaning 99.5% survive. To say the least, that's well above the threshold for survival being "more likely.") And if contracted after one year of age, it often compares in its initial symptoms to a regular cold, followed by a severe, persistent cough — harmful in some cases to be sure, but able to be substantially mitigated by conditions of general good health (and evidently ameliorated quite a bit as well by high daily doses of vitamin C), which as we have noted are partly out of one's control, but thus also partly *within* one's control.

Meanwhile an adult who has had pertussis, instead of having been vaccinated, is immune for a longer duration — once again — although evidently not for life in the case of pertussis, but for circa 20 years, vs. circa 12 after vaccination. What's more, if the facts about lower subsequent asthma incidence, etc., are correct, such a person may be, in some measures, *healthier* than is possible under *any* regime of pertussis vaccination. (Needless to reiterate, more durable immunity is in itself a better outcome — in all of these cases.)

Now — another interesting ethical question:

Are we to deny healthy individuals the possibility of these results and their advantages?

Once again, I am not entering an answer to this question. I acknowledge it as a troubling ethical dilemma, upon which many scientific facts impinge, but do not and cannot collectively suffice to settle.

Intriguingly, some of the other putative positive outcomes of illness, other than immunity, are by no means as intuitive as a connection between whooping cough and asthma. For instance, a peer-reviewed paper in the Journal of Pediatrics reports that “dramatic remissions of nephrotic syndrome frequently occur in the week after onset of natural measles.” This was studied during winters in Chicago in 1947 and 1948, with plenty of cases of “natural measles”, as they say — resulting in “dramatic remissions”, as they also say. Why should measles have a “dramatic” beneficial effect in improving kidney function?

More generally, there is the "hygiene hypothesis" — that is, that immune systems, given too few challenges, turn wayward like bored teenagers and begin to act against their hosts in autoimmune and inflammatory diseases, which are on the uptick. Obviously this is related to the "old friends hypothesis" mentioned above, and is essentially its negative — i.e. what happens when your friends are missing. Cancer, too, has an element of immune dysfunction, it would seem, in that the cleanup crew gets tricked into letting the dysfunctional cells remain.

I hedge here a bit in admission of not being well-informed on these points. I’m sure there is a relevant literature. However, I’ve already addressed certain concerns arising from deference to experts, one of which you would have to be to master such a literature.

And by the way, *no-one* can have mastered that, as well as this, as well as the other thing, as well as knowing all the "best practices" of morality, policy, individual difference, etc. So, as I explain a bit more below, a role inevitably creeps in for heuristics, personal judgments, and the like.

7.

Now, certainly there are vulnerable individuals for whom “just get it and let it run its course” is bad advice, if not cruel.

We are obliged to care for "the least of us," although most of us do not do it directly. Instead, we rely on public health policy to weigh in the balance the most effective allocation of attention and resources. In this regard, it would be wise of us to focus on leverage points where small tweaks and regulations can make a big, systemic difference — as opposed to granting heavily lobbied concessions to professional "health industrialists."

It always gives me pause when large national drugstore chains advertise their flu vaccines, and yet mainly sell disgusting candy and plastic crap — let alone cigarettes, if they do.

Where is the health in this, really?

By the way — the production not only of cheap consumer items but also of pharmaceuticals, including some vaccines, has been outsourced to China, where production costs are understandably lower, but inspection by the FDA is more difficult, and where there have been "in recent years … scandals over drug and food safety." (WHO Bulletin, 2014)

The WHO Bulletin continues:

“A major challenge for China’s vaccine industry is to overcome concerns about product safety … In 2007, the former head of the Chinese State Food and Drug Administration was sentenced to death and executed, after being found guilty of taking bribes and failing to ensure the safety of drugs and devices approved during his tenure.”

Here’s the FDA on some of the challenges, in its “Guide to Inspections of Foreign Pharmaceutical Manufacturers” (2014):

“The majority of foreign inspections are pre-set and relatively tight time-framed. Unless the inspection is prepared in advance and sharply focused, it is difficult to meet the objectives of the program satisfactorily.”

“Another factor to keep in mind is that the authority to inspect foreign drug facilities does not come from … the Food, Drug and Cosmetic Act (the Act,) but from the agency’s ability to exercise … commitments made by the sponsors of applications, if applicable. For that reason, the agency is not required to provide stringent documentary evidence to establish violations of the Act.”

“As a general rule, sample collection is not required during inspections at foreign facilities.”

It is incumbent on us to be specific about who we are referring to when we consider a category of specially vulnerable individuals — those whom we would intend to protect primarily through “herd immunity,” rather than that they themselves be vaccinated. It would seem that one natural marker for such a category would be those who are “immunocompromised”. This is no doubt a relevant factor; however, even for those who are immunodeficient through HIV (human immunodeficiency virus), the CDC nonetheless recommends vaccination as “especially critical”, subject to a determination of the individual’s T-cell count.

Additionally, vaccination is also recommended before or after cancer chemotherapy and radiation therapy — only not during, because of the immunosuppressive effects of these therapies. And for immunosuppression related to organ transplants, live vaccines are not recommended during anti-rejection therapies, but other "inactivated" vaccines may be administered.

In summary — certain of the individuals who we naturally think of as immunocompromised *do* in fact receive many officially recommended vaccinations nonetheless.

What is the only *universally* contra-indicating condition for vaccination?

“Severe allergic reaction (e.g., anaphylaxis) after a previous dose or to a vaccine component.”

(From the CDC “General Best Practice Guidelines for Immunization”)

Anaphylaxis ("severe allergic reaction") is an unfortunate — and uncommon — result of vaccination. A reasonable question is: what predisposes an individual to it? In other words, what are the antecedents in the causal chain leading to this unfortunate and uncommon result? That is a question amenable to its own investigation, of course. And, whatever the answer is, should not *that* be added to the list as well, wherever "severe allergic reaction" is listed as a contra-indication (that is, universally) — so as to possibly *prevent* iatrogenic anaphylaxis, rather than being reduced to only *responding to* it after the fact (that is, determining only once it has happened not to provide that individual with any *more* occasions for anaphylaxis)?

I am sure that parents of children who have experienced this would have preferred such a realization “ex ante” to the one “ex post”, and would likewise be receptive to a generally critical stance vis-à-vis iatrogenic harm. They would quite reasonably insist that we do better here.

OK — now let’s consider polio.

Commonly, fears of the iron lung are brought to bear in discussions of vaccination.

(It’s worth noting in passing that the most popular photograph for this, depicting a large number of iron lungs spread out in a gymnasium, does not depict a representative care facility, but was staged for a film.)

But as we are rightly moved by lives tragically altered or ended by this disease, let us simultaneously be aware that:

Polio is *totally* asymptomatic in 70% of cases.

And results in minor illness with full recovery in a remaining 25% of cases.

And enters the nervous system in only about 1% of cases.

And results in paralysis in 0.1–0.5% of cases (1 in 1000 to 1 in 200).

And of these, "many persons with paralytic poliomyelitis recover completely." (CDC) (So, 1/500 would seem to be a reasonable ballpark figure for the rate of permanent paralysis in case of polio infection — a similar rate, evidently, to the case-fatality rate cited earlier for measles. A rough composition of these figures is sketched just after this list.)

And “cases” means among people who *are infected*.

And those who are infected (carriers) are a *subset* of those who are exposed. (What accounts for exposure without infection in the absence of immunization, by the way?)

And exposure is (now) *very* rare — essentially non-existent in most of the world. (Although it does persist, as we have seen, proving challenging to eradicate in places where very low indices of human development prevail — such as in parts of Pakistan, Afghanistan, and certain regions of Sub-Saharan Africa — despite the “mass-scale immunizations” referred to earlier.)
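As a rough check of that 1/500 ballpark, here is a minimal back-of-the-envelope sketch composing the figures listed above. The paralysis rates are the ones just cited; the fraction of paralytic cases assumed to recover completely is my own illustrative placeholder for the CDC's qualitative "many", not an official number.

```python
# Rough composition of the polio figures listed above.
# paralysis_rate_*: share of *infections* resulting in paralysis (0.1%-0.5%, per the list).
# assumed_full_recovery: placeholder for the CDC's qualitative "many" paralytic
# cases that recover completely; an assumption for illustration, not a CDC figure.

paralysis_rate_low, paralysis_rate_high = 0.001, 0.005
assumed_full_recovery = 0.5

permanent_low = paralysis_rate_low * (1 - assumed_full_recovery)    # 0.0005, ~1 in 2000
permanent_high = paralysis_rate_high * (1 - assumed_full_recovery)  # 0.0025, ~1 in 400

print(f"permanent paralysis per infection: {permanent_low:.4f} to {permanent_high:.4f}")
# 1/500 (0.002) falls within this range, toward its upper end.
```

And that is per *infection*; per *exposure* the figure would be lower still, since, as noted just above, those infected are only a subset of those exposed.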

It may be material to consider to what extent widespread vaccination *alone* is responsible for the modern rarity of exposure in general, as it has occurred in combination with other factors like improved sanitation, nutrition, and emergency response capability over the same period. Large topics, maybe not germane to relitigate here. But you can easily find evidence showing that mortality rates for many diseases began to decline in the 20th century *before* widespread vaccination, suggesting that other factors at least played *some* role.

Here’s the CDC on tetanus (again), for example:

“In the United States, reported mortality due to tetanus has declined at a constant rate since the early 1900s… The widespread use of tetanus toxoid–containing vaccines began in the late 1940s… Several factors have contributed to the decline in tetanus morbidity and mortality.”

And here’s the US Dept. of Health and Human Services on mortality from measles, in a document on vital statistics through 1960. The document was produced in 1968, when the department was called “US Dept. of Health, Education, and Welfare”:

Death rates for measles (per *100,000* population, per year): 1960 = 0.2; 1950 = 0.3; 1940 = 0.5; 1930 = 3.2; 1920 = 8.8; 1910 = 12.4; 1900 = 13.3. (Note these are population-wide annual death rates, not per-case fatality rates like the 1/500 figure cited earlier, so the two are not directly comparable.)

Again, the first measles vaccine was licensed in 1963, and those older than 6 at that time are presumed to have encountered wild measles. Evidently such an encounter became quite a bit less deadly over the 60 years *prior* to widespread vaccination, so that a parallel assessment seems appropriate (as for tetanus above): Several factors have contributed to the decline in measles morbidity and mortality. (“Several” meaning “including vaccination,” but only after 1963.)
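To put a number on "quite a bit," here is a minimal sketch using the rates just quoted (deaths per 100,000 population per year). The figures are simply copied from the list above; no other data is assumed.

```python
# US measles death rates (per 100,000 population per year), as quoted above
# from the 1968 HEW vital-statistics document.
rates = {1900: 13.3, 1910: 12.4, 1920: 8.8, 1930: 3.2, 1940: 0.5, 1950: 0.3, 1960: 0.2}

decline_before_vaccine = 1 - rates[1960] / rates[1900]
print(f"decline in the measles death rate, 1900 to 1960: {decline_before_vaccine:.1%}")
# roughly 98.5%, i.e. nearly all of the fall in this particular statistic
# occurred before the vaccine was licensed in 1963.
```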

What I’m saying isn’t “And that’s what happened for sure.”

What I’m saying is — there is an intelligible position there.

Polio causes real suffering, real debility, and is to be lamented, avoided, and eradicated as a cause of human suffering. The vaccine plays a role. But we cannot make a sane judgement about what to do without facing all of these facts squarely. A *very large* majority of people exposed do fully recover, vaccine or not. This does not deny the very real stories of those who do not.

Meanwhile, some research shows intriguing, if saddening, correlations between polio incidence and pesticide use. Polio has been known since antiquity, but there had never been an epidemic until 1887, about a decade after the invention of the first mechanical crop sprayer. And, over the ensuing century, incidence of polio in the United States tracked quite closely the use of certain neurotoxic pesticides, including "paris green" (used widely at the time on tobacco), lead arsenate, and DDT. This is notable epidemiologically — but also consider, anecdotally, the related fears of summer and swimming pools, when pesticide use and runoff would have been at their annual peak.

There is more here than simple correlation — a plausible etiology related to their known toxicity to the human nervous system, for which reason they are all now banned in the USA as insecticides. DDT in particular is known to cause lesions on the spinal cord — the "polio" ("grey," as in the grey matter of the spinal cord) in "poliomyelitis." And, upon acute poisoning: "Pain in the joints, generalized muscle weakness and exhausting fatigue are usual; the latter are often so severe in the acute stage as to be described by some patients as 'paralysis.' " Rachel Carson's 1962 "Silent Spring" shed light on many of the *environmental* impacts of these pesticides — but even today, we often forget that the worst impacts of unsound farming practices are felt neither at the supermarket nor at points where nature has "troubled herself to be scenic" (Wendell Berry), but by agricultural workers and rural communities. (Rachel Carson dedicated "Silent Spring", by the way, to Albert Schweitzer, whose "Reverence for Life" we noticed earlier. In her dedication, she quoted him from a letter: "Man has lost the capacity to foresee and to forestall. He will end by destroying the Earth." The letter wasn't to her, but to a bee-keeper whose bees were harmed by pesticides.)

Additionally, some studies show that a high-starch diet may be correlated with susceptibility to some of polio's more fearful symptoms. The specifics matter and are worth attending to — but generally speaking, as usual, it is at least plausible that the populations who are the most susceptible overlap with those who are the most poorly nourished and least otherwise healthy — factors which are themselves amenable to being addressed.

And it's worth mentioning as well that these factors may arise from any cause whatsoever — be it food injustice, individual dereliction, or simple happenstance. For instance, I'm aware FDR was monied. (That's the point of comparing statistics with stories, no?) It doesn't need to be stressed that wealth need not *necessarily* correlate with health, though it often does, historically and in broad strokes. But it certainly may occur, under local circumstances & by the vagaries of custom, that it is the poor who are eating certain nutrient-dense foods, by default. "Let them eat cake?" Nah, they can't afford it. Think of the fact that white rice is more suitable for tribute / tax / transportation / storage — due to not going rancid — whereas brown rice has to be eaten on-site, by the peasants themselves. Lo and behold, the very part that goes rancid in fact contains more B vitamins. And so the nobles received their nutritional comeuppance. Anyway, there are certainly such things as "diseases of affluence", and polio was once thought of as one such "rich man's disease", in that it tended to strike wealthier nations first. The very first ever polio epidemic was in Sweden.

So in light of this, let’s ask the auxiliary question: where is the best use of a public health dollar?

Lining the pockets of a pharmaceutical company that produces a vaccine for use on *everybody*, 95% of whom are not likely to be indisposed even if they contract the damn thing?

Or what about nutrition education and a better farm bill — perhaps shifting subsidies to small, diverse family organic farms, with their many other well-integrated concomitant benefits & “positive externalities”?

Would this overall approach be *more* or *less* likely to “catch” folks in the remaining 5% of vulnerability and improve their outcomes, as against vaccinating the total society to improve those outcomes? It’s impossible to adjudicate definitively, but worth considering.

I can foresee being pilloried on this point, as if organic food is AGAIN cited as the panacea. I’m not saying *that*, and as for what I *am* saying — I’m simply asserting the meaningful, tangible health benefits of insisting on a less colonial relationship between America’s cities and farms! Or blue states and red states, if you will.

A general observation about health statistics, by the way. Population-level incidence of any measurable public health parameter (like disease outcomes) needs to be suitably inflected before being taken to apply to any individual — such as yourself. “Such and such percentage die of lung cancer” is not equally applicable to smokers and non-smokers. Obviously the percentages vary between the two groups, and significantly.

But the same is true of *any* variable. Suppose person X is in the upper half of the distribution in terms of proper nutrition. In that case, "Such and such percentage die of cause Y" — where Y is *any* condition whose outcome varies with nutrition, which is nearly everything — needs to be *inflected* before it applies to person X. Suppose that of the total mortality from that condition, 25% comes from above the median for proper nutrition and 75% from below. Since each half contains half the population, the rate relevant to person X is then half the general statistic (and one third of the rate among those below the median). Meaning, person X (are you above the median in terms of proper nutrition? then person X could be you!) should be half as concerned about dying from that cause as the generalized statistic would lead you to believe.
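To make that "inflection" explicit, here is a minimal sketch of the calculation. The overall rate is a hypothetical number chosen only for illustration; the 25/75 split mirrors the supposition above.

```python
# Illustrative "inflection" of a population-level statistic by risk group.
# overall_rate is hypothetical; the 25/75 split of deaths mirrors the text above.

overall_rate = 0.004         # hypothetical: 0.4% of the whole population dies of cause Y
deaths_share_above = 0.25    # share of those deaths occurring above the nutrition median
deaths_share_below = 0.75    # share occurring below the median
population_share = 0.5       # each half of the distribution holds half the population

rate_above = overall_rate * deaths_share_above / population_share   # 0.002
rate_below = overall_rate * deaths_share_below / population_share   # 0.006

print(rate_above / overall_rate)  # 0.5: half the general statistic applies to person X
print(rate_above / rate_below)    # 0.333...: one third of the below-median rate
```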

It is lamentable, but not surprising, by comparison, that recovery from surgery varies quite a bit according to socioeconomic status. It shouldn’t, but it does, and that’s because of a number of background factors. I suspect that the quality of the surgeries does not vary nearly so much as the quality of these factors. And so, in order to improve outcomes, which would have the bigger impact — to further improve the surgeries / to upgrade the medical intervention, or to improve the background of all interventions — that is, to improve general conditions of health?

As another example in considering differential risk: Suppose there were a vaccine to prevent HIV. This would unquestionably be an important medical advance. But, should it be recommended uniformly to all individuals, or would it — like currently available “pre-exposure prophylaxis” (which phrase is not intended to describe the “hormetic” mode of vaccine-induced immunity, and yet it *is* a reasonable description thereof as well) — rather be recommended only to people who are at “substantial risk” (CDC)?

Lest you see the answer to this as self-evident, keep in mind that vaccination for Hepatitis B, which similarly is contracted sexually, by blood, or by contaminated needles, is recommended to *all* neonates born in American hospitals, on the day of birth, regardless of the mother’s own Hep B status. This vaccine in particular is one common focal point of “vaccine hesitancy” on the part of new parents.

Rabies vaccination is not universally recommended, but recommended according to risk categories (veterinarians, travelers, even frequent spelunkers!). And should a vaccine for ebola become available (which would be a meaningful breakthrough), it would likewise probably be more strongly recommended to those living in or traveling to affected countries.

The important point is that differential uptake of vaccines, among populations with differential characteristics, has various legitimate justifications, even in the case of diseases that are communicable and can have significant consequences for public health. (“Differential uptake” means “not everyone”.)

8.

Meanwhile, in the early phases of polio vaccination, from 1955 to 1963, 10–30 million (!) Americans were given polio vaccines co-contaminated with a rhesus monkey virus, SV40 (rhesus monkeys being the host animals in which the vaccine was prepared), that is associated with brain and bone tumors in humans.

This is by admission and investigation by the NIH, not by conspiracy theory.

Other studies — such as one published by the Oxford University Press, in its "American Journal of Epidemiology" — suggest even higher numbers: that up to 90% of children and 60% of adults in the USA during that time were exposed to the virus by inoculation. (OUP, 1976)

Epidemiological study by the NIH has established no direct correlation between receiving that virus by vaccine and developing the same cancers.

However, the National Academy of Sciences — in an example of the *plurality* of “sciences” — stated in a review published in 2003 by the National Academies Press in Washington, DC:

“All of the studies that the committee reviewed concerning cancer incidence or cancer mortality and exposure to polio vaccine containing SV40 have substantial limitations.”

And, to begin the final paragraph of 75 comprehensive pages reviewing the literature — particularly the very same “epidemiologic studies” cited by the NIH:

“However, because these epidemiologic studies are sufficiently flawed … in light of the biological evidence supporting the theory that SV40-contamination of polio vaccines could contribute to human cancers … the committee recommends continued public health attention.”

Hmm.

Furthermore, the front matter to this publication states:

"Support for this project was provided by the Centers for Disease Control [CDC] … and the National Institutes of Health [NIH]…"

but:

“… The views presented in this report are those of the Institute of Medicine Immunization Safety Review Committee and are not necessarily those of the funding agencies.”

Well, that independence is good for science, right?

Here’s what they say to conclude the report:

“The committee concludes that the biological evidence is moderate that SV40 exposure could lead to cancer in humans under natural conditions.”

and:

“The committee concludes that the biological evidence is moderate that SV40 exposure from the polio vaccine is related to SV40 infection in humans.”

So, there is healthy, above-board debate among scientists on this topic. The progress and results of such debate matter. There is surely such a thing as revision in light of new evidence and argument, and such a thing as being wrong.

In the meantime, a reasonable stance towards all this for a layperson is interest, inquiry, skepticism, and an open mind. This is the sort of topic that understandably gives one pause, considering for instance the similar simian-to-human vector of other epidemic viral diseases.

Meanwhile, the sort of person who takes statistical reasoning to be the *only* sort of reasoning when it comes to public health — and who, unlike the scientists at the National Academy of Sciences, takes "epidemiologic studies" to be the final say — may be rubbed the wrong way by the (unnecessary) implication that ongoing investigation indicates no possibility of real knowledge. It doesn't indicate that. Science is perennially provisional and inherently subject to improvement. That is its strength. (See: "scientific revolution", "paradigm shift", etc.)

Such a person will propose — quite aptly — “Correlation does not imply causation.”

Fair enough, but speaking as a parent, consider this:

As parents we are forced, by time passing, to act with imperfect knowledge, and then to live with our actions. We’re certainly called upon to do *something* — there is no denying that children begin helpless, and we have to take decades of actions that set them up one way or another. We act *somehow* on their behalf no matter what. And we are not all researchers, and not all expected to be. So basically, the buck stops at a bottomless pit — and so *heuristics, instincts, and impressions* have their place in parental decisionmaking.

Let me at least speak for myself — I’m *somewhat* well-informed, by now you’ll agree, and yet I have to act all the time without “evidence” to support specific actions. Don’t you??

And so, on the level of *heuristics, instincts, and impressions* (which should obviously NOT be the *only*, nor even perhaps the *main*, level of consideration…)

But in any case, on that level, what do you do with a fact such as “up to 30 million Americans received co-contaminated polio vaccines”?

My point is that an attitude of circumspection is an understandable "gut" response here, rather than such a response consisting of nothing but ignorance of the rationale behind the mainstream scientific view about the safety of vaccines.

And, that “gut” responses inevitably count for *something*, because we don’t and never will have all the information to act only rationally, as if each of us were Laplace’s demon.

I am not advocating the position adumbrated here as the only way to support people’s health, nor the only pathway towards the protection of *all* vulnerable people from disease. As for the latter goal, neither the status quo, nor any condition of public health, could conceivably achieve it. And — I *also* acknowledge that the current order has us healthy by historical standards — in some ways. And for *those* ways, I’m thankful! These are a few reasons why I myself am not an “anti-vaxxer”.

9.

No person of good faith is saying that vaccines have *no* risk.

In fact, there is an official government agency tasked with shelling out compensation to aggrieved parents whose children have suffered injury as a provable result of vaccines — the “National Vaccine Injury Compensation Program.” And this agency has forked over *billions* of dollars over *thousands* of cases.

Created in 1988, it has paid $3.9 billion in claims as of Sep. 5th 2018.

(This is from its own internal public report.)

It was created by the government "after lawsuits against vaccine companies and health care providers threatened to cause vaccine shortages." That is — it shields vaccine companies from liability risk: facing the possibility of suits by classes of harmed parents, the profit-motivated pharmaceutical companies sought legal protection from liability as a condition of continuing to bring vaccines to market, and the government agreed.

At least consider — if vaccines are “safe,” just what harm are these settlements being paid *for*?

And further, what is the rate of harm that is unprovable and so doesn’t make it to court — due to being less severe, late onset, more general, etc.?

And aren’t just those sorts of chronic, etiologically puzzling ailments precisely what are on the uptick in the world’s more affluent societies?

In 2015, a Stanford professor “found that the federal Vaccine Injury Compensation Program has not lived up to its original goals of providing ‘simple justice’ to children injured by vaccines.”

The article (concerning her paper, published both in the UPenn Law Review and by Stanford Public Law) continues:

“In her research, she analyzed nearly three decades’ worth of data concerning the program’s operation.”

“The results are discouraging,” she said. “Despite initial optimism in Congress and beyond that such a fund could resolve claims efficiently and amicably, in operation the program has been astonishingly slow and surprisingly combative.”

“Engstrom found that even when children are found to be entitled to compensation, governmental lawyers have sometimes hassled petitioners over relatively piddling amounts.”

I think we can agree that, to the extent her findings are valid, they suggest a level of dysfunction that we ought to find troubling.

This consideration of what children are “entitled” to also brings up the question, indirectly considered in a few places earlier, of children’s rights in general. Inasmuch as children have a “right to the highest attainable standard of health” (UN), every fact that bears on what that standard is — such as what is presented in this very article, the facts together with the reasoning about them — can be taken as relevant in determining how to go about securing that right. If health is a right, then the necessary conditions of health are a right, whatever they are — and this “whatever they are” is in fact a succinct gloss of a tremendous amount of important — and productive — disagreement. Suppose the position presented in this paper is “right” — or instead suppose it is “persuasive to some extent” or “worth considering”. (I think it is at least the last of these.) Then, we would want to be very careful not to jump from a “right to health” directly to specific recommendations — specific, but universal (in the sense of “universal rights”).

But, regarding the health of individuals, is the application of a universal rule what is most salutary? A point to keep in mind on this topic overall: If parents do their own “research” of whatever kind (“Your google search does not replace my medical degree” notwithstanding…) — at the very least, whatever they are doing is *proactive*, and therefore not well-described as *neglect*. Many scientific papers *are* in fact online these days, and that amateurs would take an interest in them would in itself seem to be a *good* thing, generally speaking.

Obviously there is a risk of confirmation bias and ideological siloing, which are equally enabled by the internet and are simply the other side of the coin of access to information. But it is by no means self-evident that these problems are straightforwardly corrected by accreditation and professionalization. Sometimes the emperor is wearing no clothes. Sometimes it does take an independent Rachel Carson — or an Erin Brockovich — to state a somewhat personal, contradictory perspective, to “speak truth to power”. Sometimes such “truth” is surprisingly well-researched and well-spoken. How do we square skepticism of the amateur against the pithy bumper sticker quote by Margaret Mead: “Never doubt that a small group of thoughtful, committed citizens can change the world; indeed, it’s the only thing that ever has.”

Are we to put this sentiment aside in order to discredit lay opinion and second-guess intellectual voluntarism?

When it comes to the putting into practice of an ideal like “highest attainable standard of health” — it’s the sort of open-ended definition that makes an attempt at universality by meaning different things in different contexts, by design. Does it mean the same thing in different countries? Does it mean for our parents what it means for us?

And so where the rubber hits the road, there will always be a role for local knowledge of local circumstances — all the way down to the individual’s unique knowledge of himself, herself, his or her children, etc. We are all harbor pilots who board the cargo ships of broad strokes policy on the last legs of their journey into our lives, to help them, with our own irreplicable self-knowledge, navigate the local port conditions safely.

The CDC, in its general document on vaccine contra-indications, says:

“Benefits of and risks for administering a specific vaccine to a person under these circumstances should be considered. If the risk from the vaccine is believed to outweigh the benefit, the vaccine should not be administered. If the benefit of vaccination is believed to outweigh the risk, the vaccine should be administered.”

Can there be such a thing, in light of this (i.e. a fine-grained recommendation to consider carefully the details of individual difference) as a “universal right” to be vaccinated?

As another example of the necessity of taking context into account, the CDC, in May 10 2019 updates to its “General Principles for Vaccine Scheduling”, states: “Major changes to the best practice guidance for timing and spacing of immunobiologics” include “guidance for simultaneous vaccination in the context of a risk for febrile seizures” and “clarification of the use of the grace period between doses of MMRV.”

And as a final comment on the role of children in a conversation like this: Nassim Nicholas Taleb (author of Antifragile, The Black Swan, etc.) is at work as we speak on an interesting, evolving online document he calls "Principia Politica".

I will roughly paraphrase its eighteenth article here as: “Let’s be reluctant to use references to children in arguments, given our naturally emotion-laden response to them.”

(NB: I was subject to an attempt along these lines in a recent podcast interview. I wish I had remembered this eighteenth article of the “Principia Politica” in time to respond with it on the spot. “L’esprit d’escalier”…)

Anyway, I think most readers will agree with me that serious consideration of all facts should be on the table, but that nonetheless it is true that coffins make poor soapboxes.

(By the way, much of the rest of the “Principia Politica” is *also* quite relevant to points made here. For only one very obvious instance, the title of its seventh article is “Iatrogenics.” There are *many* more.)

10.

Now a set of preliminary thoughts about a general axiom.

I question the Koch postulates ("diseases and germs are one-to-one", in my summary) — NOT in themselves, but as the correct top-level orienting paradigm for understanding human health and its conditions.

Here’s what I mean:

Do microorganisms exist? Yes!

Do I think that that in carrying out their life cycle some of them infect humans, treat us as hosts, and cause disease and suffering? Yes!

Do I think that we have had a great deal of success in employing this knowledge to interrupt their ability to cause us harm? Yes!

Do I think that the most useful way to conceptualize the human organism is as an entity constantly under siege by vectors of pathology — assailed by germs — to which our primary relation should be avoidance, or attempted elimination?

Hmm. I’m not so sure. I think an analysis (= reduction) of human biology in terms of mechanism is extraordinarily useful. And I also think such an analysis will tend to be hard pressed to account for vitality in itself — or “life force” if you’re willing to consider that there might be truth in poetry.

“It is difficult

to get the news from poems

yet men die miserably every day

for lack

of what is found there.”

-William Carlos Williams [a doctor!]

In connection with this, consider again that for many micro-organism-mediated pathologies, there are large proportions of the population who "carry" the germ asymptomatically. Tuberculosis, for instance, remains totally dormant in 90% of carriers. Why? And "asymptomatically" means, in the case of the individual concerned, with total irrelevance to concerns of human flourishing.

So what is it that accounts for the possibility of exposure, and indeed infection, with no illness or sequelae in many individuals?

If there is a percentage of the population — again, often a quite substantial percentage, sometimes "most" — who are unaffected when confronted with the same "germ", what accounts for an individual landing in that percentage? Dumb luck surely plays a role, in the form of genetics and other happenstances of birth. But factors of general health *also* surely play a role, and can be acted upon.

Koch himself abandoned one of his postulates after he discovered the prevalence of asymptomatic presence of supposed "pathogens" in healthy individuals, and today the more nuanced and comprehensive Bradford Hill "criteria for causation" are used.

But it nonetheless remains the view of the average scientifically literate layperson that for every disease there is a cause, and for infectious disease the cause is an identifiable pathogen. In other words that germ theory comes close to fully explaining infectious disease.

If that is true, then wherefore the 90% asymptomaticity of TB infection, and the 70% asymptomaticity of polio infection? The germ’s there — where’s the disease?

There is a need for subtlety with respect to “causation” here — for example, in case of differential disease outcomes in people with differential general health and immune function:

Is one’s state of immune function the “condition”,

and the pathogen the “cause”?

Or is one’s state of immune function the “cause”,

and the pathogen the “condition”?

I will note in passing that there may be profound clinical benefit to adopting “methodological vitalism”, as well as a processual, rather than an entitative, view of the organism and its health.

For instance, what is the placebo effect, and why is its effect so strong? If the response of the body can be totally accounted for by its physio-chemical “inputs” and “outputs”, then the placebo effect is quite odd. (Not to mention the demonstrated positive effect of faith on cancer remission outcomes!)

In fact, the body is more than a machine, and so its well-functioning has to do with more than its physio-chemical circumstances.

So, then the question is — in light of this, what actions should we take to support health?

Strengthen (“vitalize”) the human organism to face its challenges,

or/and attenuate the dangers of its environment?

Definitely both, I say!

But equally definitely — if I had to pick one, I’d pick the former. And for that reason, I recognize certain limitations to germ theory — not insofar as it is a correct map of reality, which it is — but rather insofar as it is taken to be sufficient as a total orienting paradigm for human health.

Because it frames the “problem” of health in the latter’s terms — in terms of sterility, safety, and risk reduction. All of which I strongly support in appropriate measure and context. And yet, all of which can be — and regularly are — exaggerated as aims, to the possible detriment of strength, resilience, and vitality.

Let me wrap up by saying that although I've done my fair share of legwork, I am by no means as knowledgeable as the most knowledgeable on this subject — though I do, by the same token, have the advantage of no professional bias.

I’m aware that this is less a watertight edifice of analysis and more a “whoa there” screed.

I have been sitting on these ideas for some time, uncertain of their value to anybody else. But in seeing conversation on this topic begin to spiral out of control, and into recriminations, it has lately become clear that if I can possibly contribute to sanity, clarity, and creating more robust perspectives on this subject, then I should at least make the attempt.

I do not support the accusatory tone of the following statement, nor its superlatives. (“RT ≠ endorsement.”) But it may be worth being aware that it was said, and by whom:

“Vaccination is a barbarous practice and one of the most fatal of all the delusions current in our time. Conscientious objectors to vaccination should stand alone, if need be, against the whole world, in defense of their conviction.”

-Mahatma Gandhi

I hope you will agree that this essay contains ethical reasoning on this topic that is to be found in few other places — and that is what I think I contribute most importantly here.

We are not well-served by demonizing others.

We are not well-served by considering people to be stupid who are not stupid, because we deprive ourselves of the benefit of their perspective.

And we are not well-served by *telling* them that they are stupid, especially if they are not stupid, but even otherwise.

I fear that the challenges presented to the culturally dominant view of health and its conditions by this “steel-manned” version of the “anti-vaxx” position are fundamental, and account for why this issue seems to have touched a nerve in the zeitgeist.

I am in favor of conversation.

Gautam Tejas Ganeshan
