Why I Got COVID-19 Wrong and What You Can Learn from Me

SUMMARY

  • When I first heard about the Coronavirus in Jan 2020 I thought it was being overblown.
  • I was one of those people who in real life and online would compare COVID-19 to things like the locust plague, car accidents, heart disease, etc.
  • Even when it escalated in China I didn’t fully understand the implications, but I should have and could have.
  • First week of March I finally dug into it more and then flipped 180.
  • What can we learn?

BACKGROUND
I am currently a semi-retired ex-game programmer/producer living in the middle of nowhere in Northern California. I love learning about things and am an incessant (often annoying) skeptic, and I love investing and poker because both require emotional control and analytical thinking and offer asymmetric returns.

This brief bio is significant because I should be the exact kind of person who gets this thing right. I love unusual events and I am interested in pandemics. I lived in Hong Kong and on several occasions visited the SARS memorial. I follow Bill Gates who has a whole TED talk about exactly what is happening. People I admire and respect were actively pointing out that the media, politicians and general public were substantially underestimating the “fat tail” risk. I was the person who was staring directly at the problem and just couldn’t see it.

How did I blow it?

I drew a conclusion too soon.

I remember first hearing about the “Coronavirus” back in January. At the time it was not a particularly large news story, and it was framed mostly in the context of China. While I thought it was serious, I assumed it would (a) stay primarily in China, (b) be serious but not affect me or my basic life, and (c) go down in history as a locally significant, but not globally impactful, pandemic.

I came to this conclusion very quickly because my first exposure was to fairly sensationalized, non-credible articles in my news feed predicting that this would be a world-ending type of event. I see these quite frequently (as do we all), and so they biased me AGAINST this being a major event. Had I first made contact via an interview with a physician, the WHO or some other source, I may have had a different bias. I can’t be sure of this, of course.

The warning here is that we must remember how susceptible we are to forming opinions based on information that may not be accurate or relevant (see anchoring and recency bias). In my particular case it was a more insidious form, because I immediately rejected the information after judging its source to be non-credible. This is a dangerous error. Columbus was able to “discover” America because a mediocre sailor (Columbus) met a bad venture capitalist (Queen Isabella). Both parties had the wrong thesis, but there was a land mass in between which NO ONE had anticipated. The Portuguese experts who turned Columbus down were correct about the distances, but overconfident in their expertise… and this was my mistake. The source can be incompetent and yet, by accident, make the right call.

With my subconscious mind firmly made up I proceeded to look for things that proved we were overdoing it. I needed to have ammunition against the quacks and conspiracy theorists.

It was very easy to find my prey. It looked like the worst locust swarm in decades was about to hit Africa, threatening to starve 20–50 MILLION people, and there was very little coverage at the time. As usual, my inner dialog went, the first world was fixated on a moderate-scale, immediate problem in its own backyard while ignoring a huge problem for poor people no one cares about. This allowed me to solidify my prior bias and all but dismiss COVID-19 as something with a potentially SARS-like impact, not the global pandemic it now looks to become.

But I was making a fundamental error. Just because there is a locust plague in Africa doesn’t mean that a virus can’t go global and make it worse FOR EVERYONE. I ignored the basic fact that if/when COVID-19 hits Africa WITH the swarm, it could be measurably worse. Problems like viruses and locust swarms are not isolated incidents to be compared and contrasted. They can have large compounding impacts that are easy to dismiss once we have our biases working in full gear.

I had past “experience” looking at SARS and MERS.

I enjoy reading what Bill Gates writes and listening to what he says. When I lived in Hong Kong I got very interested in SARS: its impact on the economy and society, its magnitude and speed.

But I made a few incorrect conclusions.

The HK stock exchange lost about 20% of its value during the SARS epidemic. In Hong Kong, SARS infected about 1,750 people and killed 286. I wasn’t in Hong Kong during SARS, but from what locals told me it sent shocks of fear through the city. From a purely financial standpoint, the FINAL impact of the disease seemed small relative to the hit the stock and real estate markets took. My conclusion was that people tend to OVERESTIMATE mortal danger and thus panic and over-react out of proportion to the event.

I had foolishly forgotten the “Turkey” problem, laid out well by Bertrand Russell (his version features a chicken; Nassim Taleb later recast it with a turkey):

“Domestic animals expect food when they see the person who usually feeds them. We know that all these rather crude expectations of uniformity are liable to be misleading. The man who has fed the chicken every day throughout its life at last wrings its neck instead, showing that more refined views as to the uniformity of nature would have been useful to the chicken.”

Essentially, when we are “doing science” we cannot apply the experience of prior events to conclude definitively what will happen in the future. In fact, in some cases our experience is teaching us precisely the WRONG lesson.

Consider the college student who drinks every night before taking an exam and is somehow able to barely pass. “Crude expectations” would have us conclude that drinking either enables or at least doesn’t inhibit adequate performance. But once the test is a LITTLE harder (the final exam, an oral exam, etc.), the head is summarily chopped off.

Viruses and Pandemics are exactly this sort of problem and it is this fundamental error that is causing so much conflict between politicians and some of the public on one side, and epidemiologists and cantankerous probability experts on the other. Of course as the virus begins to threaten us personally… we see the proverbial axe over our chicken heads… we come to evaluate the intentions of the caretaker in more nuanced form.

This is pertinent because we are conducting multiple turkey experiments, right now, in real time around the world. When President Trump points out that some states barely have any cases and thus it’s likely they can reopen on April 1st he is making exactly this error. Of course it’s POSSIBLE that we CAN open places on April 1st… but the point is that looking at past data and the current situation is not sufficient for making that assessment.

I Ignored “Constructive Paranoia.”

Had I thought about this more deeply I would have instead used Jared Diamond’s “Constructive Paranoia” approach. To personalize it, he describes his own chance of slipping in the shower (about 1 in 1,000 per shower) and why that is FAR too high a risk to accept:

“Life expectancy for a healthy American man of my age is about 90. (That’s not to be confused with American male life expectancy at birth, only about 78.) If I’m to achieve my statistical quota of 15 more years of life, that means about 15 times 365, or 5,475, more showers. But if I were so careless that my risk of slipping in the shower each time were as high as 1 in 1,000, I’d die or become crippled about five times before reaching my life expectancy.”
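To make the arithmetic in that passage concrete, here is a minimal sketch (my own illustration, using Diamond’s numbers) of how a small per-event risk compounds over thousands of repetitions:

```python
# Diamond's shower arithmetic: a tiny per-event risk compounds over many repetitions.
# The 1-in-1,000 figure is Diamond's illustrative number, not a measured risk.
per_shower_risk = 1 / 1000
showers = 15 * 365                    # 15 more years of daily showers = 5,475

expected_falls = showers * per_shower_risk
prob_at_least_one = 1 - (1 - per_shower_risk) ** showers

print(f"Expected serious falls: {expected_falls:.1f}")       # ~5.5
print(f"Chance of at least one: {prob_at_least_one:.1%}")    # ~99.6%
```

The question is never “how risky is doing this once?” but “how risky is doing this every day for fifteen years?”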

It first struck me in his amazing book “The World Until Yesterday.” In it he discusses a group of New Guinea highlanders who are meticulously searching for the right tree to camp under. Diamond is frustrated, tired and annoyed that it’s taking so much time… after all, a tree is a tree. The highlanders explain that sometimes these big trees fall at night, and if one hits you, you are dead. But what are the odds? Low, of course. But if you camp 10 nights in your life, that’s different than if you camp every day. Additionally, if you are injured in the New Guinea highlands, it’s very hard to get any kind of medical attention even if you are able to escape from your situation. It is not enough to just consider the odds of the tree killing you; you must also consider the situation you are in, the frequency with which you roll the dice and the consequences of the low-probability event. Only then can you ascertain the true risk.

What I should have learned from SARS is that a far more catastrophic event was possible, even if unlikely. If we scale SARS up 10 or 100 or 1,000 times (and there is no reason viruses can’t do this) we get devastating events. If we imagine SARS-like events every 5 or 10 years… then what?

Consider that the plague returned to Europe dozens of times between 1350 and 1500. The Black Death killed tens of millions of people in Europe alone, and the population took centuries to recover. It also had a major hand in huge social, economic and political shifts. While some of those were good, those who died and suffered would likely have preferred the changes come about in less costly ways. I highly recommend the book “The Great Mortality” for an in-depth look.

I believe that if this pandemic is mitigated too easily we will learn the wrong lesson again, and it will leave us weak against an even more terrible one, which is statistically possible. I do not think this is the “chicken killer,” but it should be a warning that one could come. This is similar to the problem of large-object impacts. In any given 100 years we are almost certain not to be struck by a giant meteor that could kill us all; over 100 million years it is a near certainty. So should we build our anti-meteor strategy around the unlikely event within 100 years, or the near-certain terminal event over 100 million?
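The same compounding logic applies to the meteor analogy. As a rough sketch (the annual probability below is a made-up round number for illustration, not a real impact estimate), an event that is negligible over a century can dominate over geological time:

```python
# Hedged illustration: assume a civilization-ending impact has a 1-in-100-million
# chance in any given year (a made-up round number, not a real estimate).
annual_prob = 1 / 100_000_000

def prob_within(years: int) -> float:
    """Probability of at least one impact within the given horizon."""
    return 1 - (1 - annual_prob) ** years

print(f"Within 100 years:         {prob_within(100):.6%}")          # ~0.000100%
print(f"Within 100 million years: {prob_within(100_000_000):.1%}")  # ~63.2%
print(f"Within 500 million years: {prob_within(500_000_000):.1%}")  # ~99.3%
```

The per-century risk is essentially invisible while the long-horizon risk is dominant, which is why a strategy tuned only to the next 100 years keeps failing the turkey test.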

We must come to better terms with low probability/high consequence events over time and appropriately build systems against them.

I made semi-public declarations that reinforced my internal narrative.

I put this one last, but I think it is the proverbial nail in the coffin; and writing this article is an attempt to undo my muddled thinking.

In the current environment it is both easy and compelling to make forceful public declarations. We tie our ego, identity and credibility to those statements, and once made, they are nearly impossible to “undo.” Thus when we find we have made a mistake, we are confronted with a choice: “double down” by redirecting, defending the absurd or ignoring what we said, or correct course by admitting the mistake, apologizing to those we may have harmed and restating our case. It is far more convenient (and easier) to just never look for the mistake in the first place.

Also, the dominant communication technology is designed to prevent us from identifying and correcting mistakes in our thinking.

Both Twitter and Facebook are great at getting us to write definitive statements to limitless audiences with confidence and vigor. We are encouraged to do so quickly, and often in an emotionally hot state. We get little boosts of ego seeing people “like,” “share” and “reply” to our deep insights and important statements.

Our audiences have also been “selected” to ensure that we find substantial agreement, and the product designers give us substantial powers to avoid disagreement. It’s worth noting that Twitter and Facebook make it easy to “block” things we don’t like but nearly impossible to “invite” things that disagree with us. This is product design optimized to ensure that we feel really good about what we say all the time and are protected from discomforting and conflicting views. It gives us a sense of control and power over the world, and feeds our belief that sane people agree with us and those who disagree must be crazy. If Nassim Taleb calls us a fucking idiot, we can mute him; but we shouldn’t, even if (or perhaps precisely because) he is surly and aggressive.

It doesn’t mean we should believe him all the time either. Rather, we need to constantly maintain a disposition of seeking out contrary points of view and looking at them deeply; and we must do this with ever greater force to counter the pressures that communication technologists’ product designs subconsciously impose on us.

How did I come around?

I came around to the reality of the risk near the last week of February. All of a sudden I felt the current that others who were sounding alarm bells had been feeling.

I started reading Nassim Taleb’s thoughts going back to the start of the year. I listened to Bill Gates’ TED talk on the danger of pandemics (as well as what he was saying at the time). I also started looking at what the experts in epidemiology and statistics (those focused on rare events) were saying.

It was immediately clear to me that COVID-19 was not “just the flu,” was materially different from SARS and MERS, and was in fact something that could become a substantial global threat. It was also clear that precisely because people were not treating it seriously, the threat was exponentially greater (i.e. the Turkey problem).

I told my wife that I thought we needed to cancel all the kids’ lessons and play dates (we home school, so school itself wasn’t a factor). She was very surprised, because just a day or two before I had been talking about the locust plague, remember? Fortunately I have a great spouse who listened and went along with me since I felt so strongly about it. We had just baked a mango cake to share with lots of people. Her thinking was: “So because you have this new anxiety that there’s a small chance this disease could be bad, we should cancel on everyone?” My thinking was: “Because we want to save a cake, we should risk a small chance of our family getting very sick and perhaps dying in what might be a catastrophic pandemic.” Of course this is overly simplistic, but it personalizes the larger dialog and shows how difficult it is to practically apply certain mental models.

As I write this California has 2,266 cases and 42 deaths. Every year heart disease and stroke kill 15 million people globally. It doesn’t seem like much by comparison.

But we must look at the potential outcomes and apply constructive paranoia. If 80% of the population gets this virus and it has a comparatively low death rate of 1%, that is roughly 56 million people, making it by far the biggest single killer. We don’t know what the numbers will be, but that is not an argument to wait until we know to take action. Small changes to those numbers make the possible outcomes anywhere from non-events to devastating. Furthermore, if this virus (or ones like it) comes back 25 times in the next 200 years, it will have a substantial impact on our lives and those of our children and grandchildren. Additionally, we have the means to prevent this from happening, and the sooner we react in a “constructively paranoid” way, the less costly the prevention will be.
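To show where that 56 million comes from and how sensitive it is to its inputs, here is a hedged back-of-the-envelope sketch (roughly 7 billion people, with attack and fatality rates that are illustrative assumptions, not forecasts):

```python
# Back-of-the-envelope sensitivity check for the "80% infected, 1% fatality" figure.
# All inputs are illustrative assumptions, not forecasts.
world_population = 7_000_000_000

def deaths(attack_rate: float, fatality_rate: float) -> float:
    return world_population * attack_rate * fatality_rate

print(f"{deaths(0.80, 0.01):>13,.0f}")   # 56,000,000  -- the figure in the text
print(f"{deaths(0.20, 0.001):>13,.0f}")  # 1,400,000   -- far smaller, though still serious
print(f"{deaths(0.80, 0.03):>13,.0f}")   # 168,000,000 -- catastrophic
```

Small shifts in either rate move the outcome by orders of magnitude, which is exactly why waiting for certainty is itself a bet.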

The New Guinea highland people figured out through painful experience that if you’re going to camp under a tree every night for 60 years, you better make damn sure it’s not one that’s going to fall, even if the odds are slim. We can learn from them.

We are imperfect beings, challenged by the limits of our evolutionary heritage and designed for problems that are substantially different than the ones we face today.

It is enormously challenging to confront our imperfections and weaknesses and learn from them… and we all have them.

Me.
You.
Everyone.
