Covid Considerations from Santa Fe

Complexity Scientists on the COVID-19 Pandemic

Nicholas Teague
From the Diaries of John Henry
17 min read · Apr 5, 2022


Nature photography — reflections upon a lake

This blog has been largely silent on the pandemic for these two turbulent years. It is an understatement to say that a lot of brain power across all walks of public and private research has been devoted to the field, and I didn't feel I had much of material value to contribute outside of practices related to data science software, which has been my primary focus during this period. Better to reserve speech for moments of clarity and unique insights channeled to appropriate parties; that is how you manage your signal to noise ratio. The most accessible precautions remain behavioral changes like reducing the frequency and scale of large gatherings, personal protective measures like mask wearing and hand washing, and most importantly self-isolating, getting tested, and seeking professional counsel in cases of emergent symptoms. Other than staying current on vaccine shots, there were not many more evidence-based actionable steps I could counsel, and these practices are by now well known to most, even if not followed as widely as they could be.

With current events taking a turn toward international conflict in eastern Europe, there has recently been a noticeable downplaying of the pandemic within mainstream media broadcasts, which is not wholly unreasonable given the atrocities and war crimes being systematically committed by an unprovoked aggressor. Within that context, we're now amid the staged rollout of a second booster shot by vaccine developers, each iteration homing in on emerging variants of increased virulence. It is a horse race between engineering and evolution, with the stakes being the trajectory of pretty much everything.

Everyone please stay current on your booster shots. It is important.

Santa Fe Institute Logo

For those unacquainted, the Santa Fe Institute is a private multi-disciplinary research center housed in the foothills of New Mexico, holding in its charter the exploration of what has been vaguely termed "complexity theory", which more concretely refers to emergent system dynamics that have been found to arise in various kinds of boundary interactions, or even at different scales of otherwise uniform compositions. Even though these emergent dynamics often have less established vocabulary than the concrete equations of traditional physics, understanding the character of their time evolutions can often be approached through framings like coarse-grained renormalizations or multi-agent computer simulations. These are among the toolkits that have been established for systems ranging from the physical (e.g. turbulent fluid dynamics), to the biological (e.g. the flight of flocks of birds), to the economic (e.g. market dynamics), and the institute has no shortage of divergent branches of exploration, inviting visiting professors from around the country to consider domains of their specialty under the lens of the field.

It was without hesitation that I picked up the recently published collection of interview transcripts and lectures pooling the attention of their community toward applications in pandemic response: The Complex Alternative: Complexity Scientists on the COVID-19 Pandemic. It is a collection geared more toward establishing a scientific basis for policy than the granular nuts and bolts of decisions, basically climbing a few rungs up the ladder of abstraction to questions like the boundaries of knowledge, the unstated assumptions of models, and where other channels of research may fall short. A higher ground. With a beautiful paperback binding and artwork throughout, the collection is easily suitable for a primary bookcase, or you can go the cheap route and pick up the whole thing on Kindle ($2.99 last I checked).

Book jacket: The Complex Alternative
The Complex Alternative — David Krakauer & Geoffrey West

It was not an intentional exercise, more a force of habit, but after completing the 700 pages I found I had left in my tracks a handful of folded-corner bookmarks, which I think may reveal more about my diverse interests than any ranking of importance or the like. Still, not wanting a good collection of highlights to go to waste, the following excerpts are merely that: a collection of highlights that for whatever reason caught my eye at first read-through. The only thing stopping me from just copying and pasting them verbatim to these pages is a concern about crossing some invisible line on copyright (the two of us have a tenuous relationship), so in the interest of fair use I will provide a few points of commentary alongside the excerpts. Oh, and you know, a little music and whatnot to liven things up a bit. Yeah, so without further ado.

Higher Ground — The Blind Boys of Alabama

Although I do not have visibility into the implementation in practice, I expect many of the policy decisions by our political and health agency leadership are centered around goal setting and policies targeting selected key performance metrics. These metrics could include the medical (medication availability, hospital utilization, death rate), economic impacts (employment rate, national debt, inflation), or even the less practical, like politically motivated public perceptions and popularity considerations. A fundamental challenge for any goal-targeted policy is the need to balance tradeoffs between adjacent metrics. No action is performed in isolation; there will be feedback and impacts to adjacent factors. When faced with environments of the highest complexity, like those trying to balance an economy and public health, one way to prioritize measures is by identifying those metrics that sit higher up the chain of causality than other considered targets, where the causality chain refers to which metrics are most influential on those surrounding them while remaining most independent of variations in the reverse direction.

In the context of the pandemic, the single most driving metric for all the surrounding ones must surely be the R0 statistic, which refers to the average number of new infections resulting from a single infected individual, and which is partly a function of virus strain properties and immunities established from prior infection or vaccination, but also of environmental factors and patient behavioral characteristics. (R0 > 1 suggests a pandemic growing exponentially, while R0 < 1 is needed for the progression to fade naturally.) The beauty of targeting the R0 statistic is partly that improvements are basically universally beneficial to all surrounding metrics. Outside of some theoretical attempt to quickly establish herd immunity, I can't think of any scenario where a higher R0 value would be desirable.
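To make the threshold concrete, here is a minimal Python sketch (my own illustration, not from the book; parameter values are arbitrary) treating each generation of infection as multiplying the prior generation's expected case count by R0:

```python
def expected_cases(r0, generations, seed_cases=10):
    """Expected case count per generation of infection, each
    generation multiplying the last by the reproduction number."""
    cases = [seed_cases]
    for _ in range(generations):
        cases.append(cases[-1] * r0)
    return cases

print(expected_cases(1.3, 10))  # R0 > 1: exponential growth
print(expected_cases(0.8, 10))  # R0 < 1: progression fades out
```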

However, the challenge of solely targeting R0 as a key performance metric is the tractability of the measure in aggregate. Consider that R0 describes the parameter of an exponential growth process, such that estimating it is comparable to deriving statistics for a fat-tailed distribution: the higher the R0 value, the fatter the tail, and the more samples needed for estimation. Fat-tailed distributions are less tractable than thin-tailed distributions since estimates of their aggregate statistics will likely be highly impacted by a few, or even one, outlier samples. Further, the value of R0 realized from an individual infection is not just an aggregation of statistics of factors like transmissivity and preventative steps; in practice each of those factors will in effect be sampled from some distribution. These underlying sources of randomness, coupled with the extreme sensitivity of aggregate pandemic duration and intensity to small variations in the realized aggregate R0, mean that forecasting long-term trajectories is nearly intractable. At the end of the epidemic, the same mechanics and governing mechanisms could have produced outbreak sizes varying by orders of magnitude, owing simply to the uncertainty bands.
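The point about orders-of-magnitude variation is easy to reproduce in simulation. Below is a minimal sketch (my own, not Redner's; all parameters are arbitrary assumptions) of a stochastic branching process: every run uses identical mechanics and the same R0, yet some outbreaks fizzle at a few dozen cases while others explode to the cap:

```python
import numpy as np

rng = np.random.default_rng(42)

def outbreak_size(r0, seed_cases=5, cap=100_000):
    """Total infections from one stochastic branching-process run,
    with Poisson-distributed secondary infections of mean r0."""
    total = active = seed_cases
    while active and total < cap:
        active = int(rng.poisson(r0, size=active).sum())
        total += active
    return total

# Same R0, same governing mechanisms, ten repeated runs:
print(sorted(outbreak_size(r0=1.1) for _ in range(10)))
```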

This unpredictability is intrinsic to epidemic dynamics and not indicative of shortcomings in modeling. In short, forecasting ambiguity is unavoidable in exponential growth processes that underlie epidemics. — Sidney Redner, Santa Fe Institute

An easy misconception is that R0 is just some global figure that can be quoted for a state or country. The dynamics of transmission will vary considerably between local regions or demographic profiles owing to population density, behavioral, and economic factors. If all we are doing is targeting R0 on a macro scale, we could be exposed to wide fluctuations across the population, with a flareup in one region just waiting to reignite an adjacent one under better control. Possibly the best way to selectively target flareups is to enforce segregation upon occurrence, though this has not proven a very palatable policy measure in the US given the appetite for personal liberties to which the populace is accustomed.

…it can be misleading to look at statewide or national averages and celebrate if R0 seems to be falling below 1. The epidemic could still be raging in particular places or among particular groups. — Cristopher Moore, Santa Fe Institute
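A toy illustration of how aggregation can mislead, with hypothetical numbers of my own choosing:

```python
# Hypothetical regions: (R0, population). The weighted average looks
# comfortably under control while region B is still growing.
regions = {"A": (0.7, 9_000_000), "B": (1.4, 1_000_000)}

total_pop = sum(pop for _, pop in regions.values())
avg_r0 = sum(r0 * pop for r0, pop in regions.values()) / total_pop
print(f"population-weighted R0: {avg_r0:.2f}")  # 0.77
# Yet region B's epidemic keeps compounding, and travel between the
# regions can eventually reseed region A.
```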

If we want to make R0 more tractable, and hence more easily forecast and controlled, the only way to do that is to find some way to clip exposure to tail events. Just because superspreader events are the exception and not the rule does not preclude them from being the most influential instances for our ability to understand future trajectory dynamics. The randomness of superspreader events does not originate only from viral load properties; it is also centrally impacted by the circumstances surrounding exposure density, and thus can be influenced by policy measures or other interventions.

Much of the coverage of COVID-19 talks about R0, the average number of people each sick person infects. If R0 is bigger than 1, cases grow exponentially, and an epidemic spreads across the population. But if we can keep R0 below 1, we can limit the disease to isolated outbreaks and keep it under control. … The tail of large events gets even heavier if we add superspreading. We often talk of “superspreaders” as individuals with higher viral loads, or who by choice or necessity interact with many others. But it’s more accurate to talk about superspreading events and situations… — Cristopher Moore, Santa Fe Institute
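To see why it is events rather than individuals that dominate, here is a sketch using the negative binomial offspring distribution commonly used to model superspreading (my illustration; the mean and dispersion values are assumptions chosen for effect, not measured COVID-19 parameters). With a heavy tail, a small slice of cases accounts for most transmission, and clipping that tail, say by capping exposure density, pulls the average down:

```python
import numpy as np

rng = np.random.default_rng(0)

# Secondary infections per case: negative binomial with mean r0 and
# dispersion k; small k concentrates transmission into rare events.
r0, k, n_cases = 2.5, 0.1, 100_000
offspring = rng.negative_binomial(n=k, p=k / (k + r0), size=n_cases)

offspring.sort()
top_share = offspring[-n_cases // 10:].sum() / offspring.sum()
print(f"transmission from top 10% of cases: {top_share:.0%}")

# Clipping tail exposure (a stand-in for capping event sizes):
print("mean offspring, uncapped:", offspring.mean())
print("mean offspring, capped at 10:", np.minimum(offspring, 10).mean())
```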

Tractability of statistics may help with forecasting, but that on its own will not open the door to intentional steering of trajectories. Complex systems require some degree of humility about what can be achieved and how much control we may have over our environment, even with limitless resources.

…In the case of human social systems, there is increasingly an additional complexity. As better microscopic data on behavior become available, individuals and groups in human systems gain both the capacity to compute global information and greater power to intervene in outcomes. However, it is critical to realize that, in complex systems, greater capacity for intervention does not necessarily imply increased capacity to have a desired effect. Causality in complex systems, even with good microscopic data, is notoriously difficult. Reasons include nonstationarity, complex couplings, and concordant non-linearity, which can lead to heavy-tailed distributions and the formation of spurious associations. …even if substantial resources can be marshaled towards a given end, there is no guarantee that specific end will be achieved. — Jessica Flack, Santa Fe Institute

Part of the challenge of making targeted changes to behavioral patterns at scale is that they compete with locked-in social norms. We have all been conditioned by our upbringings and lifetimes of consistency to follow certain routines and conventions. It will always be much easier to facilitate changes that are tangent to such lock-ins than a full-scale reversal; otherwise the energy needed to promote them may be considerably higher than one might expect given the life or death stakes.

One of the hallmarks of systems with multiple equilibria is path-dependence, meaning it is far easier to move in one direction than another. … The social habits we tend to see as either the fabric of society or unintended corollaries of life — gathering at high density, shaking hands as a greeting, traveling and interacting when infectious — have become established as social norms. Path-dependence tells us that far more energy needs to be invested in campaigns to eliminate these habits than is required to perpetuate them. — David Krakauer & Geoffrey West, Santa Fe Institute

Convincing others to believe a narrative and change their behavior has psychological considerations beyond simple clarity of messaging. It requires tact and framing in a manner aligned with pre-existing belief patterns. Such patterns may vary across demographic profiles. There is no single best message for everyone.

Gloom and doom is not a very stimulating thing for actually inviting action, so to say. There is lots of really good psychological research about that. … if you let people read a piece about climate change that would be catastrophic for humans, and you ask them before and after if they believe that climate change is actually true, after reading the story, more of them think that climate change is actually not true at all. So it has this negative effect on the willingness to face it. But if you add one paragraph saying, “But if we do this and this, everything is under control,” then it does not have this effect. So I think it’s really important, if we want to communicate our thinking on this, that we don’t stop with the gloom and doom but also at least make a beginning of the next step. — Marten Scheffer, Wageningen University; Santa Fe Institute

There are also social dynamics that govern our belief systems; even in cases of strong evidence, if someone's identity has become tied to questions like mask wearing or political alignment, other types of messaging tactics may be required to reach them.

…we scientists tend to believe that if the evidence says something, then you come to some conclusions, and that’s more or less how the brain works. But the brain has not evolved to help individual scientists find the truth. It has evolved to help us live in a social network. As a consequence, it is social to the core. Even reasoning is social… It evolved to justify our actions to others and to persuade others. People evolve beliefs about the nature of the world out there and their role in it. These beliefs are not just personal: they are shared, and sharing creates a sense of belonging. Beliefs are not just there to be right; they are there, in part, to belong, to be part of something bigger… — Ricardo Hausmann, Santa Fe Institute

One accessible variable that differs across living conditions is population density, as contrasted between rural and urban population centers. It is a fundamental benefit of living in low-density regions that disease progression will proceed at lower rates of intensity. The challenge is that there are no boundaries in this country between these different living conditions. People travel and intermingle, so progress made in rural environments may not be sustained when cross-mingling with dense population centers is common. The dynamics are thus governed by the worst-case environment.

A doubling in city size leads to more than twofold increase in variables including (rates of) disease. As a result, not only are there systematically more disease cases in larger cities but, equally important, their growth rate, like all socioeconomic urban phenomena, increases systematically faster. If the number of cases increases exponentially with time, then the rate parameter is predicted to systematically increase with city size. Consequently, a city of a million people will double the number of cases in approximately half the time as a city of 10,000. Cities are where the trade-offs are most acutely felt, and where the costs of trade-offs are most extreme. A means of addressing this trade-off is to invest heavily in urban infrastructure that can compensate for reduced physical contact. — David Krakauer & Geoffrey West, Santa Fe Institute
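That doubling-time claim can be recovered from the scaling relation itself: if socioeconomic outputs scale superlinearly with city population N as N^β with β ≈ 1.15 (the exponent commonly cited in the urban scaling literature), then per-capita rates, including epidemic growth rates, scale as N^(β − 1). A quick sketch under that assumed exponent:

```python
import math

# Assumed superlinear scaling exponent for socioeconomic urban
# phenomena; growth *rates* then scale as N ** (beta - 1).
beta = 1.15

def rate(n):
    """Per-capita growth rate, assumed to scale as N ** (beta - 1)."""
    return n ** (beta - 1)

# Exponential doubling time is ln(2) / rate, so compare a city of
# 1,000,000 against a city of 10,000:
ratio = rate(1_000_000) / rate(10_000)
print(f"growth-rate ratio: {ratio:.2f}")          # ~2.0x faster
print(f"relative doubling time: {1 / ratio:.2f}")  # ~half the time
```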

Such reversion to the worst-case scenario resembles another impactful feature: origination. It remains a likely explanation for this and future outbreaks that they originate from cross-species contact, particularly with bat species. The reason bat species are such a concern, besides the mammalian characteristics allowing for cross-species contamination, is their high-density living conditions and short life spans, which together make a lollapalooza for evolutionary forces, especially when compared to the much slower rate at which human evolution can respond.

The implication of frequent zoonotic interactions is that the primary force driving the evolution of a pathogen is the shorter-lived host, which means that humans need to adapt to infection at a rate proportional to the evolution of the virus in a bat. It is the shorter-lived species that calls the shots on the progress of a disease. — David Krakauer & Geoffrey West, Santa Fe Institute

The future interplay across species will certainly need to be thought through if the origination of covid is eventually confirmed to be through this channel. Consider that our biosphere is itself a complex system, with dynamics coupled across species and regions. We don't know what the end result would be without these bat populations; they may play a critical role for e.g. insect populations and their impact on agriculture. The risk is that if we intervene too heavy-handedly we might do more damage than we sought to prevent.

That brings me to another line of work we’re doing: it is trying to find indicators of resilience, indicators that show you when a system becomes brittle, even if you don’t know that system. The whole idea is based on the phenomenon that, if a system becomes more and more labile or more and more fragile — for the mathematicians, if it is approaching zero eigenvalue bifurcation — then you expect that when you perturb it a little bit, the recovery rate becomes slower and slower, and when the recovery rate approaches zero, then you’re close to the criticality. — Marten Scheffer, Wageningen University; Santa Fe Institute
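Scheffer's recovery-rate indicator is easy to demonstrate on a toy system. A minimal sketch (my own, using an AR(1) process with arbitrary parameters): as the recovery rate shrinks toward zero, the system holds onto perturbations longer, and the lag-1 autocorrelation of its fluctuations climbs toward 1, a warning of approaching criticality:

```python
import numpy as np

rng = np.random.default_rng(1)

def lag1_autocorr(recovery, steps=20_000):
    """Noisy AR(1) system: each step decays a fraction `recovery`
    of the displacement back toward equilibrium, plus fresh noise."""
    x = np.zeros(steps)
    noise = rng.normal(0, 1, size=steps)
    for t in range(1, steps):
        x[t] = (1 - recovery) * x[t - 1] + noise[t]
    return np.corrcoef(x[:-1], x[1:])[0, 1]

# Slower recovery -> longer memory of perturbations:
for recovery in (0.9, 0.5, 0.05):
    print(recovery, round(lag1_autocorr(recovery), 3))
```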

Retaining societal robustness to shocks like a pandemic means we need to build in redundancies and be wary of over-optimizing. Optimization is itself a kind of system fragilizer, diverting resilience capacity toward known sources of perturbation while neglecting unknown unknowns, the proverbial black swans.

(Civilizations) collapse because as they become more complex, they become brittle and they fall down… they’re robust to perturbations, to shocks, that they see with some frequency, but they become fragile to perturbations that they never see because they’re constantly tuning themselves to optimize, and optimization drives that. — JD Farmer, University of Oxford; Santa Fe Institute

Evaluating and implementing non-pharmaceutical interventions is often tied to epidemic trajectory forecasting models, which by their nature are inherently uncertain. Given the precision that has been achieved for natural forecasts like weather patterns, it may be tempting to project that same trust in models onto this domain. An important distinction to keep in mind is that natural systems lack the feedback loops and interactions between scales that arise in human systems of this nature. And just because a forecast does not come to fruition with the precision initially expected does not mean it was faulty at the time of origination. Human-scale systems have transient distributions that are impacted by knowledge of the very expectation that may serve as a basis for targeted policy.

People conflate forecasting epidemiological models with forecasting the weather, but epidemiology involves human behavior and feedback and you change the situation when human behavior changes. If you say it is going to rain tomorrow and everyone carries an umbrella, that doesn’t change the fact that it’s going to rain. If you say there is going to be an outbreak tomorrow and everyone stays home from work, there’s no longer an outbreak, your prediction is wrong, and everybody loses faith in your model. So these things are not the same. — Caroline Buckee, Harvard University

A common fallacy when considering such models and their forecasts is to overweight the center of the output distribution and treat it like a fully precise estimate. In machine learning practice, every application of inference produces a distribution of uncertainty bands surrounding the final reported value, sometimes not even symmetric. Just because a forecast is reported with decimal-point precision does not mean the model has anywhere near that level of certainty; such certainty can only arise from the nature of the inspected application, the amount of training data, and the quality of the model. It is best reserved as a responsibility of the machine learning practitioner to quantify and communicate the uncertainty that can be anticipated to surround a model, and such bands of uncertainty can only be expected to widen as the model and its basis of training age.
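For a flavor of how a practitioner might report those bands rather than a single number, here is a minimal sketch (toy data and arbitrary values of my own) using a bootstrap ensemble over a simple linear fit; real practice would also fold in observation noise and model-choice uncertainty:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy data: a noisy linear relationship.
x = rng.uniform(0, 10, size=200)
y = 2.0 * x + rng.normal(0, 3, size=200)

# Bootstrap ensemble: refit on resampled data, collect predictions.
x_new, preds = 5.0, []
for _ in range(1_000):
    idx = rng.integers(0, len(x), size=len(x))
    slope, intercept = np.polyfit(x[idx], y[idx], deg=1)
    preds.append(slope * x_new + intercept)

lo, mid, hi = np.percentile(preds, [5, 50, 95])
print(f"forecast: {mid:.2f} (90% band: {lo:.2f} to {hi:.2f})")
```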

…the great flaw of AI is that it builds its models of the future to a certain degree on what’s happened in the past, and that both embeds patterns and potential prejudices into the future but also runs a risk of potentially being upended by a sudden step change in conditions. (Which of course AI is supposed to be able to react to, but that’s one of the questions in any kind of model construction.)… How do we as human beings have an intelligent debate about uncertainty and about the fact that prediction is based around probabilities, not about certainty? … The Bank of England tried a few years ago to move from giving precise inflation forecasts, which give absolute numbers, to basically giving a vision of the future with a core and trailing fans, like the fan of an aircraft exhaust (“fan charts”), which showed different probabilities and potential outcomes, which is incredibly obvious and normal to anyone involved in statistics or modeling or predictions and stuff like that. But they were trying to communicate that to the wider public about levels of uncertainty, the error bar. … One of the attractions of AI is you have this magical divining machine — which is not that different from all the other divination tools that people have used in societies around the world — which gives you a prediction presented to the nearest decimal point… — Gillian Tett, Financial Times

It is another easy fallacy to direct our attention and resources toward known regions of interplay primarily because of familiarity and historic successes in adjacent applications. Consider the story of the man who loses his car keys at night and searches for them in vain under the street light. When asked why he doesn’t look further down the road, he replies, “it is too dark to see over there.” With the amount of brain power devoted to managing this pandemic, one can almost be assured that if you are not devoting attention far into uncharted territory, you are merely duplicating the work of others.

If you want to hide something in broad daylight today, then the easiest way to do it is not to create a James Bond-style plot and bury it in the ground somewhere; it’s simply to wrap it up in acronyms and jargon and mathematics, and then it will be hidden in plain sight… in August 2007, an executive from PIMCO stood up on stage at Jackson Hole and used a phrase for the first time, which no one had ever heard of but which immediately caught on, which was “shadow banking”. And at a stroke, the world had a way to visualize and imagine what was happening in the financial system. It wasn’t acronyms any more. It was called shadow banking. They had what was basically a Copernican mental revolution. They went from thinking that all of this stuff… could be chucked into the margin, to realizing it was actually the center of finance and was actually driving the financial crisis. Shadow banking was essentially not so much the tail wagging the dog, it was the dog… — Gillian Tett, Financial Times

The community of the Santa Fe Institute is working to consider each of these questions, and importantly, their implications for how to govern in that context. I think the most important takeaway of the entire read was the picture of what a channel of governance optimized for complex systems might look like, and I certainly hope we see more along these lines from the institute in the future.

My colleagues and I have argued that one way forward is to develop a science of emergent engineering that emphasizes process over outcome and harnesses collective intelligence to identify problems on the horizon and temporary solutions to them as needed. This is an extremely preliminary idea, the crux of which is that we design our systems to have levers at multiple scales. Rather than controlling outcomes, the focus is on modulating fluidity by building systems with levers that facilitate or impede transitions. At the microscopic scale, levers include nudging and other forms of mechanism design to change behavioral strategies with the goal of obtaining “good-enough” collective output to solve the challenge at hand. At the mesoscale, we build levers that allow conditional changes in system states or regime shifts given crowd-sourced perception of environmental signals. … we might design levers that allow us to shift (temporarily, ideally) between a regime known to work in a relatively predictable environment and, when the environment becomes uncertain or demands innovation, a regime that incentivizes exploratory behavior. — Jessica Flack, Santa Fe Institute

Books that were referenced here or otherwise inspired this post:

The Complex Alternative — David Krakauer & Geoffrey West


As an Amazon Associate I earn from qualifying purchases.

For further readings please check out the Table of Contents, Book Recommendations, and Music Recommendations. For more on Automunge: automunge.com
