Ritual and Non-Human Intelligences, or Towards a Daemonology of the Future

Ian Matthew Miller
Published in Roots and Branches
10 min read · Mar 22, 2016

There has been a lot of talk of Artificial Intelligence in the popular press recently. A lot of this stems from the 4–1 defeat of Lee Sedol — arguably the world’s best (human) Go player — by AlphaGo, an AI built by Google. Go was considered a final frontier for human-AI competitions, a type of game almost perfectly designed to play to the strengths of human intelligence and the weaknesses of computing. It is a game of so many possibilities that it is physically impossible for a computer to run through them all, at least with present computing technology (there are more possible board configurations than atoms in the observable universe), so play relies on pattern recognition and intuition — things that computers have (until quite recently) been very bad at, but that come naturally to humans (and to animals more generally).
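The scale of that claim is easy to check with a back-of-the-envelope computation (my own illustration, not from the original post): each of Go's 361 points can be empty, black, or white, giving an upper bound of 3^361 board configurations — on the order of 10^172, dwarfing the roughly 10^80 atoms commonly estimated for the observable universe.

```python
# Back-of-the-envelope: why Go cannot be brute-forced.
# Each of the 361 points is empty, black, or white -- an upper
# bound (ignoring legality rules) on board configurations.
configurations = 3 ** 361
atoms_in_universe = 10 ** 80  # common rough estimate

digits = len(str(configurations))  # configurations ~ 10^(digits - 1)
print(f"3^361 has {digits} digits; atoms in the universe ~ 10^80")
print(configurations > atoms_in_universe)
```

Even this undercounts the difficulty: the number of possible *games* (sequences of moves) is far larger still than the number of positions.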

Computers beat the best humans at chess in the 90s, largely through sheer computational power, but it was expected to take at least another decade before they could beat humans at Go. Commentators have been shocked at both the speed with which deep learning was able to catch up with human intuition, and the way in which the AI played, which was called both inhuman and beautiful. But beating humans at games is essentially just a test of the capabilities of AI; it is not really the focus of most efforts, hopes, and fears. AI is making its way into search, robotics, and lots of other areas as well — areas with substantially more potential to change the human-technology experience in ways both good and ill. These possibilities have prompted technologists like Elon Musk to take steps to “save the world” from the potential for destructive AIs, or AIs used for destructive purposes, largely by spreading the technology as widely as possible.

The current line of thinking about AI is that development is a matter of finding more good technologists to develop the spinal column — the hardware and software of machine cognition — and more good data — the key stimuli to develop the neural networks that make deep learning possible. There are some roles for science fiction writers, philosophers of mind, and futurists of various stripes to add comments or contributions to the development of AI, but this has been construed as a largely technological endeavor for STEM types.

This is an area where I beg to differ. While cognitive science, computation, and hardware development are the necessary backbone for developing AI, I would argue that they are not the right tools for thinking about the possibilities, perils, or ways of controlling AI. Instead, I will suggest that history, philosophy, and religious studies — as well as biology — are the areas where we have done the most thinking about non-human intelligence, and they provide us with a distinct set of tools for approaching the particular problems of non-human agents.

Since the dawn of cognitively modern humans, we have been confronted with non-human forces that have the potential to massively impact our lives. Some of these are relatively familiar and behave in fairly predictable ways. Many species of plant and animal life display their own forms of agency, but respond in fairly regular ways to human intervention. The ones that are most responsive to us were “domesticated.” Domestication was a process that changed the nature of non-human flora and fauna — and the human interlopers — at genetic, epigenetic, and behavioral levels. These domestications are one model for dealing with AIs.

But many of the most important non-human forces in the world — including weather, fire, and disease — are extremely erratic in their behavior. So too are many of the most important social forces — economies, polities, whole societies — composed of many humans, but exhibiting behaviors that cannot be explained as the simple sum of their component parts. These non-human agents exhibit what Benoit Mandelbrot called “higher states of randomness” — their long-term baselines or medians are poor predictors of their extreme behavior (I’m currently obsessed with this concept). Or what others have called “chaotic behavior” — sensitive dependence on initial conditions that makes it impossible to fully predict future behavior from initial measurements. There are computational approaches based on the mathematical description of chaos and higher states of randomness. But there are also much older interventions.
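Both concepts can be made concrete with a toy numerical sketch (my own illustration, not the author's): a heavy-tailed Pareto sample, where the median badly underestimates the extremes, and the logistic map, where a difference of one part in a billion in the starting point soon swamps any prediction.

```python
import random
import statistics

random.seed(0)  # fixed seed so the run is reproducible

# "Higher states of randomness": in a heavy-tailed Pareto sample,
# the long-run median says almost nothing about the extremes.
sample = [random.paretovariate(1.1) for _ in range(100_000)]
median = statistics.median(sample)
extreme = max(sample)
print(f"median ~ {median:.2f}, max ~ {extreme:.0f}")  # the max dwarfs the median

# "Chaotic behavior": the logistic map x -> 4x(1 - x) shows sensitive
# dependence on initial conditions -- two trajectories that start a
# billionth apart soon bear no resemblance to one another.
a, b = 0.4, 0.4 + 1e-9
max_gap = 0.0
for _ in range(100):
    a, b = 4 * a * (1 - a), 4 * b * (1 - b)
    max_gap = max(max_gap, abs(a - b))
print(f"largest gap between trajectories: {max_gap:.3f}")
```

In the first case no amount of past observation pins down the next extreme; in the second, no finite precision of measurement pins down the future trajectory.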

Long before we developed mathematical models of non-human agents, we described them as spirits. Every early society recognized spirits representing the forces of weather, of fire, of plants and animals, of death and disease. As societies grew more complex, they developed complex pantheons of gods, not only representing non-human forces, but also epi-human or social forces: patron gods of cities, clans, professions. This served as a recognition of the powers of these forces over human lives, generally in ways outside of human control.

So let’s start here: it strikes me that the early human experience with artificial intelligence is likely to be a lot like the early human experience with other forms of non-human intelligence (or at least non-human agency) — higher animals, weather, fire, disease.

Furthermore, I think that modeling non-human agents as gods leads to useful interventions. Worship of these spirits was not just a recognition of their powers to affect the human experience. Ritual worship was a way of domesticating spirits, the same way that we domesticated plants and animals. In fact, it is probably better to speak of ritual as the mode through which humans domesticated both the flora and fauna of the physical world, and the ghosts and spirits of the metaphysical world. Let’s turn to ritual for a brief theoretical aside.

I have been reading some of Michael Puett’s work on ritual, and I think that it is very topical here. Puett is most interested in the use of ritual in disciplining relationships between humans, and between the human and ex-human (i.e. ghosts and spirits). In my understanding, he explains ritual as follows: in our interactions with each other, we recognize certain behaviors that tend to elicit positive responses; we repeat these behaviors in the hope of eliciting similar positive responses; oft-repeated behaviors ultimately become rituals that condition both our behavior and our outlook.

In classical Chinese thought there is a dichotomy between law (fa 法) and ritual (li 禮). This is perhaps not as clear a dichotomy as is often portrayed — rules penetrate the world of ritual, and the exercise of law is highly ritualized. Nonetheless, law and ritual are useful as ideal types. Law means a system of precise rules, supported in many cases by rewards and punishments. The problem with rules is their tendency to be both under-precise and over-precise. Even a long list of rules leaves out some circumstances, while other circumstances are treated in ways that are out of line with our expectations. This is, in fact, exactly the problem that algorithmic approaches run into — a list of rules, however complex, cannot account for every circumstance — and it is the problem that deep learning is supposed to resolve.
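The contrast can be sketched in code (a hypothetical toy of my own devising — the situations, names, and "precedents" are all invented for illustration): an explicit rule table answers precisely within its listed cases but falls silent on anything unanticipated, while a precedent-matching "intuition" always yields a default response, even in novel circumstances.

```python
# Law as an ideal type: a precise rule table, silent on unlisted cases.
RULES = {
    ("fire", "small"): "smother",
    ("fire", "large"): "evacuate",
    ("flood", "small"): "sandbag",
}

def rule_based(situation):
    return RULES.get(situation)  # None when no rule applies

# Ritual as an ideal type: fall back on the most similar precedent,
# yielding a default behavior even in unanticipated circumstances.
PRECEDENTS = [
    (("fire", "small"), "smother"),
    (("fire", "large"), "evacuate"),
    (("flood", "small"), "sandbag"),
    (("flood", "large"), "evacuate"),
]

def similarity(a, b):
    # count matching features between two situations
    return sum(x == y for x, y in zip(a, b))

def ritual_based(situation):
    best = max(PRECEDENTS, key=lambda p: similarity(p[0], situation))
    return best[1]

novel = ("earthquake", "large")
print(rule_based(novel))    # no rule fires: the law is under-precise
print(ritual_based(novel))  # nearest precedent still supplies a default
```

The ritual-style default may be wrong, of course — but it is a starting disposition that can be overridden and revised, which is precisely the adaptability the essay attributes to ritual over law.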

Ritual (as an ideal type, not as real-world ritual) is something very different: a system of norms governing outlooks and predispositions. Ritual governs the intuition that leads to decisions, rather than directly governing the decisions themselves. Ritual is thus a way of encoding behaviors that tend to elicit positive responses as the default behaviors, while leaving the possibility of adaptation, as well as the possibility of choosing a response that differs from the default. This is a possibility that is absent from law (as an ideal type). I will argue that this allows ritual to adapt — to learn and change through intrinsic processes — while law can only be modified by external processes.

In other words, ritual is essentially an evolutionary process. It starts with random variations in behavior. Unsuccessful behaviors are likely to be selected against, while successful behaviors are likely to be selected to continue. The most consistently successful behaviors become ritual norms while the most consistently unsuccessful behaviors become taboos. Yet because the mode of development is random, the behaviors that become ritual are not necessarily the best possible options. Ritual will tend to include a lot of junk — in particular, it includes vestigial behaviors that were once useful, but are no longer useful; and spandrel behaviors that have no particular utility, but are included as byproducts of other behaviors that do have use.
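This selection dynamic — including the accumulation of spandrels — can be simulated in a few lines (my own toy model, not the author's). Each "behavior" pairs a useful trait, which determines how positive a response it elicits, with a spandrel trait that has no effect on success but is carried along as a byproduct.

```python
import random

random.seed(42)  # fixed seed for a reproducible run

def success(behavior):
    useful, _spandrel = behavior
    return useful  # only the useful trait affects the response elicited

def evolve(generations=200, pop_size=100):
    # start from random variation in behavior
    pop = [(random.random(), random.random()) for _ in range(pop_size)]
    for _ in range(generations):
        # selection: behaviors eliciting better responses persist
        pop.sort(key=success, reverse=True)
        survivors = pop[: pop_size // 2]
        # variation: repeated behaviors drift slightly each generation
        children = [
            (min(1.0, max(0.0, u + random.gauss(0, 0.02))),
             min(1.0, max(0.0, s + random.gauss(0, 0.02))))
            for u, s in survivors
        ]
        pop = survivors + children
    return pop

pop = evolve()
avg_useful = sum(u for u, _ in pop) / len(pop)
avg_spandrel = sum(s for _, s in pop) / len(pop)
print(f"useful trait converges high: {avg_useful:.2f}")
print(f"spandrel trait merely drifts: {avg_spandrel:.2f}")
```

The useful trait is driven toward its maximum, while the spandrel trait simply random-walks wherever history takes it — an analogue of the "junk" that ritual accumulates alongside its genuinely adaptive content.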

Yet in Puett’s formulation, ritual does not just select the behaviors that tend to elicit the best responses, it further conditions us to attach positive responses to certain behaviors. Ritual shifts the behaviors of others by eliciting changes in their outlook; but it also shifts the orientation of the self. By conditioning both the performer of the ritual and the receiver of the ritual, it makes already-successful behaviors even more powerful. It both selects for effective behavior, and reinforces certain responses to this behavior.

At least that is how ritual works between two human agents.

I would extend this theorization to argue that ritual works similarly between humans and other biological agents. We domesticate plants and animals by choosing the behaviors that most effectively control the flora or fauna in question, and by reinforcing plant and animal behaviors that respond well to these stimuli. We can, in fact, observe the effects of domestication in genetics and behavior, at both the level of the species and the level of the individual. But it is the ritual intervention — the selection of the most successful human behavior — that precedes the reinforcement of certain traits in non-human agents.

In fact, I would argue that ritual works similarly between humans and non-biological agents as well. Fire was our first domestication — before plants and animals — and we domesticated it in the same way. First we encoded successful human behaviors as ritual. Ultimately, these rituals conditioned the non-human response as well. In the case of fire, we conditioned its response by gradually changing the availability of fuel in the landscape. Fire was (and is) still unpredictable, but both better human behavior and a changed environment make fire far easier to control. We once did this with climate as well, by choosing sites for human habitation that are not susceptible to the extremes of weather, and by transforming these sites to further dampen the effects of extreme weather. Pre-modern medicine was similar — selecting interventions that helped treat diseases, encoding them as ritual, and ultimately conditioning diseases to respond in predictable ways to these interventions.

We did this with social formations as well, creating rituals that select for useful (or at least non-counterproductive) individual interventions, and also conditioning collectives to respond in positive, predictable ways to these stimuli.

Even in cases where we do not believe that the non-human forces actually respond to our interventions, the rituals have positive effects by nudging our cognitive and emotional processes in positive ways. This, for example, is how we might think of death ritual. It doesn’t actually change the response of the dead, but it conditions us to have certain useful emotional processes in response to the passing on of a loved one. Puett calls this a process of domestication, whereby ghosts — out-of-control spirits — are turned into ancestors — spirits that we have ways of relating to.

If we extend the domestication metaphor (and I think it is actually more than a metaphor), early human societies turned the unruly spirits of nature into relatively controllable gods. This is not to say that weather, or fire, or wild animals, or diseases, always behaved in predictable or salutary ways. It is simply to say that rituals gave a means to reduce the possibilities of negative outcomes and increase the possibilities of positive ones. In the Chinese tradition, divination of the will of gods does not have “good” or “bad” outcomes, but “auspicious” (ji 吉) or “inauspicious” (xiong 凶) outcomes — possibilities, but not absolutes.

So how does this relate to AI? Quite simply, AIs are an unfamiliar and potentially powerful form of non-human intelligence, of non-human agency. They are likely to behave in ways that are unpredictable and scary. But we have dealt with unpredictable and scary non-human agents in the past. We tamed fire, climate, plants and animals through ritual — through selecting for human behaviors most likely to elicit positive responses; and through conditioning both our behaviors and those of these non-human agents. This did not prevent occasional disaster, but it limited the frequency and probability of disaster. It turned unruly demons into domesticated gods. Gods still struck us with lightning from time to time, but we at least had a theory of why — unlike demons that hurt us in ways that seemed entirely random and malicious.

Based on this line of logic, I think we can begin to create a daemonology of future non-human intelligences. We don’t know what exactly they will look like, but that doesn’t particularly matter. We can parse them into rough categories of behavior:

Demons — spirits that behave destructively and respond poorly or unpredictably to human interventions.

Gods — demons that have been conditioned to respond in relatively predictable and salutary ways to human ritual.

Ghosts — beings that have lost their original raisons d’être, yet continue to float in the ether attempting to pursue their previous tasks.

Ancestors — ghosts tamed by being ritually attached to the institutions they helped birth, even if their spirits no longer have immediate function.

Minor spirits — beings attached to particular places or functions that occasionally do unpredictable things.

Dragons — among the most powerful and unpredictable spirits, associated with basic functions of the environment. They occasionally lash out in dangerous ways but are otherwise neutral, uncaring about human lives, or even benevolent.

In all of this, the process of ritual is key to the task of taming non-human agents to respond well to human actions. It starts by selecting and conditioning ourselves to behave in useful ways. And if our prior experiences with non-human agents teach us anything, it is that these forces will themselves be conditioned to respond in more predictable and positive ways to these very behaviors, these rituals.

As a final note: this mode of ritual adaptation is somewhat empirical but it is not particularly scientific. It rests upon observation and selection of successful interventions, but does not build upon hypothesis-testing or repeatable experimentation. The theoretical implications of this distinction are far more than I want to (or know how to) get into here, although I may return at a future date.

Yet I do want to make a case for this type of empirical adaptation: chaotic phenomena and phenomena in higher states of randomness are not repeatable. In the same way that we cannot brute-force Go the way we can brute-force chess, exceedingly complex phenomena of the real world cannot be described by exact laws the way simple phenomena can. Complex phenomena cannot be repeated in the laboratory. At best, they can be approximated probabilistically based on simulation. This means that ritual adaptation is, at least initially, and perhaps with the aid of simulation, our best bet in responding to complex phenomena that present as non-human agents.
