The Eliza Effect

Trump is a Thermometer, not a Thermostat, and How Early AI Systems Modeled Politics

I was in a gospel choir in a past life, and the preacher made a memorable analogy: “We are not thermometers, we are thermostats.” To me, that meant that we aren’t confined to detecting the temperature; rather, as active agents in the world, we are meant to shift the climate. The novelty of Donald Trump, however, is less about being an agent of change. His ability to reveal certain mindsets and behaviors of our nation mimics the early Artificial Intelligence pursuits of the 1960s.

Politicians Barry Goldwater and Donald Trump

The Goldwater Machine

The term Artificial Intelligence (AI) was coined at Dartmouth College in 1956. In 1963, the optimistically titled book Computer Simulation of Personality collected academic papers by psychology theorists. Among those scholars was Robert P. Abelson, whose work has had a foundational impact on AI and Cognitive Science as well as Social Psychology and Political Science.

Abelson theorized that human reasoning is influenced by an additional dimension of factors directed by our emotions. He named his theory “Hot Cognition,” in contrast to Cold Cognition, in which our processing of information is independent of our feelings. You could say that Hot Cognition may be less objective, less factual, and less rational by comparison.

Abelson formalized Hot Cognition as a function of beliefs:

The theoretical problem may be posed as follows: Is it possible to specify a realistic model for attitude change and resistance to change in sufficient process detail so that a computer could simulate it? The major focus here is on cognitive processes, but I propose to explore some limited relations between cognition and affect. I would not want to claim a general theory of cognition and affect. However, within the context of attitudes and attitude changes, one might hope to develop a simulation model which would do for hot cognition what others have done for cold cognition.

One might even speak of attitudinal problem-solving, wherein the individual is confronted with a challenge to his belief system and the “problem” he must solve is, “What am I to believe now?” (Abelson, 1963)

These theories became the basis of what was known as the Goldwater Machine. Noah Wardrip-Fruin revisits this work in his book Expressive Processing, describing the Goldwater Machine from its inspiration:

The world seemed polarized to many and, within the United States, names like those of Adlai Stevenson and Barry Goldwater did not simply indicate prominent politicians with occasionally differing philosophies. Goldwater, the Republican nominee for president of the United States in 1964, was an emblematic believer in the idea that the world’s polarization was an inevitable result of a struggle between good and evil. (Wardrip-Fruin, 2009)

Toward simulating Hot Cognition, Abelson and his colleague J. Douglas Carroll continued to work on the powerful and steadfast aspects of our minds, namely ideology, rationalization, and bias. The diagram below is a model of human rationalization, drawn by Abelson in 1963 to caricature our desire to be “right” or “good.” Given a situation that contradicts our belief system, we are confronted with “the apparent necessity of changing one or more beliefs.” Our resistance to this change can be formalized as “rationalization.”

The rationalization mechanism, on the other hand, has three methods of dealing with upsetting statements — each of which represents a different way of denying the psychological responsibility of the actor for the action. They are:

1. by assigning prime responsibility for the action to another actor who controls the original actor;
2. by assuming the original action was an unintended consequence of some other action truly intended by the actor;
3. by assuming that the original action will set other events in motion ultimately leading to a more appropriate outcome. (Wardrip-Fruin, 2009)
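The three strategies above amount to a rule-based procedure: try each denial of responsibility in turn, and fall back to changing a belief only if all three fail. The following is a minimal sketch of that logic; the belief representation and key names (`controlled_by`, `intended_action`, `leads_to_good`) are hypothetical illustrations, not the original Goldwater Machine implementation.

```python
# A minimal, hypothetical sketch of Abelson's three rationalization
# strategies. The dictionary-based belief representation is an
# assumption for illustration, not the original system's data model.

def rationalize(actor, action, beliefs):
    """Try each strategy for denying the actor's responsibility;
    return a rationalization string, or None if none applies."""
    # 1. Assign prime responsibility to another actor who controls
    #    the original actor.
    controller = beliefs.get("controlled_by", {}).get(actor)
    if controller:
        return f"{controller} made {actor} do it."

    # 2. Assume the action was an unintended consequence of some
    #    other action truly intended by the actor.
    intended = beliefs.get("intended_action", {}).get((actor, action))
    if intended:
        return f"{actor} truly intended {intended}; {action} was unintended."

    # 3. Assume the action will set events in motion ultimately
    #    leading to a more appropriate outcome.
    outcome = beliefs.get("leads_to_good", {}).get(action)
    if outcome:
        return f"{action} will ultimately lead to {outcome}."

    return None  # no rationalization found; a belief must change


beliefs = {
    "controlled_by": {"senator": "party leadership"},
    "leads_to_good": {"the vote": "a stronger treaty"},
}
print(rationalize("senator", "the vote", beliefs))
# → party leadership made senator do it.
```

The ordering of the three checks is itself part of the caricature: the model always prefers deflecting responsibility over revising a belief, which is exactly the resistance to attitude change that Abelson set out to formalize.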

A diagram taken from Abelson’s 1963 essay on Hot Cognition

Many later AI systems reference the Goldwater Machine as an early example of computerized storytelling. Subsequent systems, like Jaime Carbonell’s POLITICS and Michael Mateas’s Terminal Time, were based on this work. Mateas summarized the Goldwater Machine’s function as follows:

The Goldwater Machine mimicked the responses of conservative presidential candidate Barry Goldwater to questions about the Cold War. (Mateas, 2000)

Donald Trump, like Goldwater, practices storytelling with conviction according to these ideological functions: the model’s simplicity is demonstrated by its few boxes and arrows, and its efficacy is apparent in how universally it can be applied.

This comparison isn’t meant to trivialize the reasoning abilities of conservatives, but to show that we are all susceptible to simple and predictable patterns of behavior, regardless of political affiliation. In working to preserve our belief systems, we create narratives that maintain that sense of security. Hot Cognition can therefore be seen as a glue that holds our beliefs in place, reinforced by our ability to tell satisfying stories to ourselves.

The Goldwater Machine’s goal was not to change the state of reality, but to readily exploit storytelling conventions. Donald Trump’s appeal lies as much, if not more, in his ability to reinforce the personal narratives of a large population of people. We can conclude the following: First, storytelling is foundational to our decision making. Second, our inability to see our own exploits stifles our ability to understand people who have vastly different stories. Finally, in building and having built these AI systems, we uncover the ways that we exploit ourselves through persuasion and propaganda.

As much as Donald Trump represents change, he more so represents the status quo. We are likely no more ignorant or racist than before; it’s the internet and social media that make us more likely to come across racist and bigoted beliefs. Rather than accuse and condemn the narratives that offend us, maybe we could figure out how to overcome the easy exploits of the oversimplified rhetorical models to which we adhere.



ELIZA was a chatbot developed in 1966. The ELIZA Effect is the tendency to unconsciously assume computer behaviors are analogous to human behaviors. Here you’ll find articles on Artificial Intelligence, Machine Learning, Believability, and Procedural Thinking.
