
Trump is a Thermometer, not a Thermostat, and How Early AI Systems Modeled Politics

Sherol Chen
Oct 25, 2016

I was in a gospel choir in a past life, and the preacher made a memorable analogy: “We are not thermometers, we are thermostats.” To me, that meant that we aren’t confined to detecting temperature; rather, as active agents in the world, we are meant to be climate shifters. The novelty of Donald Trump, however, is less about being an agent of change. His ability to reveal certain mindsets and behaviors of our nation mimics the early Artificial Intelligence pursuits of the 1960s.

Politicians Barry Goldwater and Donald Trump

The Goldwater Machine

The term Artificial Intelligence (AI) was coined at Dartmouth College in 1956. In 1963, a very optimistically titled book, “Computer Simulation of Personality,” collected academic papers by psychology theorists. Among those scholars was Robert P. Abelson, whose work has had foundational impact on both AI and Cognitive Science, as well as Social Psychology and Political Science.

Abelson theorized that human reasoning was influenced by an additional dimension of factors, directed by our emotions. He named his theory “Hot Cognition,” in contrast to Cold Cognition, where our processing of information is independent of our feelings. You could say that Hot Cognition may be less objective, less factual, and less rational in comparison.

Abelson formalized Hot Cognition as a function of beliefs.
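As a rough, hypothetical sketch of the idea (the concepts, affect scores, and blending rule below are invented for illustration and are not Abelson’s actual notation): under Cold Cognition a judgment tracks the evidence alone, while under Hot Cognition it is pulled toward the affect we already attach to the concepts a claim touches.

```python
# Hypothetical illustration of Hot vs. Cold Cognition.
# Affect scores are invented, in [-1.0, 1.0].
affect = {"my_party": 0.9, "rival_party": -0.8}

def cold_evaluate(evidence_strength: float) -> float:
    """Cold Cognition: the judgment is the evidence."""
    return evidence_strength

def hot_evaluate(concepts: list[str], evidence_strength: float,
                 heat: float = 0.7) -> float:
    """Hot Cognition: the judgment is pulled toward the average
    affect of the concepts the claim touches."""
    pull = sum(affect.get(c, 0.0) for c in concepts) / len(concepts)
    return (1 - heat) * evidence_strength + heat * pull

# The same middling evidence yields opposite verdicts once affect enters:
print(cold_evaluate(0.5))                  # 0.5
print(hot_evaluate(["my_party"], 0.5))     # ≈ 0.78
print(hot_evaluate(["rival_party"], 0.5))  # ≈ -0.41
```

On this toy reading, Hot Cognition is less objective precisely because the `heat` weight lets feeling crowd out evidence.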

These theories became the basis of what was known as the Goldwater Machine. Noah Wardrip-Fruin revisits this work in his book, Expressive Processing, tracing the Goldwater Machine from its inspiration:

Towards simulating Hot Cognition, Abelson and his colleague J. Douglas Carroll continued to work on the powerful and steadfast aspects of our minds, namely ideology, rationalization, and bias. The diagram below, drawn by Abelson in 1963, models human rationalization, caricaturing our desire to be “right” or “good.” Given a situation that contradicts our belief system, we are confronted with “the apparent necessity of changing one or more beliefs.” Our resistance to this change can be formalized as “rationalization,” as illustrated.

A diagram taken from Abelson’s 1963 essay on Hot Cognition
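Read as pseudocode, the diagram amounts to a loop in which changing a belief is the move of last resort. Here is a minimal, hypothetical sketch (the contradiction test, the specific rationalizing moves, and their success test are invented for illustration, not Abelson’s mechanism):

```python
import random

def contradicts(situation: str, beliefs: set[str]) -> bool:
    """Toy contradiction test: we already believe the situation's negation."""
    return f"not {situation}" in beliefs

def move_succeeds(move: str, situation: str) -> bool:
    """Stand-in: in a real model this would depend on the belief structure."""
    return random.random() < 0.6

def confront(situation: str, beliefs: set[str]) -> str:
    if not contradicts(situation, beliefs):
        return "no conflict; belief system untouched"
    # Resistance first: try to explain the situation away.
    for move in ("deny the source", "reinterpret the situation",
                 "bolster the original belief"):
        if move_succeeds(move, situation):
            return f"rationalized via: {move}"
    # Only when rationalization fails do we face "the apparent
    # necessity of changing one or more beliefs."
    beliefs.discard(f"not {situation}")
    beliefs.add(situation)
    return "belief changed"

beliefs = {"not my candidate lost"}
print(confront("my candidate lost", beliefs))
```

The point of the sketch is the ordering: every rationalizing move is tried before any belief is revised, which is exactly the resistance the diagram caricatures.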

Many later AI systems reference the Goldwater Machine as an early example of computerized storytelling. Systems like Jaime Carbonell’s POLITICS and Michael Mateas’s Terminal Time were based on this work. Mateas summarized the Goldwater Machine’s functions as follows:

Donald Trump, like Goldwater, practices storytelling with conviction according to these ideological functions. The model’s simplicity is demonstrated by its few boxes and arrows; its efficacy is apparent in how universally it applies.

This comparison isn’t meant to trivialize the reasoning abilities of conservatives, but to point out that we are all susceptible to simple and predictable patterns of behavior, regardless of political affiliation. In working to preserve our belief systems, we create narratives that maintain that sense of security. Hot Cognition can therefore be seen as a glue that holds our beliefs in place, reinforced by our ability to tell satisfying stories to ourselves.

The Goldwater Machine’s goal was not to change the state of reality, but to have ready tools for exploiting storytelling conventions. Donald Trump’s appeal owes as much, if not more, to his ability to reinforce the personal narratives of a large population of people. We can conclude the following: First, storytelling is foundational to our decision making. Second, our inability to see our own exploits stifles our ability to understand people who have vastly different stories. Finally, in building and having built these AI systems, we uncover the ways that we exploit ourselves through persuasion and propaganda.

As much as Donald Trump represents change, he more so represents the status quo. We are likely not more ignorant or racist than before; it’s the internet and social media that make us more likely to come across those racist and bigoted beliefs. Rather than accuse and condemn the narratives that offend us, maybe we could figure out how to overcome the easy exploits of the overly simplified rhetorical models to which we adhere.
