Major positions on Conscious AI Systems

Daniel Estrada
Jan 9, 2019


Going through the AAAI Spring Symposium papers on Conscious AI Systems, I find three major views being defended in the literature. I put together a graphic that visualizes the views and their relationships. Feedback appreciated. The full list of papers is here:

http://ceur-ws.org/Vol-2287/

Enactivism: cognition and consciousness emerge from the dynamics of biological complexity. Biological systems are positioned precariously in a changing world, and work to maintain their own autonomy (homeostasis, persistence, self-control) by means of adaptive sense-making, which is the biological basis of thought. Adaptive sense-making is characterized by feedback (control) loops both within an organism and extending across the organism and its world.
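
To make the feedback-loop picture concrete, here’s a toy homeostat in Python. This is my own minimal sketch, not anything from the Man and Damasio paper; every name and number in it is an illustrative assumption.

```python
import random

# Toy homeostat: an agent keeps an internal "energy" variable near a
# set point by sensing the gap and acting on its environment. The loop
# couples internal regulation to external conditions, which is the
# basic shape of the feedback loops described above.

SET_POINT = 1.0

def sense(energy):
    """Adaptive sense-making at its crudest: measure the deviation
    from the homeostatic set point."""
    return SET_POINT - energy

def act(error, environment_yield):
    """Act on the world in proportion to the sensed need; what the
    action returns depends on a changing environment."""
    effort = max(0.0, error)           # only forage when depleted
    return effort * environment_yield  # intake depends on the world

energy = 1.0
for t in range(20):
    environment_yield = random.uniform(0.5, 1.5)  # a precarious, shifting world
    energy *= 0.9                                 # metabolic decay pushes off the set point
    energy += act(sense(energy), environment_yield)
    print(f"t={t:2d} energy={energy:.3f}")
```

The point of the toy is only the shape: sensing is valenced by the organism’s own precarious condition, and action loops back through the world before it registers internally.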

Enactivism introduced here and here. See also enactive approaches to autonomy, AI, and subjective phenomenology.

Conference paper: Homeostatically Motivated Intelligence for Feeling Machines
Kingson Man, Antonio Damasio

Integrated Information Theory (IIT): consciousness emerges as a measure of integration or interdependence among the components of complex systems, represented by a large positive value of φ (phi). IIT is axiomatic and quasi-scholastic in presentation, but might (generously) be understood as an abstraction and generalization of enactivism. IIT holds that consciousness is not necessarily biological, but that biological systems are particularly interdependent in ways that yield higher values of φ.
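
As a rough illustration of what “integration” means here, the sketch below computes the total correlation of a tiny joint distribution. To be clear, this is not IIT’s φ, which is defined over cause-effect structures and system partitions; it’s just a simpler information-theoretic measure that captures the intuition that an integrated whole carries information its parts don’t carry separately.

```python
import itertools
import math

# Total correlation: sum of marginal entropies minus the joint entropy.
# Positive when the parts are statistically interdependent; zero when
# they are independent. A (much) cruder cousin of IIT's phi.

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def total_correlation(joint):
    """joint maps tuples of binary states, e.g. (0, 1), to probabilities."""
    n = len(next(iter(joint)))
    marginals = []
    for i in range(n):
        m = {}
        for state, p in joint.items():
            m[state[i]] = m.get(state[i], 0.0) + p
        marginals.append(m)
    return sum(entropy(m) for m in marginals) - entropy(joint)

# Two perfectly correlated bits: integrated (total correlation = 1 bit).
coupled = {(0, 0): 0.5, (1, 1): 0.5}
# Two independent fair bits: no integration (total correlation = 0).
independent = {s: 0.25 for s in itertools.product([0, 1], repeat=2)}

print(total_correlation(coupled))      # 1.0
print(total_correlation(independent))  # 0.0
```

Actual φ computations are far heavier than this; the PyPhi toolbox implements the IIT 3.0 calculus properly.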

IIT 3.0 explained here and here.

Conference paper: Dissociating Intelligence from Consciousness in Artificial Systems — Implications of Integrated Information Theory
Graham Findlay, William Marshall, Larissa Albantakis, William Mayner, Christof Koch, Giulio Tononi

Free Energy Principle (FEP): cognition is best understood as a process that minimizes prediction error in an internal model of the world, an approach sometimes called predictive coding. The term “free energy” is by analogy to thermodynamic processes that minimize free energy. FEP uses Bayesian inference, and models the cognitive agent as bounded by a Markov blanket, with a generative model that updates when its predictions fail. This approach has yielded some impressive, neurologically accurate models of cognitive processes like visual neglect. These models are realistic enough to accurately reproduce the particular effects that different kinds of brain lesions have on eye movements.
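
For a concrete feel of the update rule, here is a minimal sketch loosely following the worked example in Bogacz’s (2017) tutorial on the free energy framework: a single belief mu is revised by gradient descent on precision-weighted prediction errors. The generative model g and all the numbers are illustrative assumptions, not taken from the conference paper.

```python
# Minimal predictive-coding sketch. The agent holds a belief mu about a
# hidden cause, predicts its sensory input via a generative model g(mu),
# and updates mu by descending the gradient of (variational) free energy,
# i.e. the precision-weighted squared prediction errors.

v_p = 3.0        # prior mean of the hidden cause
sigma_p = 1.0    # prior variance (inverse precision)
sigma_u = 1.0    # sensory noise variance
u = 2.0          # the actual sensory observation

def g(mu):
    """Generative model: the sensation predicted by hidden cause mu."""
    return mu ** 2

def dg(mu):
    return 2 * mu

mu = v_p         # start the belief at the prior mean
lr = 0.01
for _ in range(500):
    eps_p = (mu - v_p) / sigma_p    # prediction error against the prior
    eps_u = (u - g(mu)) / sigma_u   # sensory prediction error
    dF = eps_p - eps_u * dg(mu)     # gradient of free energy w.r.t. mu
    mu -= lr * dF                   # the belief updates when predictions fail

print(f"posterior belief mu = {mu:.3f}")  # settles between the prior and sqrt(u)
```

In fuller FEP models the precision terms (the inverse variances) do serious work: reweighting prediction errors changes what the agent attends to, which is one of the levers the lesion models pull on.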

FEP explained in detail here. The model of hallucination in FEP is here. The model of visual neglect in FEP is here. See Andy Clark’s discussion of free energy and predictive coding here and here.

Conference paper: Disorders of Artificial Awareness
Thomas Parr, Danijar Hafner, Karl J. Friston

I was familiar with all these views before the conference papers went online, but I had spent the most time with enactivism. My submission to the conference takes a computer science perspective on enactivist theory. Still, before the conference I did not really conceptualize the field in terms of these theories and their relationships. These theories did not exist (at least, not in this form) when I was doing research in grad school. If you had asked me last summer to discuss the major theories of consciousness, I would have talked about Block, Chalmers, and Nagel; about qualia, bats, and Mary; not about the collection of views in this diagram. These recent changes to the theoretical landscape are exciting and promising. It feels like the start of progress.

Anyway, today I saw Dan Dennett packing away his computer after the Jerry Fodor memorial at #APAEastern19, and approached him to ask his opinion on this set of views, which I named quickly. His immediate response was that only one of the views was serious. For a moment I didn’t know which one he meant!

He clarified first that he thought IIT was a lot of “smoke and mirrors”. I responded “I agree with you, but there *is* a lot of smoke”.

He then said he liked enactivism best, and that it had the potential to be worked out more fully as a theory of embodied consciousness. Before elaborating, he asked if the workspace view of consciousness (“fame in the brain”, Dennett’s own view from Consciousness Explained) was represented at the conference. I said that I hadn’t yet read everything, and that while I did see his views mentioned, they were not engaged with to the degree of these major positions. I also suggested quickly that perhaps a workspace view could be worked out within the enactivist framework of “adaptive sense-making”. His body language was ambivalent on this point, but it seemed to me like he took a pause to think about it.

Finally, he said that he liked the theory of predictive coding, but he felt that some of FEP’s claims are overstated, and others don’t reach as far as enactivism. Then he qualified that he had some Friston articles on his desk waiting to be read, so his rejection of the view wasn’t fully informed or conclusive. He suggested it’s possible that FEP has legs, but he doesn’t know yet.

I said thanks and left. The entire encounter lasted less than 4 minutes.

I’ve identified myself with Dennett’s camp for as long as I’ve been studying the philosophy of mind. I can practically hear his voice in my head. I had an extended encounter with him over a decade ago, but we’re strangers, and this exchange did not hinge on some personal recognition.

Instead, I just slipped casually into a fast-paced, highly technical discussion of cutting-edge research in the philosophy of mind with a living legend in the field, my own personal anchor for orienting myself within the literature.

It was as close to a conversation with The Architect as I think I’ll ever get.
