Christof Koch on “The Feeling of Life Itself” and how new technology may allow us to see consciousness for the first time

MIT Press
Oct 17, 2019


On the publication of his new book, Christof Koch, President and Chief Scientist of the Allen Institute for Brain Science in Seattle, describes his new theory of consciousness and how it might apply to animals, computers, and more.

The MIT Press: Why should anybody pick up this book and read it? What is it about?

Christof Koch: Well, it’s for anyone interested in where the sounds and sights in the skull-sized infinite kingdom that is their mind come from, be they a doctor, school teacher, therapist, computer programmer, technician, or scientist.

I experience the world — I see a verdant forest, enjoy the delectable taste of Nutella, am in love or am upset. That’s what consciousness is: any experience, no matter how banal or exalted. But how is it that a physical organ like the brain can give rise to feelings? That seems distinctly odd. Nothing in physics, chemistry or biology prepares us for this seemingly miraculous fact — that certain physical systems can have inner states. That’s the beating heart of the ancient mind-body problem.

MITP: But as you point out, this question has been asked for thousands of years. An endless succession of philosophers, from the ancient Greeks right up to today, all claim that they have the answer, although no one else seems to agree. So what is new now?

CK: The difference is that until recently, the only tools to explore the mind were introspection — a relatively barren enterprise, given that so much of the mind is inaccessible to consciousness — and philosophical armchair speculation. But now there is a growing science of consciousness. We can detect and track the footsteps that any conscious experience leaves in the brain. This is the modern quest for the neural correlates of consciousness, a research program that the molecular biologist and co-discoverer of the double-stranded nature of DNA, Francis Crick, and I articulated many years ago. Much progress has been achieved since then in understanding which regions of the brain are responsible for generating experience, and how the waxing and waning of consciousness over the course of the day and the night is reflected in the underlying brain activity.

MITP: Sure, I believe that neuroscientists can do that. But so what? How is that different from pointing to the pineal gland, as René Descartes famously did, and arguing that that is where the ineffable mind encounters the brain?

CK: Well, we now have a proper theory of consciousness in hand that relates specific neural circuits to specific aspects of any one experience. The book describes the Integrated Information Theory (IIT), a quantitative, rigorous, consistent and empirically testable theory that starts with experience and proceeds to the underlying neuronal mechanisms. “Integrated information” is a mathematical measure quantifying how much any system, no matter how simple or how complex, is determined by its past state and how much it can influence its future — its intrinsic causal power. Any system that has this potential is conscious. The larger the system’s integrated information, referred to by the Greek letter phi (pronounced “fy”), the more conscious the system is. If something has no causal power upon itself, such as the neural networks underlying machine learning, its phi is zero. It doesn’t feel like anything to be this thing.
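To make the intuition concrete, here is a drastically simplified toy sketch in Python. It is not Tononi’s full Φ calculus (which also requires minimizing over all partitions of the system), and the functions and names here are illustrative assumptions, not anything from the book. For a two-node network, it measures in bits how strongly the system’s current state constrains its own past, relative to a uniform prior — and shows why a system whose state is decoupled from its past scores zero:

```python
from itertools import product
from math import log2

# Toy illustration (NOT full IIT phi): how much does a tiny 2-node
# system's current state constrain its possible past states?

states = list(product([0, 1], repeat=2))  # all states of two binary nodes

def copy_step(state):
    # Each node copies the other's previous value: strong causal structure.
    a, b = state
    return (b, a)

def dead_step(state):
    # Output ignores the past entirely: no causal power upon itself.
    return (0, 0)

def cause_information(step, current):
    """Bits by which observing `current` narrows down the possible past,
    relative to a uniform prior over past states (a KL divergence)."""
    compatible = [s for s in states if step(s) == current]
    if not compatible:
        return 0.0  # current state is unreachable
    # Posterior is uniform over compatible pasts; prior is uniform over all.
    p = 1 / len(compatible)
    return len(compatible) * p * log2(p / (1 / len(states)))

print(cause_information(copy_step, (0, 1)))  # 2.0 bits: past fully determined
print(cause_information(dead_step, (0, 0)))  # 0.0 bits: past unconstrained
```

The “dead” system, whose output is fixed regardless of its past, scores zero bits — a caricature of the point that a system with no causal power upon itself has phi equal to zero.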

MITP: This seems as speculative as theoretical physics speaking about superstrings and other hypothetical stuff making up the universe that can’t be empirically tested.

CK: Unlike superstrings, the welter of ideas derived from IIT can be tested today: Where in the brain does consciousness arise (answer — in the back of the neocortex)? Only in the brains of humans (extremely unlikely)? What about creatures such as elephants, whales or dolphins with bigger brains than ours, or monkeys, dogs, mice and other mammals with smaller brains that are, however, similar to ours (they too experience the world)? How low does consciousness go in the animal kingdom — what about crows, octopuses, flies or worms (we don’t know, but based on the complexity of their brains, the theory imputes some conscious experience to them)? What happens if the brain is cut into two, as in split-brain surgery to alleviate epileptic seizures (two conscious minds co-exist within the same skull, each one within its own cortical hemisphere)? What happens if two brains are connected together artificially (the two conscious minds would merge into one)? How active is the brain of a long-term meditator in a state of “pure experience” (possibly very little)?

MITP: Does this theory have practical consequences?

CK: Very much so. The theory lets you build a consciousness-meter, a practical device that can tell whether patients unable to signal by speech, hands or eye movements, either because they are anesthetized or because their brain is severely injured (such as persistent vegetative, minimal conscious or locked-in state patients) are conscious or not. This method, dubbed zap-and-zip, is now being evaluated at a number of clinical centers in the US and in Europe.

MITP: What about the burning question of our times — could appropriately programmed computers be conscious? Could Alexa or Siri 10.0 feel like something?

CK: No. Despite the near-religious belief of the digerati in Silicon Valley, most of the media and the majority of Anglo-Saxon computer and philosophy departments, there will not be a Soul 2.0 running in the Cloud. Consciousness is not a clever hack. Experience does not arise out of computation.

The dominant mythos of our times, grounded in functionalism and dogmatic physicalism, is that consciousness is a consequence of a particular type of algorithm the human brain runs. According to integrated information theory, nothing could be further from the truth. While appropriately programmed algorithms can recognize images, play Go, speak and drive a car, they will not be conscious. Even a perfect software model of the human brain will not experience anything, because it lacks the intrinsic causal powers of the brain. It will act and speak intelligently. It will claim to have experiences, but that will be make-believe. No one is home. Intelligence without experience.

That’s the difference between the real and the artificial. A supercomputer simulating a rainstorm won’t cause its circuit boards to become wet. Nor will a computer simulating a black hole twist and warp space-time around its chassis; you won’t be sucked into its simulated massive gravitational field. It’s the same with consciousness — clever computer programming can simulate the behavior that goes hand-in-hand with human-level consciousness, but it’ll be fake consciousness.

That’s not to say there is something magical about brains; they are a piece of the furniture of the universe like any other. However, brains are by far the most complex chunk of active matter in the known universe. A computer could acquire human-level consciousness, but it would have to be built in the image of the human brain, with all its vast complexity: a so-called neuromorphic computer.

Consciousness is fundamentally about being, not about doing. This has a host of scientific, philosophical and ethical consequences.

Christof Koch is President and Chief Scientist of the Allen Institute for Brain Science in Seattle, following twenty-seven years as a Professor at the California Institute of Technology. He is the author of The Feeling of Life Itself: Why Consciousness Is Widespread but Can’t Be Computed (The MIT Press, 2019).


Visit the MIT Press Reader at https://thereader.mitpress.mit.edu to read thought-provoking excerpts, interviews, and other original works.