The Material Limitations of AI

The belief that developing a sentient or self-aware AI is simply a matter of enough data, connections, and processing speed rests on the premise that even human consciousness is the product of material objects and processes.

Francis Crick, the less overtly racist half of the duo who discovered DNA’s double helix, published a book in 1994 called The Astonishing Hypothesis, proposing that consciousness, or a “soul,” results from the actions of physical cells, molecules, and atoms.

It’s a reasonable proposition: we can only measure the material world, so everything must be a product of it. Bodies obey the same physical laws as rocks and weather patterns. If something defies explanation, it’s only because we don’t have enough information yet.

Just as a mind is the product of a brain, AI is the product of a computer. Any nagging questions are just details, Mr. Descartes, not a debate.

Only they’re not.

We can’t explain consciousness. Neurobiologists can describe it, and offer hypotheses about whether it’s the result of rhythmic oscillations between the thalamus and cortex (thalamocortical rhythms), the instructions of a prehistoric virus (Arc RNA), or only a “user illusion” of itself (Dan Dennett’s molecular machines).

But they can’t say what it is, exactly. How there’s a you or a me to which we return every morning, though all along we were, and always will be, ourselves.

Similarly, they can show that our brains control everything from muscle movement to immune system health, and pinpoint where and when they capture sensory information.

But we haven’t got the faintest idea how our minds do it… how that ephemeral thing called consciousness issues commands to flex muscles, secrete hormones, or summon a friend’s face or a song.

It gets even weirder when you consider the vagaries of quantum physics, some interpretations of which cast conscious observation as the mechanism that pulls elementary particles out of a hazy state of probable existence into definite reality. On that reading, consciousness literally creates the material world through the act of perception (it’s called the collapse of the wave function).
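For readers who want the textbook version of that claim, here is a minimal sketch in standard Dirac notation. The two-state system is purely illustrative (not tied to any particular particle or experiment): before measurement, the state is a weighted blend of possibilities; measurement leaves exactly one of them.

```latex
% Before measurement: a superposition of two possible states,
% with complex weights alpha and beta.
\[
  \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1
\]
% Measurement "collapses" the superposition: the outcome 0 occurs
% with probability |alpha|^2, and the state afterward is simply
\[
  \lvert \psi \rangle \;\longrightarrow\; \lvert 0 \rangle
  \quad \text{with probability } \lvert \alpha \rvert^{2}.
\]
```

Whether that “measurement” requires a conscious observer or merely any physical interaction is exactly where the interpretations diverge, which is why the consciousness-creates-reality reading is a contested one rather than settled physics.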

So consciousness emerges from the material universe it creates? No wonder it’s called the hard problem (my favorite living playwright, Tom Stoppard, wrote a play about it, The Hard Problem).

We don’t need to solve that problem in order to invent incredibly capable AI that can learn autonomously and make increasingly complex decisions. Chips in coffee makers are “smart,” technically, and AI that mimics human behavior is already at work in online customer-service chatbots. There’s no obvious limit to such material functions.

But I don’t think a machine is going to stumble on actual consciousness, or sentient agency of action, before we figure it out for ourselves.