Recently, I’ve been spending a lot of time thinking about AI. There’s a lot of hype, excitement, and angst in the press about AI’s advancement. It’s understandable, because the technology is quickly seeping into every corner of modern life, present in everything from autonomous vehicles to the iPhone’s Siri. As AI automates repetitive tasks, adds intelligence to existing products, achieves previously impossible accuracy, and adapts through progressive learning, it will become one of the most important technological phenomena of the 21st century, second, perhaps, only to the blockchain.
So, what is AI? The exact definition of AI is hotly debated, and there are already many fantastic explanations of AI on the internet, so I won’t dive in too deeply. But broadly speaking, AI is advanced statistics and applied mathematics that harnesses new advances in computing power and the explosion of available data to give computers new powers of inference, recognition, and choice.
Machine learning (ML), the most promising subset of AI, is a field that aims to teach computers to learn from examples (or “data”) and perform a task without being explicitly programmed to do so. At its most basic, ML uses algorithms to parse data, learn from it, and then make a decision or prediction about something in the world. Rather than hard-coding software with specific instructions to accomplish a particular task, a machine is “trained” using large amounts of data and algorithms that give it the ability to learn how to perform a task or predict an outcome.
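The idea of training from examples rather than hard-coding rules can be illustrated with a toy perceptron, one of the oldest learning algorithms. This is a minimal sketch in plain Python; the data, learning rate, and epoch count are illustrative, not from any particular system:

```python
# A perceptron that learns the logical AND function from labelled
# examples, rather than being programmed with the rule for AND.

def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights from (inputs, label) pairs by simple error correction."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred            # 0 when the guess was right
            w[0] += lr * err * x1         # nudge weights toward the answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# The machine is never told the rule; it is only shown examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

The program that results is not a list of rules anyone wrote down; it is a set of learned numbers. Modern ML systems differ in scale and sophistication, not in this basic premise.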
Deep learning, the most successful approach within machine learning, is loosely modelled on the brain’s neural networks. In a deep neural network, artificial “neurons” are arranged in discrete layers, with connections to the neurons in adjacent layers, much like the neurons in our own brains. Each layer picks out a specific feature to learn, for example the colour of a cat, and it is this stacking of many layers that gives deep learning its name.
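The layered structure described above can be sketched in a few lines. Each “neuron” takes a weighted sum of the previous layer’s outputs and applies a non-linearity; the weights below are illustrative placeholders, not trained values:

```python
import math

def layer(inputs, weights, biases):
    """One fully connected layer: every neuron sees every input,
    sums them with its own weights, and squashes the result (sigmoid)."""
    return [
        1 / (1 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
        for ws, b in zip(weights, biases)
    ]

# Two stacked layers: in a trained network, the first might respond to
# low-level features (edges, colours), the second to combinations of them.
hidden = layer([0.5, 0.2], weights=[[1.0, -1.0], [0.5, 0.5]], biases=[0.0, 0.1])
output = layer(hidden, weights=[[1.0, 1.0]], biases=[-1.0])
print(len(hidden), len(output))  # 2 hidden neurons feed 1 output neuron
```

Training consists of adjusting all those weights so the final layer’s output matches the labelled examples; “deep” simply means many such layers stacked in sequence.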
Other approaches to machine learning include decision tree learning, inductive logic programming, clustering, reinforcement learning, and Bayesian networks — all familiar statistical approaches.
In the coming years, machines will continue to get smarter and make more complex decisions through unsupervised deep learning. Many, like Elon Musk, fear that as the intelligence of machines grows, the transparency of their decisions will dim, and that AI could take on a life of its own. Some predict that within a decade the computational abilities of the most advanced machines will resemble those of a toddler. Alan Turing, one of the founders of computer science, argued that if we cannot differentiate a machine from a human counterpart, then we are justified in calling such a machine intelligent. The question we will soon be confronted with is the following: if a machine is intelligent, does it follow that it is conscious?
The nature of consciousness is one of the thorniest questions in philosophy and has confounded scientists and philosophers for generations. The word conscious comes from the Latin word conscius (con- “together” and scio “to know”), which can be translated to “having joint or common knowledge with another.” Generally, humans seem to share a broad intuitive understanding of what consciousness is. But when you get into the particulars, the questions multiply. What is more conscious, a fish or an ant? Do plants have some sort of consciousness? How does consciousness arise, like something out of nothing?
Given that scientists and philosophers have not yet devised ways to measure or (dis)prove consciousness, talking about consciousness can be very difficult. Descartes’s notion of dualism, that the mind and body are separate things, has long since receded from science; the philosophical and scientific consensus is now that there is no nonphysical soul. A definition of consciousness that many philosophers accept was proposed by Thomas Nagel. In his essay “What Is It Like to Be a Bat?”, he wrote that consciousness must have a subjective character, “a what it is like aspect”, a qualitative perspective on the world. “An organism has conscious mental states if and only if there is something that it is like to be that organism”, he argued, and so consciousness cannot be explained without the subjective character of experience.
Even if we accept Nagel’s definition as a starting point, we still need to explain how consciousness arises out of the physical world. Consciousness involves accepting new information, storing and retrieving old information, and processing it all into perceptions and actions. These forms of thinking, memory, and attention are all forms of neurological computing, and neuroscience explains the biology and chemistry behind them with ever greater granularity and accuracy. But why these processes feel like something, something qualitative, is a question nobody has yet succeeded in answering. The philosopher David Chalmers called this the hard problem of consciousness: how do we explain the relationship between the physical processes of the brain and the qualitative nature of experience? Below are a few of the more popular contemporary explanations of consciousness, which try to bridge this explanatory gap:
- Material View of Consciousness: Some thinkers have posited the theory of emergence, the idea that if you hook up enough non-sentient components (neurons, microchips), consciousness will appear. Simply put, they believe that out of unconscious complexity arises consciousness. In humans, for instance, consciousness would arise from physical states and the biological processes in our brains. Those who believe in the theory of emergence aim to equate mental phenomena with operations of the brain and to explain them all in scientific terms, a project often called “cognitive science”. However, Sam Harris has argued that the idea of emergence does not really solve the hard problem of consciousness but acts as a “restatement of a miracle”, akin to the Big Bang: with emergence, something is still posited to arise out of nothing in the great traverse from unconsciousness to consciousness.
- Quantum Theory of Consciousness: Drawing on interpretations of quantum physics in which observation plays a special role, this theory stipulates that consciousness and the physical world are complementary aspects of the same reality. When a person observes the physical world, that person’s conscious interaction causes discernible change. Schrödinger’s Cat is the famous thought experiment from quantum mechanics that exemplifies this, and those familiar with the Eastern philosophies of non-dualism may intuitively understand this theory.
- Consciousness as Reflexivity: On this view, the first level of consciousness is the subconscious, where the majority of human intelligence lies; it powers abilities such as spotting a face, something some AI, like Google’s face-recognition software, can already do. The more critical component of consciousness, however, is the ability to maintain a wide range of thoughts at once. These thoughts are accessible to other parts of the brain, and it is this simultaneous access that makes long-term planning possible and gives us the qualitative sensation of consciousness. Consciousness is therefore presented as a sort of meta-cognition: higher-order thought processes combined with an awareness of one’s own thinking. Our ability to maintain a wide range of thoughts broadens our temporal window on the world and comes into play when we need to hold sensory information over a few moments.
Viewing consciousness through any of these lenses suggests that AI could become conscious. Under the quantum theory, it is possible to imagine AI endowed with some biological material interacting at the quantum level in the same way that humans do. Under the material view, it is possible to imagine consciousness “emerging” at some point from heightened computational intelligence. Finally, and perhaps most interestingly, we can consider the theory of consciousness as reflexivity. AI is already being developed that contains an element of reflexivity: last year, DeepMind developed a deep learning system that can keep data on hand during calculations. Will this lead to a sort of meta-cognition that appears equivalent to consciousness? Could AI develop its own language of internal states? In time, computer scientists may develop a new “machine phenomenology.”
My personal observation has been that computer scientists tend to believe that consciousness will arise from AI whereas many physicists and philosophers think there is something more complex about human behaviour. I tend to agree with the latter camp but acknowledge that I want there to be “something more.” Perhaps, the most enduring gift of AI will not be Spotify’s Discover playlists but the new window it offers us into one of the greatest scientific and philosophical questions of all — what is consciousness?
Bibliography (not exhaustive):