Making the Leap from Artificial Intelligence to Artificial Consciousness
Before we can prepare for the robot insurrection, we must understand how humans came to power in the first place.
The assumption that computers will develop cognitive and analytical independence, and that this autonomy will inevitably cause them to violently cast off the chains of their flesh-and-blood oppressors, is both the fodder of much dystopian fiction and the key to most AI doomsday scenarios. However, much of the mania around self-aware machines rests on a shaky understanding of consciousness and of what it does to the subjects that possess it. To think about artificial consciousness more clearly, we must examine two questions: What do we know about human consciousness? And what can we apply from our own consciousness to that of a machine?
What Is Consciousness?
Consciousness, even as it applies to ourselves, is something we have yet to define by wide consensus, and perhaps have yet to fully grasp. As such, it has been virtually impossible to agree on a list of requisites that would tell us what the beginning of consciousness looks like.
Consciousness in its broadest definition refers to the medical distinction made between conscious and unconscious, which is to say awake and able to interact with the surrounding environment. However, that definition applies to many living organisms that do not inspire fear of a revolt against humanity.
For our purposes we will use the term “consciousness”, more or less interchangeably with “self-awareness”, to include a comprehension of one’s own existence and a sense of subjectivity: awareness of oneself as an entity distinct and separate from its surroundings, one that has unique personal experiences and the capacity to act on the world around it.
We will also focus on human consciousness, to be as concise as possible; animal consciousness is another lengthy dispute for another day, one that again comes back to a lack of consensus on the classification or requirements for deciding who does or doesn’t qualify. In any case, we are not yet sure exactly what is happening in the brain that causes consciousness, or where.
At What Point Did Humans Attain Consciousness?
One facet of the uncertainty around consciousness is determining when we developed this thing that we can’t readily define. Add to that the educated guesswork required to draw conclusions about any event that took place before the advent of writing, and apply it to something that may have left no physical evidence, assuming we can even decide what physical evidence would check the proverbial boxes we have yet to establish (and around we go…). For example, it’s easy enough to assume that cave art is a sign of consciousness, but then how long after attaining consciousness did humans begin painting?
A more empirical approach would be to focus on physical changes in humans and their ancestors that can be traced through the fossil record and that differentiate us from the rest of the (ostensibly non-self-aware) animal kingdom, namely the development of a large, complex, and powerful brain. Anthropologists have identified the lifestyle changes they think made this possible (scaling up our protein intake by eating meat, which was in turn made possible by fashioning stone tools and cooking with fire) as taking place between 2 and 3 million years ago.
Consciousness allowed early humans to mentally separate themselves from their immediate situation, think big, and come up with solutions that would make their lives better. It is one of the factors (or possibly the factor, if we were to name just one) that allowed us to become the dominant species on our planet.
The supposition is that if we can establish when and how humans attained consciousness, we can control (or at least predict) if or when machines will do the same.
At What Point Will AI Attain Consciousness, If Ever?
First and foremost, not in the immediate future; experts still talk about the emergence of self-aware AI in terms of decades from now. Machines have quite a way to go before they can begin to think outside of their pre-programmed boxes. For all their seeming omnipotence, computers are not well-rounded generalists; rather, they perform very well at the specific tasks they are designed for. For example, machine learning algorithms can now create other machine learning algorithms (when they are programmed to do so), but this is still, in effect, only executing the mission they were assigned. It will take a great deal of independent development before a machine can make the leap from mere intelligence to consciousness. Even then, after consciousness is reached, it seems unlikely that a machine would immediately develop “accessories” (and equally abstract concepts) like creativity and free will.
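The point about machines “creating” machines can be made concrete with a toy sketch. The following is an illustrative example, not any real AutoML library: an outer loop proposes new candidate models, but every candidate is judged solely by the objective the loop was handed. The search can produce a model its programmer never wrote, yet it never questions, or even represents, the goal itself.

```python
import random

def make_model(slope):
    """Return a trivial one-parameter model: y = slope * x."""
    return lambda x: slope * x

def assigned_objective(model):
    """Mean squared error on a fixed target task (the 'mission': fit y = 2x)."""
    data = [(x, 2 * x) for x in range(10)]
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

def search(trials=200, seed=0):
    """The 'machine designing machines': propose random models and keep
    whichever one best serves the objective it was assigned."""
    rng = random.Random(seed)
    best_slope, best_score = None, float("inf")
    for _ in range(trials):
        slope = rng.uniform(-5, 5)  # generate a new candidate model
        score = assigned_objective(make_model(slope))
        if score < best_score:
            best_slope, best_score = slope, score
    return best_slope, best_score

slope, score = search()
print(f"best slope: {slope:.2f}, error: {score:.4f}")
```

However sophisticated the inner search becomes, the objective function remains fixed from outside, which is the gap between executing an assigned mission and consciously choosing one.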
What Happens Next?
The implications of artificial consciousness are enormous when considering the dominion machines already have over things like personal data, social media, and smart devices (just to name what we interact with on a daily basis). Nevertheless, speculation that anything negative would follow from artificial consciousness rests on a string of assumptions and mental acrobatics, such as:
1) The very idea of a robot rebellion assumes that the buildup to it would be unforeseeable and/or unstoppable; otherwise, how could we conceivably let it happen? Computers keep logs that record everything they do, which would prevent them from hiding the processes they execute. For that reason it’s difficult to imagine a real-world scenario in which humans would be taken by surprise and/or unable to do anything to prevent their imminent doom (like, say, pulling the plug).
2) Fear of an imminent revolt by conscious machines assumes that the only (or most likely) product of consciousness is revolt. This is essentially the same as assuming that any and every human will revolt against society simply because they possess consciousness. We know from experience that this is not the case: some humans find the conditions of modern civilization unbearable, and others are extremely comfortable with them. We can extrapolate from this and predict with a fair degree of certainty that conscious machines would show the same divergence in attitudes. More on this in Point 4.
3) We are humanizing machines! The innate fear of a conscious machine assumes that it would also develop the darker tendencies of humans, which are arguably just holdovers from our animal past. Illogical paranoia, violent reactions, and fear of anything outside our control are common in the wild because they are useful for surviving in that context, but by the same logic those tendencies would not inevitably arise in a machine that did not come about through the same evolutionary process. They would have to be programmed explicitly or emerge through bias. In that case we would be dealing with a machine more or less intentionally created for evil, which is not the same thing as a self-aware machine independently making an evil decision; in any event, a sufficiently advanced machine might be able to recognize and overcome the biases of its creator.
4) At the same time, when we assume that all artificially conscious machines would attack us, we are homogenizing machines. It is important to note (both for personal edification and as it applies to AI) that consciousness is not experienced the same way by every person; subjectivity is, by definition, the opposite of objectivity, and some philosophers and psychologists refer to these private, first-person experiences as qualia. From this it follows that different people effectively live in different realities. It stands to reason, then, that if human consciousness varies from person to person, artificial consciousness will also vary from machine to machine. In the event of a machine uprising, it seems just as likely that other machines that do not agree with the rebellion would come to our aid.
A glass-half-full vision of artificial consciousness talks about transitioning from artificial intelligence to augmented intelligence (for humans), which is more collaborative and focused on mutual improvement, helping us perform tasks better and faster. In that vein, perhaps a more valid and immediate fear is the euphemistically titled “disruption” of technological development: how will we adjust to living in a world where a massive proportion of traditional human work can be performed by machines? That is a scenario where we know with much more certainty what will happen, and when.