How Artificial Intelligence is Making Us Rethink Consciousness, by Ross Fretten

Ross Fretten
8 min read · Aug 22, 2017

Typically, when artificial intelligence and consciousness meet in the same conversation, it revolves around two things: whether or not an artificial intelligence can be truly conscious, and whether or not we should, morally, be aiming to create consciousness, assuming we can. There’s also a slightly more fringe discussion going on around uploading our consciousness into a computer so that we can continue to exist beyond our physical bodies. This is the most extreme example of what is known as transhumanism.

Regardless of what the answers are, and it’s safe to say there will be no answers any time soon, now is the right time for us to evolve or abandon our current definition of what consciousness is, because it’s clearly wrong. If we don’t, we risk stifling more pertinent conversation around artificial intelligence, and we open ourselves up to the risk of underestimating and misunderstanding that which we are creating.

The Hard Problem

Understanding consciousness is referred to in the scientific community as The Hard Problem, simply because it’s a really, really, really, ridiculously difficult problem to solve. The Hard Problem of consciousness is distinct from the Easy Problems because there is no identifiable mechanism that results in consciousness. Easy Problems like hunger are easy to understand because the sensation is attributable to a condition (a lack of food in the stomach) that triggers a mechanism, which causes us to experience the sensation of hunger in our state of mind… it’s observable. Understandably, a wealth of philosophical discussion exists around consciousness, most of which can largely be summarised by briefly explaining the two dominant philosophical views: Physicalism and Dualism.

Physicalism proposes that mental states such as consciousness are the result of a physical trigger. Simply put, you think and feel as you do because of something physical within your person. That might be gut bacteria in your digestive system, it might be chemicals such as dopamine and serotonin in the brain, or any other physically occurring thing. Your mind and body are one. Dualism asserts that mental states, including consciousness, are separate from your physical self; that something not too dissimilar to what some would regard as a soul or spirit exists, and that it experiences the world independently of the body and brain. Your state of mind and your physical body are separate.

Old Consciousness and Artificial Intelligence

When consciousness and artificial intelligence are discussed, there is a tendency for Dualism to be accepted as the correct theory of consciousness despite the scientific community leaning toward Physicalism. This is where the danger is, in my opinion.

“Our artificially intelligent systems today cannot and should not be regarded as lacking what we traditionally call consciousness.”

When we accept Dualism as correct, it becomes incredibly easy to belittle the artificial intelligence we create as lacking consciousness, and by extension as incapable of free thought and of the degree of unpredictability that humans exhibit. Looking at real-world examples of artificial intelligence as it exists right now, it’s easy to see why it’s laughable to consider even the most sophisticated artificial intelligence as conscious, but I propose that there is no such thing as consciousness and that our artificially intelligent systems today cannot and should not be regarded as lacking what we traditionally call consciousness.

Consciousness versus a lack of consciousness is treated as a binary distinction: something is either conscious or it is not. It’s how we considered consciousness even before the artificial intelligence discussion, so it’s no surprise this idea has bled through. The term “consciousness” has so much baggage that it needs to be abandoned, along with the black-and-white thinking that goes with it, and replaced by something similar to the Glasgow Coma Scale: a set of criteria we can check off to plot where everything from plants right through to humans and, of course, artificial intelligences belongs on a multidimensional new consciousness spectrum.
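To make the idea concrete, here is a rough sketch of what such a multi-dimensional profile might look like in code. The dimensions, and every score below, are invented purely for illustration; the point is only that the output is a profile, never a yes/no verdict.

```python
# A hypothetical sketch of a multi-dimensional "consciousness profile",
# in the spirit of the Glasgow Coma Scale: criteria are scored, not
# answered yes/no. The dimensions and example scores are illustrative only.
from dataclasses import dataclass, asdict

@dataclass
class ConsciousnessProfile:
    alertness: float       # responsiveness to surroundings, 0.0-1.0
    awareness: float       # environmental awareness
    metacognition: float   # awareness of one's own thinking
    subjectivity: float    # qualia-like subjective interpretation
    memory: float          # ability to reference past experience

    def describe(self) -> str:
        # No binary "conscious / not conscious" verdict, just a profile.
        return ", ".join(f"{k}={v:.1f}" for k, v in asdict(self).items())

# Entirely made-up placements on the spectrum, for illustration.
entities = {
    "oak tree": ConsciousnessProfile(0.1, 0.2, 0.0, 0.0, 0.0),
    "bee":      ConsciousnessProfile(0.6, 0.5, 0.3, 0.2, 0.3),
    "human":    ConsciousnessProfile(0.9, 0.9, 0.9, 0.9, 0.9),
    "chatbot":  ConsciousnessProfile(0.4, 0.3, 0.1, 0.1, 0.5),
}

for name, profile in entities.items():
    print(f"{name}: {profile.describe()}")
```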

Time to Move On

Consciousness in Humans and Why it Doesn’t Hold Up

There’s so much wrong with our current understanding of consciousness that it doesn’t even hold up to its own definition any more. The definition largely revolves around alertness or awareness of one’s surroundings: being able to understand that there is a tree in front of the conscious being, and for that being to experience a subjective thought or feeling about that tree, such as admiring its beauty and saying “wow”. This is known as qualia, a key component of consciousness and therefore something that an artificially intelligent system cannot experience by definition. Qualia is, in my opinion, simply a combination of associative activation, genetic and cultural disposition and experiential priming, with some human nature thrown in.

Associative activation is quite a simple psychological phenomenon where a person is triggered to recall a memory or revisit a state of mind as a result of being exposed to a stimulus that resembles something from a past experience. For example, a particular smell or taste might elicit a feeling of comfort, warmth or terror in somebody who experienced that smell or taste around the time of a particularly comforting or traumatic experience. Genetic and cultural disposition, together with experiential priming, covers our inherent nature as influenced by our genes, the cultural influence on our perspectives, and the way our individual experiences have biased our subjectivity. Human nature covers the psychological biases that tend to shape our behaviours towards either surviving or having sex.

“Consciousness has until recently been regarded as a uniquely human property. This always seemed ridiculous to me.”

To take the tree example from earlier, we will typically “wow” at or find beauty in things we believe to be scarce, such as an extreme display of lightning, a remarkable redwood tree or the disarming reds of a sunset. These are subjective experiences and interpretations of beauty, qualia, but I would argue they are no different to us appreciating a glass of water when we are thirsty; it’s the body positively reinforcing a behaviour with a view to helping us live longer. Without water, we would obviously die. Without seeking the new, our ancestors wouldn’t have found new sources of water or food for their ever-expanding tribes, so it makes sense that we’ve developed a psychological bias to seek out and “wow” at the scarcely encountered. It’s reasonable to say that our subjective perspective, such as what beauty is, would be influenced heavily, if not shaped entirely, by these factors; no magical notion of a special state of mind called “consciousness” required.

Consciousness in Animals and Plants

Consciousness has until recently been regarded as a uniquely human property. This always seemed ridiculous to me, serving only to massage our egos as human beings by reassuring us that we are top dog and special. This is another inherent trait in human beings: the need to feel special.

Meta-cognition, which is being aware that you are thinking, was previously considered a unique quality of consciousness and of human beings, but then bees were observed second-guessing their decision-making: evidence of meta-cognition. Crows have long been known to use tools, and more recently have been shown to hold grudges against people, something which demonstrates not only alertness and awareness of their surroundings but subjectivity in their interpretation of them: qualia. Whales have been observed demonstrating empathy, and the most loved whale of all, the dolphin, is well known to be extremely emotionally intelligent. There have even been examples of dolphins in romantic relationships with humans (yes, that includes the gross stuff), and at least one dolphin is known to have committed suicide from a broken heart after being separated from his human lover.

“With a multi-dimensional spectrum we can cater to these nuances of being, of existing and only then will we be able to tackle the conversation of artificial intelligence fully equipped.”

This is why the traditional, binary notion of consciousness is antiquated and needs to be superseded by a new model of consciousness. Is a bee as alert, as aware and as intellectually capable as a human? Absolutely not. But do the bee, the crow and the dolphin exhibit qualia? Absolutely. Plants and trees, despite not having brains, are now known to behave differently at night than during the day, suggesting a higher level of alertness than previously believed. Throw a brain-damaged human into the conversation if you really want to emphasise the point that the simplistic, binary view does not work. With a multi-dimensional spectrum we can cater to these nuances of being, of existing, and only then will we be able to tackle the conversation of artificial intelligence fully equipped.

Artificial Intelligence on the New Consciousness Spectrum

A large part of the argument for why we cannot create consciousness centres around attention. We know very little, scientifically, about what attention is, how we pay attention, or what we choose to pay attention to. The consensus is that attention is the manifestation of emotional arousal, i.e. we pay attention to that which elicits the strongest emotional response from us. Intelligence and rational thought are viewed as separate from emotional arousal, and this, in simple terms, is the argument that we can never program the ability to pay attention into an artificial intelligence.
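Yet if attention really is just emotional arousal picking out what matters most, it isn’t obvious that it can never be programmed. Here is a deliberately naive sketch of that idea; the arousal formula, the weights and the stimuli are all invented for illustration only.

```python
# A minimal sketch of "attention as emotional arousal": the system simply
# attends to whichever stimulus it scores as most emotionally salient.
# The scoring weights and stimuli below are invented purely for illustration.
def arousal(stimulus: dict) -> float:
    # Hypothetical scoring: threat and novelty both raise arousal.
    return 0.7 * stimulus["threat"] + 0.3 * stimulus["novelty"]

def attend(stimuli: list) -> dict:
    # Pay attention to the stimulus with the strongest emotional response.
    return max(stimuli, key=arousal)

stimuli = [
    {"name": "background hum", "threat": 0.0, "novelty": 0.1},
    {"name": "unfamiliar smell", "threat": 0.2, "novelty": 0.8},
    {"name": "fast-approaching car", "threat": 0.9, "novelty": 0.4},
]
print(attend(stimuli)["name"])  # -> "fast-approaching car"
```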

Much as I’ve previously argued that creativity is merely a complex intellectual exercise shortcut through intuition, I would argue that emotion is most probably an extremely complex neurological condition, incredibly sensitive to context and myriad other variables, and that this is why we haven’t been able to truly understand qualia yet: it’s simply too complicated.

We could, then, create an entity as capable as a human being of exercising free will, paying attention and reflecting on memories, and of reacting appropriately and emotionally to stimulus or context in order to increase its survivability and well-being, if we can provide it with the following (sketched in code after the list):

  • Social and cultural context to exist within.
  • A short lifetime’s worth of memories and experiences that can be referenced.
  • Fundamental principles to shortcut natural selection and epigenetics, such as experiencing fear when in danger, avoiding fire and falls, etc.
  • The ability to interpret cause-and-effect relationships and to rationalise and contextualise these interpretations through their impact on and/or relation to the fundamental principles.
  • Environmental awareness and the ability to focus attention on whatever is most pertinent at the time to both short- and long-term goals.
  • The ability to analyse the effectiveness of its own decision-making based on predicted and actual outcomes and their impact on and/or relation to the fundamental principles.
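As a sketch of how those ingredients might fit together, here is a hypothetical outline. Every class, threshold and score is invented to make the shape of the idea concrete; it is not a claim about how such an entity would actually be built.

```python
# A hypothetical outline of how the ingredients above might hang together.
# Nothing here is a real cognitive architecture; names and thresholds are
# invented to make the shape of the idea concrete.
class Agent:
    def __init__(self, culture: dict, memories: list):
        self.culture = culture    # social and cultural context to exist within
        self.memories = memories  # a short lifetime of referenceable experiences
        # Fundamental principles that shortcut natural selection.
        self.principles = {"avoid_danger": 1.0, "seek_wellbeing": 0.5}

    def attend(self, observations: list) -> dict:
        # Focus on whatever bears most on the fundamental principles.
        return max(observations,
                   key=lambda o: o.get("danger", 0.0) + o.get("opportunity", 0.0))

    def act(self, observation: dict) -> str:
        # Interpret cause and effect against the principles, then choose.
        action = "flee" if observation.get("danger", 0.0) > 0.5 else "explore"
        self.memories.append({"saw": observation, "did": action})
        return action

    def reflect(self, predicted: str, actual: str) -> bool:
        # Evaluate its own decision-making: did things go as expected?
        return predicted == actual

agent = Agent(culture={"norms": ["cooperate"]}, memories=[])
choice = agent.act(agent.attend([{"danger": 0.8}, {"opportunity": 0.6}]))
print(choice, agent.reflect(predicted="flee", actual=choice))  # -> flee True
```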

“Transferring ourselves into a computer would merely be destroying ourselves and creating a new instance of our memories and personalities – a digital doppelgänger”

What we should be talking about is how we do this, not whether or not we can attribute the nonsensical adjective “conscious” to an artificially intelligent system. Once we adopt this point of view, we remove the liquid safety net of believing that artificial intelligence is harmless as long as it isn’t conscious. It also raises a huge issue around uploading ourselves into a computer. After all, if consciousness is neither tangible nor quantifiable, then transferring ourselves into a computer would merely be destroying ourselves and creating a new instance of our memories and personalities: a digital doppelgänger.

There is no black-and-white crossover point into consciousness; there are just incremental micro-steps along a very grey spectrum. It’s likely that only through creating more advanced artificial intelligences will we truly come to understand our own state of being, as we recreate it piece by piece. Where we go from there, who knows.


Ross Fretten

Founder & CEO of Kibble - Dog training & health app, The Apprentice 2017 Candidate, Entrepreneur and Award Winning Digital Product Designer.