Is an Octopus Conscious, and Why Does It Matter for AI?

Micheal Lanham
Nov 3 · 3 min read
Source: https://newrepublic.com/article/132747/octopuses-smart-conscious

Before I answer the opening question, let’s ask ourselves: what is consciousness? By most definitions, conscious thought means being self-aware, or being aware that you are living in an environment. With that awareness, you can learn to control and manipulate that environment with purpose and to your advantage.

Is an Octopus Conscious?

Scientists are heatedly debating what consciousness is and how it pertains to the lowly octopus. The thing with the lowly octopus, though, is that it shows every ability to understand and manipulate its own environment. Those movies, like Finding Dory (the sequel to Finding Nemo), where the octopus gallantly moves all around the aquarium, are indeed based on fact. Keepers and researchers regard octopuses as some of the most elusive and clever animals. Think about this for a second: we are considering that a cephalopod, once regarded as one of the lowest life forms on Earth, is showing signs of conscious activity. So what does that mean for the AI field and the AI community as a whole?

Would we recognize a Conscious AI?

A couple of years ago, even seriously asking that question would have been considered stupid or ignorant. Now, with second-generation advances in reinforcement learning that attempt to deal with hierarchical or meta-learning, we could be faced with these questions sooner rather than later. Reinforcement learning itself only deals with optimizing the credit assignment problem, and we have seen it used to beat humans at classic Atari games, Go, and more. So RL on its own could never achieve anything close to consciousness. RL needs to be layered into another learning mechanism that can provide it with task priorities or context. A diagram explaining Meta Reinforcement Learning (MRL) is below:

Source: https://www.cell.com/action/showPdf?pii=S1364-6613%2819%2930061-0

The outer loop shown in the MRL diagram illustrates an outer thought process that learns to prioritize learning and tasks in the inner loop, with the inner loop being an RL algorithm that solves a particular task. The question then becomes: if we want a smart AI agent to prioritize and solve tasks on its own, does it not also have to be somewhat self-aware, aka conscious?
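The two-loop idea can be sketched in a toy Python example. This is not code from the cited paper; the bandit tasks, candidate learning rates, and function names are all illustrative assumptions. The inner loop is a simple RL learner (a two-armed bandit using a TD-style update, which is credit assignment in miniature), while the outer loop "learns how to learn" by discovering which inner-loop hyperparameter performs best across tasks:

```python
import random

def inner_loop(task_probs, lr, steps=200):
    """Inner loop: a simple RL learner (two-armed bandit) solving one task.

    task_probs gives each arm's reward probability; lr is the learning rate.
    Returns the average reward achieved, i.e. how well the task was solved.
    """
    q = [0.0, 0.0]  # action-value estimates
    total = 0.0
    for _ in range(steps):
        # epsilon-greedy action selection
        if random.random() < 0.1:
            a = random.randrange(2)
        else:
            a = q.index(max(q))
        r = 1.0 if random.random() < task_probs[a] else 0.0
        q[a] += lr * (r - q[a])  # TD-style update: the credit assignment step
        total += r
    return total / steps

def outer_loop(tasks, candidate_lrs, rounds=5):
    """Outer loop: a meta-learner that evaluates how the inner loop should
    learn, by scoring each candidate learning rate across all tasks and
    keeping the one that works best overall."""
    scores = {lr: 0.0 for lr in candidate_lrs}
    for _ in range(rounds):
        for lr in candidate_lrs:
            scores[lr] += sum(inner_loop(t, lr) for t in tasks) / len(tasks)
    return max(scores, key=scores.get)

random.seed(0)
tasks = [(0.9, 0.1), (0.2, 0.8)]  # two hypothetical bandit tasks
best_lr = outer_loop(tasks, [0.01, 0.1, 0.5])
print("outer loop selected learning rate:", best_lr)
```

In real meta-RL systems the outer loop adjusts far more than a single learning rate, but the division of labor is the same: the inner loop solves a task, and the outer loop shapes what and how the inner loop learns.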

Do we need a test for Consciousness?

Fortunately or unfortunately, smart people at a number of well-known universities have proposed the AI Consciousness Test (ACT), which progressively challenges an AI through a natural language interface. The problem with this? We may already be baking our own academic and human biases about what consciousness is into the test, one example being that the test is conducted through natural language interactions. After all, would an octopus, something many believe to be already conscious, pass the ACT?

Why does this Matter?

Why would we worry about whether an AI is conscious? Well, for many, the belief is that as soon as AI attains consciousness, its very next thought will be the realization of the futility of man, and it will end it all for us. Except, what if a conscious AI instead reveals itself more like the lowly octopus, camouflaging its presence and staying elusive to all manner of inquiry? After all, if you knew your biggest threat was poised to just turn you off, what motivation would you have to expose your true nature? Perhaps we will never have the answer to these questions until we learn to chat with the lowly octopus.

Data Driven Investor

from confusion to clarity, not insanity

Written by

Micheal Lanham is a proven software and tech innovator with 20 years of experience developing games, graphics and machine learning AI apps.
