Does an AGI need to be conscious?

Peter Voss
2 min read · Jan 12, 2017


‘Consciousness’ (like intelligence) is another ‘suitcase word’ — it has many quite different meanings that are ‘thrown into a suitcase’, with a label slapped on the whole jumble. So let’s unpack it a bit.

We don’t need to concern ourselves here with the absurd notion that ‘rocks are conscious’, or that ‘everything is’ — we can just concentrate on what is relevant to AGI.

Consciousness at its base refers to awareness: Is the entity absorbing stimuli from the environment? Is it responding? However, we usually only apply these terms to living things (and not, for example, cars) — i.e. things that can also be unconscious. Be that as it may, AGI is a special case because while it is a machine, we still expect it to do human-level cognition; to have mental processes; to have a mind.

One can actually bypass much of the definitional debate by concentrating on what kind of consciousness-like properties an AGI needs. Here things become much clearer. A key property of human consciousness is that humans have conceptual self-awareness: we have abstract concepts for our physical self (my body), our mental self (my mind/thought processes), as well as for an integrated whole (me), which also includes our emotions, experience, history, goals, etc.

An AGI must have all these same properties!

AGI needs to be able to conceptualize which actions it itself took, versus those taken by external agents. Moreover, it needs to conceptualize what actions it is currently (and may potentially be) capable of, and what their likely effects are. It needs to understand which kinds of actions will affect it, and which will affect others. It needs to be aware of its own cognitive processes in order to monitor and potentially modify them (meta-cognitive control). Finally, it needs a theory of mind: the ability to understand other entities' motivations in context, so that it can make sense of their actions and interact with them appropriately.
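To make these requirements a bit more concrete, here is a minimal sketch in Python of the kind of self-model they imply. Everything in it (the class names, fields, and placeholder heuristics) is an illustrative assumption, not a proposed implementation:

```python
# Illustrative sketch only: all names and structure are assumptions,
# not an actual AGI design.
from dataclasses import dataclass, field

@dataclass
class SelfModel:
    """Conceptual self-awareness: body, mind, and the integrated 'me'."""
    body_state: dict = field(default_factory=dict)       # physical self
    cognitive_state: dict = field(default_factory=dict)  # mental self
    history: list = field(default_factory=list)          # record of events
    goals: list = field(default_factory=list)

class Agent:
    def __init__(self):
        self.self_model = SelfModel()

    def act(self, action):
        # Attribute the agent's own actions to the self.
        self.self_model.history.append(("self", action))

    def observe(self, event):
        # External events are recorded with a different attribution,
        # so "what I did" and "what happened to me" stay distinct.
        self.self_model.history.append(("external", event))

    def introspect(self):
        # Meta-cognitive control: the agent can inspect (and, in a real
        # system, modify) its own cognitive state.
        return self.self_model.cognitive_state

    def model_other(self, observed_actions):
        # Theory of mind, reduced to a placeholder: infer another
        # entity's motivations from its observed actions.
        return {"assumed_goals": list(observed_actions)}
```

The code itself is trivial; the point is the separations it makes explicit: actions attributed to the self versus the external world, an introspectable cognitive state, and a distinct model of other minds.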

Qualia: I will not address this in great detail because I see it as a non-issue for AGI, and a huge philosophical boondoggle. 'Qualia' are usually analyzed by unpacking what 'something feels like'. This presupposes some common mode of experience, which in turn presupposes an overlap in sensory/emotional machinery. Unless we painstakingly emulate human embodiment, there will be no such common ground between AGI and human. Even then, AGI cognition will operate so differently that there will be a huge gulf between us and them. This does not mean that AGIs won't be able to intellectually understand human emotion and experience (and to some extent vice versa); it just means that their experience will be very different (presumably with dramatically less sensory and emotional input automatically tied to cognition).
