Should there be a self-conscious AI?

Sankalp Shangari
Published in HashTalk
3 min read · Sep 27, 2018

Have you ever found yourself waking up to reality from a cluster of thoughts? Many of these thoughts arise from our brain's need to comprehend the contents of other human brains. This intellectual espionage is called the Theory of Mind, and it has been a major facilitator of human evolution.

Ever since humans started living in groups, they have been trying to understand the possible reasons behind the actions of their fellow human beings. It was, and remains, crucial for us to anticipate danger and to empathize with others so we can collectively solve human problems.

When Suzie thinks about what is on Emma's mind, she is exercising the first order of Theory of Mind; when she wonders what Emma thinks about her, she reaches the second order. Things get trickier at the third order, when Suzie starts questioning what Emma thinks Jacob thinks about Nina.
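The nesting described above is recursive, which makes it easy to sketch in code. Below is a toy illustration (my own, not from any real AI system) that represents each "order" of Theory of Mind as one level of nesting in a belief structure:

```python
# Toy sketch: an "order" of Theory of Mind as one level of belief nesting.
from dataclasses import dataclass


@dataclass
class Belief:
    holder: str              # the agent who holds this belief
    about: "Belief | str"    # either a plain fact, or another agent's belief

    def order(self) -> int:
        """Nesting depth: 1 for 'Suzie thinks X',
        2 for 'Suzie thinks Emma thinks X', and so on."""
        if isinstance(self.about, Belief):
            return 1 + self.about.order()
        return 1


# Third order: Suzie wonders what Emma thinks Jacob thinks about Nina.
third = Belief("Suzie", Belief("Emma", Belief("Jacob", "Nina is kind")))
print(third.order())  # -> 3
```

The point of the sketch is only that each additional order multiplies what an agent, human or artificial, must track, which is why higher orders quickly become cognitively expensive.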

What if all these names belonged to different AIs, and this were the thought process of a single AI? Is it possible to endow an AI with the Theory of Mind?

More often than not, this Theory of Mind, which is supposed to give us the power of understanding emotions, turns on us. We get stuck in a self-created loop of self-conscious thoughts that lead nowhere. Much of the time, the explanations we give for our own behavior are faulty, whereas we are more accurate when interpreting other people's actions.

In the present technological realm, where we are constantly endeavoring to replicate ourselves in the form of self-conscious machines, it is crucial that we recognize our own marvels and idiosyncrasies.

Animals also exhibit the Theory of Mind, which helps them communicate with one another. Dogs, best known for being companions to humans for centuries, have among the most developed senses of self-awareness. It is evident in the way they sit in anticipation, trying to predict whether their human friend will give them food or take them out for a walk.

In a similar manner, artificial intelligence might develop this Theory of Mind by living in close proximity to humans as a domesticated companion. An AI that demonstrates Theory of Mind would be able to develop an 'understanding' of human actions rather than merely 'observing' them. That could facilitate the self-evolution of AI into an intellectual yet empathetic being, as it did for humans. But what if an AI developed this trait in an environment where most stimuli signal threat? What if an AI reads danger into a fellow human's or AI's actions more often than it empathizes with them?

If artificial intelligence develops all the traits of the human mind, including its consciousness, then there is a great possibility that its consciousness can turn on it, just as ours does on us.


Sankalp Shangari
Investment banker turned tech entrepreneur and investor. Author, speaker, angel investor.