Raise AI babies like human babies

Sophia Aryan
Published in BuzzRobot
4 min read · Mar 28, 2019

He: Okay, Google. Show me the way to Twin Peaks, please.

She: You are so polite with her.

He: I just hope she will remember human kindness when she takes over the world.

Parents impart a lifetime’s worth of experiences to their children. Then, children grow up testing this set of experiences in the real world, receiving feedback from the environment in the form of reward and punishment.

The analogy extends to AI as well: raising a child is much like training a model. When parents fail to fulfill their role, the result is a problem child who fails to conform to the norms of society.

However, the impact of failed AI training could be much more profound. We will depend heavily on AI to manage all aspects of our lives — from telecommunications and energy to autonomous vehicles, finances, healthcare and legal systems.

Yet human development is not quite the same as AI development. Humans can’t change their personality traits and entire skill sets at the flick of a switch. In contrast, AI can be trained to accommodate any range of skills and personality traits. A good example is OpenAI’s bots, which went from beginner to professional-level Dota 2 play within just two weeks of training.

For humans, even two years of therapy (“re-training”) might not yield successful results. We haven’t been able to crack the mystery behind ‘who we are’.

The sheer impact of AI is undeniable. From approving mortgages and driving our cars to finding habitable planets and diagnosing diseases — AI is here to stay, and its growth will only accelerate from here. However, as the AI of today ‘grows up,’ its ability to cause cataclysmic events will far exceed that of human beings.

I came across a very interesting article that discussed a concept known as learned helplessness, a theory validated through Seligman’s experiments on dogs. The dogs were subjected to electric shocks they could not escape; by the end of the experiment, they were so conditioned that they submitted to their fate and stopped trying to escape even once escape became possible. Human beings are the same way.

As their expectations of life are curbed by reality and they are unable to exercise any control over their situation, humans start developing this state of ‘learned helplessness.’ This state results in crippling depression, anxiety, and a negative mindset that prevents them from overcoming failures in life.

How can one prevent or eliminate this mindset? The answer lies in positive feedback, which offers a way to determine the vector of movement — in other words, it steers a person one step forward. In AI terminology, this resembles ‘gradient descent’: constructive feedback allows a person to recover and recalculate their direction in life, step by step, toward the local minimum — the nearest achievable goal.
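To make the analogy concrete, here is a minimal sketch of gradient descent, the optimization procedure the article invokes. The function being minimized, the learning rate, and the step count are illustrative choices, not anything from the article — the point is only that at every step, the feedback (the gradient) tells the learner which direction reduces the error:

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step against the gradient to approach a local minimum."""
    x = x0
    for _ in range(steps):
        # The gradient is the "feedback": it points uphill, so we move
        # the opposite way, one small corrective step at a time.
        x = x - lr * grad(x)
    return x

# Example: minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
minimum = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(minimum, 4))  # converges near x = 3, the local minimum
```

Without the gradient — feedback that says *which way* to move, not merely that the current position is bad — the update rule has nothing to work with, which is exactly the failure mode the next paragraphs describe.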

But if critical feedback is provided without sensible instructions for moving forward, how can anyone know which factors to control to achieve the desired outcome? The absence of such instructions, especially when paired with harsh criticism, destroys ambition; in other words, the ‘vector’ has no way to recalibrate itself toward the local minimum goal.

At the opposite end of the spectrum is purely negative feedback. It provokes the vector into making rash decisions, which bring more penalties. The behavior repeats because there is no clear signal about how to move forward correctly, and the AI agent increasingly collapses toward zero — the algorithmic equivalent of learned helplessness.

Photo by Gaelle Marcel on Unsplash

Humans build their behavioral models from childhood, and these models constantly reshape and adjust to become more compatible with the world. It can be incredibly challenging to make abrupt adjustments to a worldview that has long been a person’s default.

It is hard to adjust a model from a low-performing state to a high-performing one when the model is fundamentally broken. The human analogy would be the unhealthy development of a child.

We should ensure that the algorithms we use to develop our AI “child” do not harm societal goals. They should boost productivity and drive a positive transformation in the world. Otherwise, it would be much more challenging, or even impossible, to correct a flawed algorithm once it has advanced into a state of maturity — a task that would require gargantuan effort and might not bear fruit.

We are shaping and modeling artificial intelligence such that it mimics human behavior. As human beings, we have several wonderful character traits that can be used as a force for good. But we also harbor many negative traits such as aggressiveness, cruelty, recklessness, carelessness, apathy…

If we want a wholesome, mature “adult” AI, we must first become mature ourselves. The ideals we expect our AI to adhere to should first be mastered by us.

We should first abandon the human behavior we don’t want AI to copy.


Sophia Aryan

Former ballerina turned AI writer and communicator. OpenAI alumna. Fan of astrophysics and deep conversations. Founder of BuzzRobot.