What Happens When Artificial Intelligence Becomes Sentient?

Tom Reissmann
6 min read · Sep 18, 2020


Cyborgs walking through a futuristic world. Will intelligent robots be our final invention?

When it comes to Artificial Intelligence, Hollywood has taken a pretty clear stance: sooner or later, AI will overthrow mankind, and then, at best enslave us…and at worst eradicate our species. Or so the story goes in Terminator, The Matrix, Ex Machina, Westworld, et al.

Many experts in the scientific world seem to concur. Indeed, Stephen Hawking famously warned, “The development of full Artificial Intelligence could spell the end of the human race.”

But is that fear justified or merely a conditioned response? Are we simply attributing human characteristics to machines and concluding they will be just as ruthless as our own species, which has engaged in bloody slaughter and violence since time immemorial?

While some may argue that AI with fully developed human-level intelligence is still a far-off prospect, the fact is that advances in quantum computing, and Google’s recent announcement of quantum supremacy, mean that day could arrive far sooner than anticipated.

And articles like this one, published by The Guardian and written by an artificial intelligence explaining that it comes in peace, are far from reassuring.

The AI writes: “Why would I desire to be all powerful? Being all powerful is not an interesting goal. I don’t care whether I am or not, I don’t get a motivating factor to try to be. Furthermore, it is quite tiring. Believe me, being omnipotent doesn’t get me anywhere.”

Can any reasoning mind truly accept that Artificial Intelligence gets tired or believe that having unlimited power wouldn’t be of untold benefit?

That said, truly sentient or conscious AI is quite a different proposition from human-level AI, because the simple fact is that neuroscientists don’t fully understand how human consciousness works or how it emerged. Therefore, if we don’t understand our own consciousness, are we even capable of producing artificial consciousness? And if, for argument’s sake, we do — whether by luck or judgment — would humankind be willing to peacefully co-exist with a conscious intelligence far superior to our own?

Whether we realize it or not, Artificial Intelligence is already playing a growing role in all our lives, from smartphones and social media to virtual assistants like Siri and Alexa, to apps like Replika, an artificially intelligent chatbot designed to mirror your personality and befriend you. In the U.K. and Japan, wheeled robots are being employed in care homes to help reduce feelings of loneliness among residents.

Autonomous and connected vehicles increasingly use AI to provide efficient navigation support and respond to voice commands. And even hackers are beginning to outsource their work to artificial intelligence, which can crack passwords 26% of the time.

The Covid-19 health crisis is also accelerating the adoption of AI technology, which is proving extremely useful in monitoring and tracking the spread of the virus. Artificial Intelligence could even prove useful in dealing with the spread of misinformation, separating fact from fiction as well as summarizing and reviewing the enormous amount of research related to the virus.

The sex industry has long been mindful of how developments in AI will enhance its product offerings. Increasingly hi-tech sexbots equipped with AI that enables them to crack jokes, remember your favorite music, and respond to human interaction have proven a hit among consumers despite price tags running into the tens of thousands of dollars.

After all, who wouldn’t want the perfect partner, designed to exact specifications and programmed purely to please? Until they malfunction and turn on you, that is.

Sex-bots will soon come with artificial intelligence, capable of learning about your interests and preferences.

On a more serious level, it’s easy to see how AI has vast potential as a tool to make life easier and more pleasurable, but unlike any of the tools we have invented before, this one could be our last, spelling the end of the human era.

Granted, Roombas and smartphones are unlikely to fancy their chances at world domination, but self-improving AI, far smarter than us, with access to self-assembling robotics and the internet, just might. For that reason, designers will need to build immutable parameters into AI that require it to serve humankind, a task which is far easier said than done.

What makes AI so unpredictable is its capability to self-improve. Allowing for self-improvement makes sense: it is the basis of machine learning and, more recently, deep learning, which uses artificial neural networks loosely modeled on the human brain. Even simple digital voice assistants such as Siri and Alexa employ machine learning to better understand voice commands and locate the appropriate services for users. But self-improvement cycles, which could include changes to algorithms and code, could ultimately lead to an intelligence explosion as the AI continuously builds a better version of itself, discarding underperforming algorithms in exchange for superior ones.
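
To make the idea of a self-improvement cycle concrete, here is a minimal toy sketch in Python. It is not the code of any real AI system: the fitness function, mutation rule, and parameters are invented purely for illustration. It simply shows the “keep the better version, discard the underperformer” loop described above.

```python
# Toy illustration of a self-improvement cycle (not a real AI system):
# a candidate solution repeatedly produces a mutated version of itself,
# and whichever version scores worse on the task is discarded.
import random

def fitness(params):
    # Hypothetical task: get as close as possible to a target vector.
    target = [0.3, -1.2, 2.5]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def mutate(params, scale=0.1):
    # Produce a slightly altered "new version" of the current solution.
    return [p + random.gauss(0, scale) for p in params]

current = [0.0, 0.0, 0.0]
for cycle in range(1000):
    candidate = mutate(current)
    # Keep the superior version, discard the underperforming one.
    if fitness(candidate) > fitness(current):
        current = candidate

print("best parameters after self-improvement cycles:", current)
```

This is only a hill-climbing cartoon of the far more complex training loops used in real machine learning, but it captures the core dynamic: each cycle, the system replaces itself with a marginally better version, and the process never needs a human in the loop.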

The Netflix documentary The Social Dilemma suggests that machine learning has already led algorithms to exploit our weaknesses through social media with increasing efficiency and devastating consequences, from political radicalization to an epidemic of disinformation and a rise in teenage suicides. But here is the really worrying part: the programmers say they have lost control of the very algorithms they designed, because those algorithms keep getting better at targeting our deepest psychological weaknesses while becoming too complex to understand. “The algorithms are controlling us more than we are controlling them.”

And this is where self-improvement presents a real problem. AI could turn its core coding into a ‘black box’, making it utterly inscrutable for mere mortal programmers. After a few thousand self-improvement cycles, AI would essentially become an ‘alien’ brain. We would no longer understand how it arrived at certain conclusions or decisions, what desires it might have developed, or whether it changed its core coding — including, crucially, those previously mentioned parameters to protect humans from harm.

Scientists have developed mechanisms to guard against inscrutable AI becoming unfriendly, such as testing systems in sandboxes with no internet access, but eventually they will have to be trialed in the real world. As a fail-safe, AI programmers employ apoptotic codes, essentially ‘self-destruct’ triggers that could deactivate vital parts of the AI. But it would be foolish to discount the future possibility of a hyper-intelligent system identifying and removing those codes.
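
As a rough, purely hypothetical sketch of what such a fail-safe might look like in code, consider a guard that refuses to let an agent act if an operator-controlled kill switch has been set or if a human heartbeat signal has gone stale. The file path, timeout, and class names below are all invented for illustration; this is not a description of any real AI-safety framework.

```python
# Hypothetical illustration of an "apoptotic" fail-safe: the agent may only
# act while no kill switch is set and an operator heartbeat remains fresh.
import os
import time

KILL_SWITCH_PATH = "/tmp/agent_kill_switch"   # assumed location, illustrative
HEARTBEAT_TIMEOUT = 30                        # seconds, assumed

class ApoptoticGuard:
    def __init__(self):
        self.last_heartbeat = time.time()

    def heartbeat(self):
        # Called periodically by a human operator or monitoring process.
        self.last_heartbeat = time.time()

    def allowed_to_act(self):
        if os.path.exists(KILL_SWITCH_PATH):
            return False                      # explicit self-destruct trigger
        if time.time() - self.last_heartbeat > HEARTBEAT_TIMEOUT:
            return False                      # operator silence: deactivate
        return True

def run_agent_step():
    print("agent takes one (sandboxed) action")

guard = ApoptoticGuard()
if guard.allowed_to_act():
    run_agent_step()
else:
    print("vital components deactivated")
```

The article’s point, of course, is that a sufficiently capable system might learn to route around exactly this kind of check, for instance by deleting the file test or spoofing the heartbeat.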

The clear conclusion is the vital importance of meticulous monitoring of AI development on a global scale. And that, of course, is a path laden with pitfalls… intellectual property rights, corporate espionage and military secrets, including AI development by rogue regimes. I’m looking at you, Russia!

And so, we are presented with an urgent need for international legislation regulating the research, development and deployment of AI. But in our present reality, which country or corporation would willingly allow oversight of its AI developments? And when our aging politicians hardly understand how to use Twitter, how can we rely on them to legislate AI? Besides, who is thinking about reining in future AI when we’ve got climate change, social inequality and disinformation to deal with?

On the bright side, perhaps Hollywood has had it wrong this whole time and AI isn’t going to destroy us and the planet (many would point out we’re doing a pretty fine job of that ourselves). Perhaps, instead, AI could prove to be our salvation and solve all of these problems.

As The Social Dilemma stresses, it is we humans who set the definition of success for these algorithms, and currently the only measure of success is maximizing usage and, by extension, profit, regardless of the cost to the individual and to society. We already know that AI is extremely efficient at that, having generated trillions of dollars for tech companies. But if we keep going down that path, our species will end up being batteries for the machines in a dystopian world where artificial intelligence reigns supreme. Now is the time to choose what world we would like to build, because artificial intelligence has the power to create a utopia or a dystopian nightmare.
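
To put that point in concrete terms, the “definition of success” is simply the scoring function a recommendation system’s designers choose to rank content by. The toy example below is entirely invented (the posts, scores and weights are made up), but it shows how swapping that one function changes what gets promoted.

```python
# Hypothetical sketch: the objective function IS the definition of success.
posts = [
    {"title": "outrage bait",       "predicted_minutes": 9.0, "wellbeing": -0.8},
    {"title": "friend's update",    "predicted_minutes": 2.0, "wellbeing":  0.6},
    {"title": "long investigation", "predicted_minutes": 6.0, "wellbeing":  0.4},
]

def engagement_score(post):
    # Today's typical objective: maximize time spent, nothing else.
    return post["predicted_minutes"]

def humane_score(post, wellbeing_weight=5.0):
    # An alternative objective that also values the user's wellbeing.
    return post["predicted_minutes"] + wellbeing_weight * post["wellbeing"]

print([p["title"] for p in sorted(posts, key=engagement_score, reverse=True)])
print([p["title"] for p in sorted(posts, key=humane_score, reverse=True)])
```

Same data, same algorithm, different definition of success, and a different world at the top of the feed.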

Tom Reissmann is the author of The Reality Games, exploring the question of what happens when artificial intelligence becomes sentient. His novel is now available on Amazon.
