Some Thoughts on Artificial Intelligence Singularity

Singularity is a mathematical term used in many different contexts. In technology, and specifically in the domain of artificial intelligence, the concept carries far-reaching philosophical, social, and technological implications that will shape the future of humanity and the direction of technological advancement.

The Singularity will mark the moment when artificial intelligence becomes smarter than humans. As Wikipedia defines it:

The technological singularity (or the singularity) is the hypothesis that the invention of artificial superintelligence (ASI) will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization. According to this hypothesis, an upgradable intelligent agent (such as a computer running software-based artificial general intelligence) would enter a “runaway reaction” of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an intelligence explosion and resulting in a powerful superintelligence that would, qualitatively, far surpass all human intelligence.

With the advent of techniques like generative adversarial networks (GANs), the possibility of an AI apocalypse no longer seems far-fetched. The problem becomes more complex when emerging disruptive technologies such as quantum computing, AR/VR, blockchain, and 3D/4D printing converge and interact in ways not yet conceived by the human mind.
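For readers who have not come across them, here is a minimal, purely illustrative sketch of the adversarial idea behind GANs: a generator network learns to produce samples that a discriminator network cannot tell apart from real data. This is my own toy example (assuming PyTorch; the 1-D Gaussian task, layer sizes, and hyperparameters are arbitrary choices), not drawn from any of the works discussed here.

```python
# Minimal GAN sketch (illustrative only): the generator learns to mimic
# samples from a 1-D Gaussian while the discriminator tries to tell
# real samples from generated ones. All sizes and hyperparameters are
# assumptions made for this example.
import torch
import torch.nn as nn

latent_dim = 8  # size of the random noise vector fed to the generator (assumed)

generator = nn.Sequential(
    nn.Linear(latent_dim, 16), nn.ReLU(), nn.Linear(16, 1)
)
discriminator = nn.Sequential(
    nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(1000):
    # "Real" data: samples from N(4, 1.5) that the generator must imitate.
    real = torch.randn(64, 1) * 1.5 + 4.0
    noise = torch.randn(64, latent_dim)
    fake = generator(noise)

    # Discriminator step: label real samples 1 and generated samples 0.
    opt_d.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator output 1 on fakes.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
```

The adversarial dynamic is the point: each network improves only by outdoing the other, which is one reason the technique is often invoked in discussions of self-improving systems.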

My thoughts on this subject are not new. Almost 22 years ago, when I watched the movie ‘Terminator’ for the first time, I was fascinated by the ideas of time travel and of how humans might save themselves from existential termination at the hands of the rising machines. Such fears go back to the days of the Industrial Revolution, when machines and automata were compared to Moloch, an ancient Canaanite deity associated with child sacrifice. Great minds of computer science, such as Alan Turing, expressed their fears on this subject. More recently, the posthumous publication of Stephen Hawking’s book Brief Answers to the Big Questions has raised the topic once again. As Vox noted,

Hawking’s biggest warning is about the rise of artificial intelligence: It will either be the best thing that’s ever happened to us, or it will be the worst thing. If we’re not careful, it very well may be the last thing.

As I was revisiting the subject, I found these two impressive viewpoints, shared by experts in the field and published by the MIT Technology Review:

No, the Experts Don’t Think Superintelligent AI is a Threat to Humanity

Yes, We Are Worried About the Existential Risk of Artificial Intelligence

The former post is an opinion piece by Prof Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence and Professor of Computer Science at the University of Washington. The latter is by Allan Dafoe, an assistant professor of political science at Yale University, and Prof Stuart Russell, a professor of computer science at the University of California, Berkeley.

Against this backdrop, both discuss the work of Professor Nick Bostrom, a Swedish philosopher at the University of Oxford and the author of the philosophical treatise Superintelligence: Paths, Dangers, Strategies.

The viewpoints and convictions of the experts on this subject may be summarized as follows: in the imminent future (the next 30–70 years), AI is unlikely to rise so alarmingly that our existence is threatened, but a long-term view should recognize the possibility of AGI posing an existential threat to mankind, and that possibility should compel us to work towards a responsible AI future.

#AI #Machine_Learning #Singularity #SuperIntelligence #ASI

(A scaled-down version of this article appears on my LinkedIn profile.)