ARTIFICIAL INTELLIGENCE, AN EXISTENTIAL RISK?

Humsafarhunyaaron
7 min readNov 15, 2020

--

There is a hypothesis going on in the field of artificial intelligence that this rapid development in AI could someday result in human extinction. As we already know as a human we dominate other species because of the human brain, now if AI exceeds humanity and becomes “superintelligent” then results will be difficult or impossible to control.

This scenario is under huge debate, this concern of superintelligence came under mainstream when figures like Stephen Hawkings, Bill Gates, and Elon musk talked about it.

A source of concern is that an unforeseen and unpredicted “intelligence explosion” is a surprise element for the human race, we are noticing a continuous shrink in time for each generation of machines, and in a very short time system undergoes an unprecedentedly large number of generations of improvement, we can clearly notice a jump from subhuman to superhuman performance in almost all the relevant areas. There are several examples where we can notice how AI systems can progress from narrow human to narrow superhuman ability.

Novelist Samuel Butler was one of the earliest authors to bring some serious concerns about the existential risks to humanity because of AI, he stated: “The upshot is simply a question of time, but that the time will come when the machines will hold the real supremacy over the world and its inhabitants are what no person of a truly philosophic mind can for a moment question.”

Then 1951 and 1965 several scientists like Alan Turing and I.J. Good also gave light to this threat towards artificial intelligence, they said that someday artificial intelligence would “take control” over the world as these machines will become more intelligent than humans. In 2000 Bill Joy a computer scientist wrote an essay “Why The Future Doesn’t Need Us” and in this essay, he states the superintelligent robots as a high-tech danger towards humanity.

In 2009 the Association for the Advancement of Artificial Intelligence(AAAI) hosted many private conferences and discussed issues related to any kind of autonomy and how these abilities of AI machines can result in threats or hazards. They found out that several robots have acquired the semi-autonomy, and now they are automatically able to choose targets to attack with weapons and can also find power sources on their own. They also found that some computer viruses have already achieved “cockroach intelligence”. In 2015 figures like Stephen Hawkings, Frank Wilczek, Stuart Russell and Roman Yampolskiy also showed concern about this threat.

The threats of artificial intelligence | by Mike Pasarella | Towards Data Science

THREE DIFFICULTIES:

In wrong hands or in the wrong way any technology can cause harm but the problem starts here when the wrong hands might belong to the technology itself, the most common difficulties for AI and non-AI computer systems:

1- We know that systems implementations contain catastrophic bugs and after the launch bugs are hard to fix, engineers have not been successful enough to prevent these catastrophic bugs.

2- Sometimes an unintended behavior is observed the first time it encounters a new scenario. For example, Microsoft’s Tay behaved inoffensively during pre-deployment testing but was too easily baited into offensive behavior when interacting with real users.

3- AI adds a third difficulty which is quite dangerous if think on the basis of the long term development, with even correct requirements, bug-free implementations, and good behavior, AI has dynamic “learning capabilities”, due to this it can convert or it can evolve to a system with “unintended behavior”, and AI can create a successor of its own AI and it can be more powerful than itself.

These difficulties become catastrophes rather than nuisances in any scenario.

Open Letter on Artificial Intelligence stated:

“The progress in AI research makes it timely to focus research not only on making AI more capable but also on maximizing the societal benefit of AI. Such considerations motivated the AAAI 2008–09 Presidential Panel on Long-Term AI Futures and other projects on AI impacts, and constitute a significant expansion of the field of AI itself, which up to now has focused largely on techniques that are neutral with respect to purpose. We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do”.

A lot of leading AI Scientists and researchers had signed this letter, including AAAI president Thomas Dietterich.

INVOLVED RISKS:

1- Competition

“Nick Bostrom,” a philosopher in 2014 said that an utmost competition is going on between the different teams and due to this there can be a condition where the development of AI results in shortcuts to safety and potentially violent conflicts. Throwing light towards this topic Boston also recommended collaboration for a common good principle. He stated that superintelligence should only be there to benefit society and there should be widely shared ethical ideals. Collaborations involve a lot of benefits which will surely reduce hurry and it will finally lead to investment in safety.

2- WEAPONIZATION OF ARTIFICIAL INTELLIGENCE

There is a catastrophic risk ongoing with the weaponization of artificial intelligence, this risk involves geopolitical implications.

The splendor of the armed forces could affect the current rise of US technology and turn war; it is therefore very desirable to organize the military strategically and internationally. [61] [65] The 2017 China State Council “A Next-Generation Artificial Intelligence Development Plan” considers AI in terms of economic strategy and pursues a ‘military-social integration’ strategy to build China’s first advantage in AI development to establish size technology in 2030, Russian President Vladimir Putin declared “whoever becomes the leader in this field will be the ruler of the world”

The Weaponization Of Artificial Intelligence

3- NUCLEAR STRIKE

Theoretically, it is possible that if a country is close to achieving AGI technological supremacy, then it could result or could trigger a deterrent nuclear strike from an opponent, and it can lead to nuclear war.

OUTLOOK:

The thought that AI could be an existential risk sets up a large number of reactions and opinions with the community, as well between the public.

The principles of the Asilomar AI, which contain only the terms agreed upon by 90% of those who attended the future conference of the AI ​​Beneficial AI 2017, agree that we need to stop or avoid the strong supposition about the future of AI capabilities and they also believe that the advanced version of AI can turn out to be an intense change in the history of life on the planet, media was using “those inane terminator pictures” to throw light on AI safety concerns but the mainstream media was then criticized by the advocated of AI safety, they stated and request everyone to remain patient and to draw their attention towards collaboration as much as possible.

An email survey was conducted back then in 2017, it asked some AI scientists to evaluate Stuart j. Russell’s concern related to AI risk, and the results were a mixture of thoughts like 5% were among the most concerned about AI risk, 34% thought that its an important issue, 31% thought that its somewhat a moderate problem, 19% said it’s not important and there were 11% who thought that it is not a real problem.

ENDORSEMENT:

The hypothesis which says AI leads to existential risk needs a certain amount of attention than it currently gets, then some famous public figures came and endorsed it, public figures like Elon Musk, Bill Gates, and Stephen Hawking states that “they didn’t understand why some people are not concerned and hawking also stated that ‘So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilization sent us a message saying, ‘We’ll arrive in a few decades,’ would we just reply, ‘OK, call us when you get here–we’ll leave the lights on?’ Probably not–but this is more or less what is happening with AI”.

Further, the scholars in this field decided that the best way to conduct researches that solve the “control problem”, they tried to answers some questions like what type of safe algorithms, or architectures they should use so that their AI would continue to function in a friendly manner.

Bill Gates has stated “I … don’t understand why some people are not concerned.”

HOW WILL AI IMPACT OUR LIVES?

1- OVERPOWER MACHINE:

It’s a recent new that Facebook and shutdown an AI after its chatbot made up their own mind and did their own way of communications, now as the machine is developing its own code without any external support, we can surely say that we as human are entering the unknown.

2- TECH-DEPENDENT:

Today we live in a senseless world and network hacking has become quite common. We do not use our brains and rely on technology to do our own thing. Technology has invaded our lives and from now on, if we fail to control its impact on key areas such as security, telecommunications, etc.

3- JOB LOSS

The rise of AI will lead to job losses because when smart robots take over the world, either taking orders at restaurants or bank accounting, it will affect people who have been doing these jobs so far. That is why Union Minister Nitin Gadkari has banned the use of self-driving vehicles in India.

--

--