Curiosity: On the Potential Potency of AI

For almost as long as there have been computers, there has been the idea that they might one day rise up and take over the world. Recently, with advances in machine learning, concern that this scenario could be realized has grown considerably, and not without some credibility. Several prominent figures in science and computing have expressed apprehension about the advancement and possible futures of machine learning techniques. Personally, though, I do not believe there is great reason for concern, owing to the very nature of our current machine learning systems.

In 2015, renowned theoretical physicist Stephen Hawking, businessman and inventor Elon Musk, and a considerable number of artificial intelligence experts signed an open letter calling for a ban on offensive autonomous weapons (1). Hawking has voiced many other concerns about the future of artificial intelligence and its potential to destroy civilization as we know it. He once wrote, "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks" (2). Similar warnings have been raised throughout the technology and research communities, in the hope of flagging possible dangers and setting a precedent that this uncharted territory be explored with caution. One of the greatest fears is that, as ever more capable artificial intelligence is developed, a machine will reach a point where it continuously compounds its own design improvements beyond its designers' control, an event called a "singularity" (3). Such a singularity is imagined to end in a world takeover: in almost an instant, the machine's intelligence and ability would increase exponentially, letting it control anything it could network with. That combination of intelligence and reach would, in theory, mean the end of civilization, as the machines would no longer need humans and might even come to see us as harmful.

As it stands, I do not believe machine learning has the aptitude, or even great potential, to rise up and cause either humanity's extinction or its enslavement. There is a vast difference between today's practice of applying fairly conventional mathematics to the analysis of data and the foreboding superintelligences that have graced cinema screens and bestseller pages for decades. Though I do not think a 'singularity' event is impossible, massive advancements in the field would have to occur first. One of the many voices behind this view is Stanford computer science professor Jerry Kaplan. Kaplan sees much of the concern as nonsense perpetuated by Hollywood, and holds that what little concern might be valid is tempered by the fact that any potentially overwhelming artificial intelligence remains quite far away. He frames it as 'an engineering problem, that we are developing advanced automation technology that may require professional standards and regulatory constraints, as occurs in many other fields from medicine to civil engineering' (4). As further technologies are developed, their designs and functions will call for standards and regulations that guard against any dangerous uprising. I think one of the key aspects that will prevent a 'global domination' lies in the functionality of these systems: as we continue to apply artificial intelligence, often modeled on basic patterns seen in human intelligence, to more and more problems, the advancements we make will be largely incremental gains in functionality rather than leaps toward conscious intelligence. The fullness of whatever 'intelligence' these systems have will be spent achieving the goals we set before them. These steps forward will continue to see machines help us, not become us.
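
To make the contrast concrete, here is a minimal sketch (my own illustration, not drawn from any of the cited sources) of what 'applying mathematics to the analysis of data' typically amounts to: a short Python program that fits a straight line to a few points by gradient descent. The objective it minimizes is fixed by the designer, and the program's entire 'intelligence' consists of adjusting two numbers to reduce that objective.

    # A toy example of machine learning: fit y = w*x + b to a handful of points
    # by repeatedly nudging w and b to reduce the mean squared error.

    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]  # (x, y) pairs

    w, b = 0.0, 0.0        # parameters the system is allowed to adjust
    learning_rate = 0.01

    for step in range(5000):
        # Gradients of the mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
        grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
        # Step the parameters against the gradient to lower the error.
        w -= learning_rate * grad_w
        b -= learning_rate * grad_b

    print(f"learned line: y = {w:.2f}x + {b:.2f}")
    # The program never questions or rewrites its objective; it only
    # minimizes the error it was given.

Real systems are, of course, vastly larger, but they broadly share this shape: an objective chosen by people, and machinery for optimizing it.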

Sources:

1. http://futureoflife.org/open-letter-autonomous-weapons/

2. http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence-but-are-we-taking-9313474.html

3. Vinge, Vernor. “The Coming Technological Singularity: How to Survive in the Post-Human Era”, originally in Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace, G. A. Landis, ed., NASA Publication CP-10129, pp. 11–22, 1993

4. http://www.forbes.com/sites/patricklin/2015/08/04/stanford-expert-says-ai-probably-wont-kill-us-all/#2c6a592a42f6