Artificial Intelligence: Is There Reason to Be Fearful?
“The development of full artificial intelligence could spell the end of the human race.” – Stephen Hawking
If three of the keenest technology minds today are cautioning humankind writ large about the future development of artificial intelligence (AI), should we be worried? The most recent edition of The Economist examines this issue.
The pursuit of AI is no longer the stuff of science fiction novelettes and comic books. The scientific pursuit of superintelligent machines, machines with a capacity for developing general intelligence far greater than that of humankind, is already underway. Advanced AI, the point at which machine learning exceeds the human capacity for knowledge, could lead to unrestricted, exponential advancements in technology and a deepening of the field of AI. But therein lies the dilemma: once advanced AI has moved beyond the human baseline, is there any way to continue to influence subsequent advancements? Is there any use for humankind at all?
The pursuit of AI has been an endeavor of the scientific community since well before the development of Deep Blue, with roots in the 1956 Dartmouth College workshop. Researchers initially sought to discover how machines could use language to solve problems otherwise reserved for human reasoning. In the decades that followed, computer programs would successfully compose improvisational music, write poetry, and outperform doctors in specific medical diagnostic exercises, all via original thought. The genesis of these works is the programmer, to be sure, but the origin of creativity and execution is the machine. These advancements are benign for now, but they may be a portent of a more dire future.
Just as machines have learned the skills of music, art, and medicine and exercised creative license in executing that work, on the far side of the precipice of superintelligent machines lies a world where machines use learned behaviors and skills to perform far faster, more efficiently, and more accurately than humans could ever achieve.
As Nick Bostrom notes in his book Superintelligence, “Equally important is the fact that [a machine] is not restricted to random mutations. If [the machine] can trace a cause for some weakness it can probably think of the kind of mutation which will improve it.” This is the crux of the theory of continuing, exponentially rapid self-improvement, which ostensibly will create the sharp upward curve accelerating superintelligent machines beyond human cognitive capacity.
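The compounding dynamic behind that “sharp upward curve” can be sketched as a toy simulation. Everything here is an illustrative assumption, not anything from Bostrom's book: the function name, the fixed per-generation gain, and the premise that each generation's improvement compounds on the last.

```python
# Toy model of recursive self-improvement (an illustrative assumption,
# not Bostrom's actual argument): each generation, the system applies
# its current capability to redesign itself, so gains compound.

def self_improvement_curve(initial=1.0, gain=0.5, generations=10):
    """Return capability after each generation of self-directed redesign."""
    capability = initial
    history = [capability]
    for _ in range(generations):
        # A targeted improvement, not a random mutation: each pass
        # multiplies capability by a fixed growth factor.
        capability *= (1 + gain)
        history.append(capability)
    return history

curve = self_improvement_curve()
# Growth is geometric: each value is 1.5x the previous one, so the
# curve bends sharply upward rather than rising linearly.
```

Even this crude geometric model shows why the curve is so steep: ten generations at a 50% per-generation gain multiply the starting capability by more than fiftyfold, whereas ten equal additive steps would merely sextuple it.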
It may take the collective effort of humankind, across technology fields, academia, social groups, and the world, to plan appropriately for the likely outcomes of this pursuit. Although AI has been a technological holy grail since the 1940s and 1950s, by the time we realize the gravity of what we are creating, it may be too late.
When AI decides it hasn't a need for humankind any longer, will humankind be ready?