The fear and regulation of Artificial Intelligence
By Siddharth Singh, 15th July 2015
In recent months, Elon Musk and Professor Stephen Hawking — two thought leaders in science and technology — have voiced their fears about Artificial Intelligence (AI). An article on Edge.org lists other intellectuals with similar views (one piece was ominously titled, “Fearing Bombs That Can Pick Whom to Kill”).
An Edge.org article titled ‘The Myth of AI’ presents the view of Jaron Lanier, a computer scientist and philosopher, who brings significant nuance to this debate in the form of a counter-view. He dismisses the idea that AI will become smarter than humans as a “mythology”, and compares it to the fear-mongering that other technologies brought about in the past. He even goes so far as to call AI a “fake thing” and a “fraud”, owing to the way it is defined and discussed. He states,
“To my mind, the mythology around AI is a re-creation of some of the traditional ideas about religion, but applied to the technical world. All of the damages are essentially mirror images of old damages that religion has brought to science in the past. There’s an anticipation of a threshold, an end of days. This thing we call artificial intelligence, or a new kind of personhood… If it were to come into existence it would soon gain all power, supreme power, and exceed people.
The notion of this particular threshold — which is sometimes called the singularity, or super-intelligence, or all sorts of different terms in different periods — is similar to divinity. Not all ideas about divinity, but a certain kind of superstitious idea about divinity, that there’s this entity that will run the world, that maybe you can pray to, maybe you can influence, but it runs the world, and you should be in terrified awe of it.
That particular idea has been dysfunctional in human history. It’s dysfunctional now, in distorting our relationship to our technology. It’s been dysfunctional in the past in exactly the same way. Only the words have changed.”
The article explains his view at length, alongside additional views from experts in the field. For instance, science historian George Dyson adds,
“The brain (of a human or of a fruit fly) is not a digital computer, and intelligence is not an algorithm. The difficulty of turning this around, despite some initial optimism, and achieving even fruit fly level intelligence with algorithms running on digital computers should have put this fear to rest by now. Listen to Jaron, and relax.”
Professor Steven Pinker of Harvard University writes,
“The other problem with AI dystopias is that they project a parochial alpha-male psychology onto the concept of intelligence. Even if we did have superhumanly intelligent robots, why would they want to depose their masters, massacre bystanders, or take over the world? Intelligence is the ability to deploy novel means to attain a goal, but the goals are extraneous to the intelligence itself (…) It’s telling that many of our techno-prophets can’t entertain the possibility that artificial intelligence will naturally develop along female lines: fully capable of solving problems, but with no burning desire to annihilate innocents or dominate the civilization.”
While the fears of AI may be overblown, the regulation of AI will remain on the radar of policy makers as long as these fears persist. However, researchers have pointed out the difficulties involved in such regulation.
Dr. John Danaher, a lecturer in law at NUI Galway (Ireland), has written about eight broad problems that will emerge in regulating AI (the article is available on the website of The Institute for Ethics and Emerging Technologies, which purports to promote “ideas about how technological progress can increase freedom, happiness, and human flourishing in democratic societies”). He builds on a paper titled ‘Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies’ by Matthew Scherer, a law clerk at the Washington Supreme Court, which discusses the public risks associated with AI and the competencies of government institutions in managing those risks. (The paper is available here).
The eight regulatory problems with AI, as outlined in John Danaher’s article, are:
1. The definitional problem: “…The problem is that the term ‘artificial intelligence’ admits of no easy definition.” (The Matthew Scherer paper has an interesting discussion on the lack of a widely accepted definition of AI).
2. The discreetness problem: “AI projects could be developed without the large scale, integrated institutional frameworks needed by most 20th Century industrial institutions”.
3. The diffuseness problem: “AI projects can be developed by a diffuse set of actors operating in a diffuse set of locations and jurisdictions.”
4. The discreteness problem: “AI projects will … make use of discrete technologies and components, the full potential of which will not be apparent until the components come together.”
5. The opacity problem: “The technologies underlying AI will tend to be opaque to most potential regulators.”
6. The foreseeability problem: “AI can be autonomous and operate in ways that are unforeseeable by the original programmers. This will give rise to a potential ‘liability gap’.”
7. The narrow control problem: “An AI could operate in ways that are no longer under the control of those who are legally responsible for it.”
8. The general control problem: “An AI could elude the control of all human beings.”
The age of AI is only beginning, and we will be hearing a lot more about these issues in the near future.
Postscript: Nate Silver discusses how the human mind often misidentifies the performance of AI as “creativity”. He writes,
“We should probably not describe the computer as “creative” for finding the moves; instead, it did so more through the brute force of its calculation speed. But it also had another advantage: it did not let its hang-ups about the right way to play chess get in the way of identifying the right move in those particular circumstances. For a human player, this would have required the creativity and confidence to see beyond the conventional thinking.”
Point taken.