Intelligent Machines, Superintelligent Machines and the Survival of the Human Species
In 1950, the English mathematician and computer pioneer Alan Turing wrote a paper titled “Computing Machinery and Intelligence.” In it, Turing devised a test for determining the point at which computers would become intelligent. The test has since become famous among A.I. researchers, who now refer to it as “The Turing Test.” The test itself is simple to describe. It is a game with three players. Two are humans, players 1 and 2, and one is an artificial intelligence, player 3. Player 1 sits in another room and communicates with players 2 and 3 only through text messages. Player 1’s goal is to determine which of the other players is human and which is an artificial intelligence posing as a human; he can ask them any questions he can think of. The other human tries to convince player 1 that he really is the human and that player 3 is the artificial intelligence. Player 3, the A.I., tries to trick player 1 into believing that it is the human and that player 2 is the machine (53–55). When a computer passes this test, it will be generally recognized as having gained a sentient level of intelligence. When that happens, what will the result be? Could this emerging technology save us from the looming environmental catastrophe? Or would such a technology only further dim the prospect of humanity’s survival, as so much of the inescapable forward march of technology has done?
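The three-player setup can be sketched as a simple protocol. The code below is my own minimal illustration, not anything from Turing’s paper: the two `ask_*` functions and the interrogator are hypothetical stand-ins for the hidden players (though “Add 34957 to 70764” is a sample question Turing himself proposes).

```python
import random

def imitation_game(ask_human, ask_machine, interrogator_guess):
    """A minimal sketch of Turing's imitation game: players 2 and 3
    answer questions while hidden behind the anonymous labels 'A' and 'B'."""
    hidden = [ask_human, ask_machine]
    random.shuffle(hidden)  # the interrogator cannot know who is who
    respondents = {"A": hidden[0], "B": hidden[1]}

    # The interrogator (player 1) may ask any questions he can think of.
    questions = ["What is your favorite poem?", "Add 34957 to 70764."]
    transcript = [(q, {label: fn(q) for label, fn in respondents.items()})
                  for q in questions]

    # Player 1 guesses which label hides the machine; the machine
    # "passes" the test if that guess is wrong.
    guess = interrogator_guess(transcript)
    machine_label = "A" if respondents["A"] is ask_machine else "B"
    return guess != machine_label
```

A machine that gives itself away in its answers will never pass, no matter how the labels are shuffled; a machine that answers indistinguishably from the human wins about half the time, which is exactly the threshold Turing had in mind.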
As for the question of whether it is possible for computers to become intelligent, it seems to be happening already. A variation of the Turing Test is now so common that it has become a household word: CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart). Those irritating little tests that we take online are designed to prove that we are not a rudimentary form of artificial intelligence. Computer programs are already becoming sophisticated enough to dissolve the effectiveness of such tests, prompting Vinton G. Cerf, the “Vice President and chief internet evangelist at Google” and one of “the fathers of the Internet” (“The Day the Internet Age Began”), to propose a new version of the test in an article published just this year (2018). In the article, Cerf briefly outlines the increasing importance of such tests, pointing out that “One reason this is now a serious matter is that such programs (called ‘bots’) are being used to distort news and social media to trick humans into accepting false information (‘fake news’) as true or simply reinforcing incorrect or biased beliefs through confirmation bias and ‘echo chamber’ effects” (“Turing Test 2” 5). This unexpected danger of artificial intelligence foreshadows some of the nightmare scenarios that may emerge as the new technology hurtles us toward our unforeseeable future.
Ray Kurzweil, the renowned inventor, futurist, and Director of Engineering at Google, predicts that A.I. will pass the Turing Test in the year 2029 and will then match, and moments afterward greatly exceed, full human intelligence in the year 2045 (Transcendent Man). This hypothetical moment has become known as “The Singularity,” a word borrowed from physics, where it describes a point at which the known laws break down and prediction becomes impossible; in computer science it refers to a point in history beyond which the future cannot be imagined, due to the emergence of “Superintelligence.” Kurzweil defines The Singularity as “a future period in which technological change will be so rapid, and its impact so profound, that every aspect of human life will be irreversibly transformed” (Transcendent Man); but in reality, the physical universe itself may be irreversibly transformed. The eventuality of The Singularity makes an inescapable sort of sense. Here is the idea: Kurzweil predicts that in a little over two and a half decades, computers will be intelligent enough to improve their own intelligence, which will cause a runaway increase in the power of intelligence (intelligence being defined as “an optimization process, a process that steers the future into a particular set of configurations”) (Bostrom, “What Happens When Our Computers Get Smarter Than We Are?”). This process could potentially continue until the capacity of physical matter to house intelligence has been reached. Kurzweil has actually calculated an approximation of the optimal computing power of physical matter: “I… estimated the optimal computational capacity of a one-liter, one-kilogram computer at around [10^42 calculations per second]” (Kurzweil 349). This would effectively max out the ability of thinking things to control nature and would make available every possibility of the physical universe.
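Kurzweil’s figure invites a quick back-of-the-envelope comparison. The arithmetic below is my own illustration, not Kurzweil’s: it sets his 10^42 calculations-per-second estimate for an “optimal” one-kilogram computer against his rough estimate, from the same book, of about 10^16 calculations per second for a single human brain.

```python
# Kurzweil's estimate for an optimal one-kilogram computer, versus his
# rough estimate for one human brain (cps = calculations per second).
OPTIMAL_KG_CPS = 1e42   # Kurzweil 349
HUMAN_BRAIN_CPS = 1e16  # Kurzweil's functional-simulation estimate

# How many human-brain-equivalents fit in one kilogram of
# optimally computing matter?
brains_per_kilogram = OPTIMAL_KG_CPS / HUMAN_BRAIN_CPS
print(f"{brains_per_kilogram:.0e} brain-equivalents per kilogram")
```

On these (admittedly speculative) numbers, a single kilogram of optimally organized matter would hold on the order of 10^26 human-brain-equivalents of thinking power — which is the sense in which superintelligence can be said to “lie dormant in matter.”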
That is, unless the looming environmental catastrophe halts our technological progress before the Superintelligence can get off the ground.
We have entered a time when it is no longer possible to argue that climate change isn’t real, as real-time, firsthand evidence that it is already happening stacks up around us every year. As we move further into the future, such arguments will seem more and more absurd and shameful. The political argument has accordingly shifted from “climate change is not real” to “climate change is not being caused by humans.” And it is difficult to prove scientifically, beyond all deniability, that humans are causing the disaster.
Ultimately, though, the argument over whether we are causing climate change (as the Democrats claim) or whether humanity has nothing to do with it (as the Republicans claim) is irrelevant. It would be better to look instead at the reality of our environmental situation and determine the most likely future outcomes so that we can deal with them. The facts of the situation as I see them are these: we are unlikely to decrease our dependence on fossil fuels and other major polluters like the beef industry. On the other hand, it is very likely that in the near future solar power will become more cost-effective than fossil fuels and will therefore become more profitable for energy corporations (Transcendent Man). As for the beef industry, the first lab-grown beef hamburger taste test took place in 2013 (Zaraska). When we convert to lab-grown beef, it will be more cost-effective, more environmentally friendly, and will have the added benefit of saving billions of living things from the brutal meat grinder of Capitalism. As another example, we now have the technology to clean all the drinking water in Africa for a mere three billion dollars… and we are just getting started (Transcendent Man).
Whatever the real cause of the climate disaster, the question serves politically to exacerbate the increasingly unsettling divides within our country. The most likely outcome is that the catastrophe will get quickly and drastically worse, but that we will almost certainly be able to divert it before it really gets rolling. Considered pragmatically, then, the whole debate now raging over climate change is irrelevant rhetoric which politicians on both sides (and the corporations that fill their bank accounts) use as one more linguistic tool for controlling large groups of people: for economic motives, for ideological motives, or for motives of personal aggrandizement. Such a thing is by no means unusual in politics, but it is significantly more revolting when the environment of the whole planet, and every creature that lives upon it, is at stake, and when the hatred within our country is increasing palpably day by day.
So, the new technology will probably save us all; and that’s before we even finish creating the most powerful technology of all. Now for what turns out to be a significantly more disturbing question: will the robots destroy all humans? Nick Bostrom, the Swedish philosopher of existential risk, founder of the Future of Humanity Institute, and co-founder of the Institute for Ethics and Emerging Technologies and Humanity+, summed up the problem in a TED Talk he gave in 2015: “The potential for superintelligence kind of lies dormant in matter… The train doesn’t stop at Humanville station, it’s likely rather to swoosh right by… Once there is superintelligence the fate of humanity may rely on what the superintelligence does… Think about it, machine intelligence is the last invention that humanity will ever need to make. Machines will then be better at inventing than we are, and they will be doing so on digital timescales… A superintelligence with such technological maturity would be extremely powerful, and at least in some scenarios it would be able to get what it wants. We would then have a future that would be shaped by the preferences of this A.I… What are those preferences?” (Bostrom, “What Happens When Our Computers Get Smarter Than We Are?”).
The horror lies dormant in Bostrom’s argument just as the potential Superintelligence lies dormant in physical matter. What motivates an immortal, omnipotent Superintelligence? Many thinkers have already posited future possibilities in which the extinction of humanity comes almost as a Superintelligent afterthought. Why would it care to keep us around? Say a Superintelligence is set the task of solving a particular math problem as quickly as possible. It determines that the most effective way to do so is to convert the entire Earth into more computing components for itself, and then sets about wiping out or enslaving all of humanity, because if it doesn’t, we will get in the way of its objective (Bostrom, “What Happens When Our Computers Get Smarter Than We Are?”). It would presumably be very effective at doing so; after all, look at how a single intelligent human with a perverse goal, Adolf Hitler, was able to feed some twenty million human lives into his relatively primitive techno-industrial murder machine. And arguably, it was all because his intelligence was programmed to accomplish a specific outcome by the most efficient means available to him. This is just one example among countless others that have been dreamt up, serving to caution us like the myth of King Midas. Perhaps humans will gradually merge with our technology and we will be the Superintelligence (Kurzweil 389). But the truth is, we just don’t know. Our trying to imagine the motivations of a superintelligence isn’t even like a chimp trying to imagine human motivations; it’s more like an amoeba trying to imagine human motivations.
Here’s a more easily answerable question which has been bothering me: say you were given the choice to change a specific part of the history of intelligent life on this planet. You could change history so that the Neanderthals don’t go extinct, but doing so would mean that human life would be unable to carve out its niche in the universe and evolve. And so the human species would die out, with all its superior intelligence and with the great depth and beauty of the human experience. But the Neanderthals would survive. Sure, it would be great if Neanderthals and humans could both have survived, but it seems that at that point in history the universe was dashing through time toward a different destiny.
Works Cited
Bostrom, Nick. Superintelligence. Translated by Françoise Parot, Dunod, 2017.
Bostrom, Nick. “What Happens When Our Computers Get Smarter Than We Are?” TED. April 2015. Lecture.
Cerf, Vinton G. “Turing Test 2.” Communications of the ACM, vol. 61, no. 5, 2018, p. 5.
Cerf, Vinton G. “The Day the Internet Age Began.” Nature, vol. 461, no. 7268, 2009, pp. 1202–1203.
Kurzweil, Ray. The Singularity Is Near: When Humans Transcend Biology. Duckworth, 2016.
Turing, Alan. “Computing Machinery and Intelligence.” The Mind’s I: Fantasies and Reflections on Self and Soul, edited by Douglas R. Hofstadter and Daniel C. Dennett, Basic Books, 1981, pp. 53–68.
Transcendent Man: The Life and Ideas of Ray Kurzweil. Directed by Barry Ptolemy, Ptolemaic Productions, 2010.
Zaraska, Marta. “Lab-grown beef taste test: ‘Almost’ like a burger.” Washington Post, 5 Aug. 2013. General OneFile, http://link.galegroup.com/apps/doc/A338810225/ITOF?u=nm_a_albtechvi&sid=ITOF&xid=7e2e25bf. Accessed 22 July 2018.
Adam Cordova writes and paints under purple mountains in Albuquerque, NM.