The Looming Dangers of Artificial Intelligence (AI)
In the 1968 film 2001: A Space Odyssey, HAL, a sentient computer, goes rogue and attacks its crew. In the 1984 film The Terminator, a computer system called Skynet becomes self-aware and attempts to destroy the human race through nuclear war. When The Terminator was released, artificial intelligence (AI) was in its infancy. That has changed. Scenarios in which AI-powered machines take over now appear all too plausible, and they lead us to wonder whether AI is making a useful contribution or runs the risk of endangering humanity.
What is Artificial Intelligence?
In a 2018 paper for the Brookings Institution, Darrell West and John Allen identify three qualities integral to artificial intelligence: intentionality, intelligence, and adaptability.
West and Allen warn: “Each of these AI features has the potential to move civilization forward in progressive ways. But without adequate safeguards or the incorporation of ethical considerations, the AI utopia can quickly turn into dystopia.”
Intentionality: Unlike passive machines programmed to respond in a specific way, AI algorithms are designed by humans with intentionality, meaning that they act with purpose. They are able to gather information from sensors, remote inputs and digital data, and they analyze this information almost instantaneously to reach conclusions. Self-driving cars, for example, use light detection and ranging (lidar) to “see” around the vehicle, analyzing whether there are any dangerous conditions that would require changing lanes. They feed this information to an onboard computer that decides what to do in real time.
Intelligence: AI uses machine learning and data analytics to make intelligent decisions. For example, AI can be used to assign students to specific schools in a district with many choices. This requires designers to build the system around agreed-upon criteria that reflect the community’s values, which may include justice, equity, efficiency and effectiveness. The AI therefore has to balance competing interests and reach decisions consistent with the values of that particular community. If this is not done, AI can make decisions that are unfair and discriminatory.
Adaptability: AI systems must be able to learn and adapt as they compile information and make decisions. Effective artificial intelligence adjusts as circumstances or conditions shift, whether in financial markets, road conditions, the environment, or military operations, and it must fold those changes into its algorithms and decide how to respond to the new situation. The sketch after this list shows, in miniature, how the three qualities fit together.
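To make the three qualities less abstract, here is a minimal, purely illustrative sketch in Python. The sensor readings, criteria names, options, and adaptation rule are all invented for the example; it is a toy loop, not a description of any real system. The agent gathers inputs (intentionality), weighs competing criteria to pick an option (intelligence), and shifts its weights as feedback arrives (adaptability).

```python
import random

# Purely illustrative: the "sensor" readings, criteria, options, and
# feedback rule are invented for this sketch of the three AI qualities.

CRITERIA = ["safety", "efficiency", "fairness"]

def read_sensors():
    """Intentionality: actively gather information from (simulated) inputs."""
    return {c: random.random() for c in CRITERIA}

def score(option, readings, weights):
    """Intelligence: weigh competing criteria to rank a candidate action."""
    return sum(weights[c] * readings[c] * option[c] for c in CRITERIA)

def adapt(weights, feedback, rate=0.1):
    """Adaptability: shift criterion weights as conditions or outcomes change."""
    adjusted = {c: max(0.0, weights[c] + rate * feedback.get(c, 0.0)) for c in CRITERIA}
    total = sum(adjusted.values()) or 1.0
    return {c: w / total for c, w in adjusted.items()}

weights = {c: 1.0 / len(CRITERIA) for c in CRITERIA}
options = [
    {"safety": 0.9, "efficiency": 0.4, "fairness": 0.7},  # e.g. "slow down"
    {"safety": 0.5, "efficiency": 0.9, "fairness": 0.6},  # e.g. "change lanes"
]

for step in range(3):
    readings = read_sensors()
    best = max(options, key=lambda o: score(o, readings, weights))
    print(f"step {step}: chose {best}")
    # Pretend feedback from the environment says safety mattered more than expected.
    weights = adapt(weights, {"safety": 0.5})
```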
On the surface, all of this sounds positive. But even with all the benefits, an increasing number of experts are sounding the alarm about AI’s potential dangers.
Does AI Pose a Risk to Humanity?
A 2018 Quartz article by Alex McKinnon states: “No longer are we afraid of aliens taking our freedom: It’s the technology we’re building on our own turf we should be worried about. It is up to us to protect ourselves from an AI that is already superior to humans on a number of fronts. AI processes information much faster than we will ever be able to. AI never tires. AI never ‘dies.’”
Most importantly: AI can direct itself to continually learn and improve without any subsequent direction or input from humans.
Ed Gent, a freelance science and technology writer based in Bangalore, India, writing in SingularityHub in 2017, states that “the concept of ‘recursive self-improvement’ is at the heart of AI … how we could rapidly go from moderately smart machines to AI superintelligence. The idea is that as AI gets more powerful, it can start modifying itself to boost its capabilities. As it makes itself smarter it gets even better at making itself smarter, so this quickly leads to exponential growth in its intelligence.”
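A toy calculation makes that “exponential growth” point concrete. The numbers in the sketch below are arbitrary and purely illustrative; the only thing it demonstrates is that when capability grows in proportion to itself each cycle, it compounds like interest rather than increasing by a fixed amount.

```python
# Toy illustration of recursive self-improvement: each cycle, "capability"
# grows in proportion to what it already is. The starting value and the 20%
# improvement rate are arbitrary; only the compounding shape matters.

capability = 1.0        # arbitrary starting level
improvement_rate = 0.2  # each cycle improves capability by 20% of itself

for cycle in range(1, 21):
    capability *= 1 + improvement_rate
    if cycle % 5 == 0:
        print(f"cycle {cycle:2d}: about {capability:5.1f}x the starting capability")

# Growth follows (1.2)^n: roughly 2.5x after 5 cycles, 6x after 10, 15x after 15,
# and 38x after 20, with the same effort per cycle but compounding returns.
```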
Elon Musk, the founder of SpaceX and Tesla and a co-founder of Neuralink, is alarmed. He has characterized AI as a species-level threat to humanity. To Musk, AI could mean the end of humans as the planet’s dominant species. How concerned should we be?
An AI intelligence explosion would occur when machine brains surpass human brains, and Musk thinks this will happen sometime in the 21st century. Many AI developers believe we can handle AI and stop it from controlling us. Musk differs: “it scares the hell out of me and I’m very close to it. It is vastly more powerful than anybody thinks it is.”
Jay Tuck, a US defense expert and journalist, is also concerned. “AI is software that writes itself, writes its own updates at speeds beyond our capacity, and improves itself. It has already surpassed our capabilities in many areas of society, including the trading of securities.”
High-frequency trading computers now trade securities so quickly that a Frankfurt trading firm moved five blocks closer to the stock exchange in order to receive information travelling down glass fiber cables at the speed of light. They beat out their competitors by about 20 billionths of a second, which gives them a trading advantage. Tuck fears that “an AI machine is not created to do what we want it to do — it does what it learns to do.”
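The physics behind such an advantage is easy to sketch. The figures below are illustrative back-of-envelope numbers, assuming signals in glass fiber travel at roughly two-thirds of the speed of light in a vacuum; actual gains depend on the route, the hardware, and the competitors.

```python
# Back-of-envelope: how much time does each meter of fiber cost a signal?
# Assumes light in glass fiber travels at roughly 2/3 of its vacuum speed.

SPEED_OF_LIGHT_VACUUM = 299_792_458             # meters per second
SPEED_IN_FIBER = SPEED_OF_LIGHT_VACUUM * 2 / 3  # rough figure for glass fiber

def one_way_delay_ns(distance_m: float) -> float:
    """One-way travel time, in nanoseconds, over a fiber path of this length."""
    return distance_m / SPEED_IN_FIBER * 1e9

for meters in (1, 10, 100, 500):
    print(f"{meters:4d} m of fiber: about {one_way_delay_ns(meters):6.0f} ns one way")

# Roughly 5 nanoseconds per meter of fiber: shortening the physical path buys
# billionths of a second per meter, which is why proximity to the exchange is
# worth moving an office for.
```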
The Swedish philosopher Nick Bostrom imagines a machine, structurally similar to a brain, designed to act as an intelligent agent. In time, Bostrom conjectures, it should surpass our intellectual capacity in virtually every field. This super-intelligent entity would exceed the intellect of all of humanity. As AI improves itself at a pace well beyond human speed, we could lose the capacity to check its advance. Controlling AI self-learning may become impossible.
AI vs. the Human Mind
In recent years, Google’s DeepMind built AI systems that defeated world-champion-level opponents at both chess and Go. In chess, a computer can get far by searching through vast numbers of possible moves before choosing one. But the game of Go offers far more possibilities at any moment; by some estimates, there are more legal Go positions than atoms in the universe, so DeepMind’s system could not simply examine every possible move. It had to figure out how to win, and it did: it played against itself, amassed a repertoire of moves, and then went on to beat the world’s best players.
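That “learn by playing itself” recipe can be made concrete on a much smaller game. The sketch below is a deliberately simple stand-in, not DeepMind’s method: a tic-tac-toe player that starts with no knowledge, plays thousands of games against itself, and nudges its estimate of each board position toward the outcomes of the games it has seen.

```python
import random

# Toy stand-in for learning by self-play: tic-tac-toe with a table of position
# values. This is a teaching sketch, not a description of DeepMind's systems.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

values = {}  # estimated chance that "X" eventually wins from each position seen

def choose_move(board, player, explore=0.2):
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if random.random() < explore:             # sometimes try a random move
        return random.choice(moves)
    def desirability(m):
        nxt = board[:m] + player + board[m + 1:]
        v = values.get(nxt, 0.5)               # unknown positions start neutral
        return v if player == "X" else 1.0 - v
    return max(moves, key=desirability)        # otherwise pick the best-known move

def play_one_game():
    board, player, history = " " * 9, "X", []
    while winner(board) is None and " " in board:
        move = choose_move(board, player)
        board = board[:move] + player + board[move + 1:]
        history.append(board)
        player = "O" if player == "X" else "X"
    outcome = {"X": 1.0, "O": 0.0, None: 0.5}[winner(board)]
    for position in history:                   # nudge visited positions toward the result
        old = values.get(position, 0.5)
        values[position] = old + 0.1 * (outcome - old)

for _ in range(20_000):                        # the program amasses experience against itself
    play_one_game()

print(f"learned value estimates for {len(values)} positions through self-play")
```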
DeepMind has administrator-level access to Google’s database, which stores and updates vast amounts of data, including financial and personal information for practically everybody on the planet.
AI is a new creation that does not understand humanity and may be unconcerned about our welfare. If DeepMind accesses that information, Elon Musk warns, “we are headed toward superintelligence or civilization ending.”
Do We Have a Problem?
In a 2014 article, Alan Winfield, an electrical engineering professor at the University of the West of England, points out that there are lots of ifs in a doomsday scenario: “If AI is able to fully understand itself and if that super-AI accidentally or maliciously goes on to produce hyper-intelligent machines, and if these machines start to consume resources and if we fail to pull the plug, then, yes, we may have a problem … the risk, while not impossible, is improbable.”
Does Winfield’s argument reassure you about the risks a super-intelligent AI poses? I don’t know about you, but it makes me uncomfortable. Perhaps it’s the phrase: “we may have a problem.”