Don’t Replace Fighter Pilots with Drones!
Russian President Vladimir Putin says “whoever reaches a breakthrough in developing artificial intelligence (AI) will come to dominate the world.” That’s a lot to put on AI. After all, even the evil of Hitler and all the scientists he controlled didn’t get the “world domination” thing done. So you have to wonder what Vladimir Putin means by “artificial intelligence,” especially since we don’t see much AI coming from Russia.
Putin provided some insight when he continued his comments: he predicted that future wars will be fought by drones, and that “when one party’s drones are destroyed by drones of another, it will have no other choice but to surrender.” So it sounds like President Putin thinks AI is a technology that can enable overwhelming lethality for drones in combat.
According to U.S. Air Force Lieutenant General John N.T. Shanahan, director for defense intelligence, the Pentagon is using AI to improve drone technology, but not in the way imagined by President Putin. General Shanahan’s teams have found that “a lot of times drones are flying around and there’s nothing in the scene that’s of interest,” so human operators quickly lose interest and miss something significant coming from the drone’s feed. That’s basically what happened when a Tesla operating on “Autopilot” ended in a fatal crash in Florida. Consequently, the Pentagon is counting on AI to better assist human pilots by handing off the tedious task of constant observation — recognizing objects such as trucks, buildings, and Empire attackers — to AI applications. Probably not unlike R2-D2 assisting Luke Skywalker in a Rebel Alliance X-wing fighter.
Never one to deescalate an AI conversation, Elon Musk, founder of Tesla and SpaceX, thinks AI will be the cause of World War III. While Musk has certainly made some extraordinary business achievements, his forecasting capabilities related to international relations and war seem limited. That’s not to say they should be dismissed. After all, he now operates a company called Neuralink that is working on telepathic communication between human brains and AI applications, intended to enhance human capabilities in the event humans end up battling AI-enabled machinery, as Musk fears.
Georgia Tech’s Charles Isbell suggests two features are necessary before a system deserves the name AI:
- First, it must learn over time in response to changes in its environment.
- Second, what it learns should be interesting enough that it takes humans some effort to learn.
While Isbell’s approach to AI is all about “learning” to respond to environmental conditions faster and better than humans might, it’s also about the age-old industrial concept of substituting machinery for labor. Isbell simply thinks machines can learn more effectively and efficiently than humans can.
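Isbell’s first criterion — learning over time in response to a changing environment — can be made concrete with a toy sketch. This is my own minimal illustration, not anything from Isbell: a tracker that keeps updating its internal estimate as the signal it observes drifts, so older observations fade and the system adapts.

```python
class AdaptiveTracker:
    """A toy 'learner': its estimate follows a changing environment."""

    def __init__(self, learning_rate=0.5):
        self.learning_rate = learning_rate  # how fast old observations fade
        self.estimate = 0.0

    def observe(self, value):
        # Move the estimate a fraction of the way toward each new observation,
        # so the system tracks drift instead of freezing on early data.
        self.estimate += self.learning_rate * (value - self.estimate)
        return self.estimate


tracker = AdaptiveTracker(learning_rate=0.5)

for reading in [10.0] * 20:   # environment holds steady at 10
    tracker.observe(reading)
stable = tracker.estimate     # converges near 10

for reading in [40.0] * 20:   # environment shifts to 40
    tracker.observe(reading)
shifted = tracker.estimate    # re-converges near 40
```

Of course, this only scratches Isbell’s first feature; whether what is learned is “interesting enough that it takes humans some effort to learn” is a much higher bar.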
All of these commentaries regarding AI sound concerning, but we should keep two things in mind:
- Humans are not very good at forecasting, so the likelihood of all the dire AI forecasts coming true is slim.
- Moravec’s Paradox says computers might become much smarter than humans, but combining those mental skills with the sensorimotor skills humans have acquired over hundreds of thousands of years of evolutionary development is unlikely.
So while we’re likely to stumble into the future and increase the lethality of drones with AI, we’re unlikely to make any that can out-fly a human fighter pilot. It’s easy to make computers exhibit high-level reasoning, adult-level performance on intelligence tests, or chess play, but very difficult or impossible to give them the sensorimotor skills of a one-year-old when it comes to perception and mobility. So human fighter pilots beat drones almost every time.
Because humans are not really good at forecasting the future, sometimes even a glimpse of it is better found in the writings of science fiction than in the certitude of business people like Musk or even computer scientists like Isbell. Both Arthur C. Clarke’s “2001: A Space Odyssey” and Dave Eggers’ “The Circle” provide that kind of glimpse as it relates to protecting ourselves from threatening AI-enabled machines.
In Clarke’s “2001: A Space Odyssey,” the Discovery’s computer system, HAL — an acronym for “Heuristically programmed ALgorithmic” computer — decides to kill the astronauts after discovering them discussing its disconnection. HAL was programmed to protect and continue a set of directives unknown to the astronauts and reasoned that it could not fulfill that mission if unplugged. HAL uses one of the Discovery’s Extravehicular Activity (EVA) pods, which it controls, to kill Poole while Poole is repairing the ship. However, Dave, simply out of coincidence, never does connect to a life support system controlled by HAL, so HAL never has an opportunity to kill Dave.
In Eggers’ “The Circle,” protagonist Mae’s old boyfriend, Mercer, does his best to go “off the grid” and live in isolation from the constant surveillance of the Circle and its devotees after realizing the dehumanizing effects of the Circle. When Mercer flees from the Circle users who are chasing him — recording his movements on their web devices and feeding them back to the Circle’s databases — he drives his pickup truck off a steep mountain highway and into the gorge below, choosing death rather than surrendering to the control of the Circle’s technology.
Both Clarke’s character Dave and Eggers’ Mercer escape the threats of AI-enabled machines only by completely disconnecting themselves from the machines, which is exactly the technique employed by the fail-safe systems of almost all experienced organizations. That type of complete disconnection is becoming increasingly difficult in today’s world, when everyone from friends and family to employers and businesses expects an ability to communicate via some form of digital interchange. However, separation from technology is one way humans might defend themselves from the drone military envisioned by President Putin. We should never place complete reliance on drones to defend us. We need to keep fighter pilots “off the grid”!