Lethal Autonomous Weapons & Info-Wars: A Scientist’s Warning

Synced | Published in SyncedReview | 7 min read | Jul 6, 2017

Imagine being hunted down by a swarm of flying terror-bots, or watching helplessly as your misidentified battalion is cut down by friendly machine-gun fire. Such robo-apocalypse scenarios involving autonomous weapons and artificial intelligence gone wrong are no longer science fiction — and they are grave concerns for Dr. Stuart Russell. A renowned UC Berkeley professor of 30 years, Russell is a co-author of the seminal textbook Artificial Intelligence: A Modern Approach.

In 2015, Russell addressed the United Nations’ Convention on Certain Conventional Weapons (CCW), advising policy makers on a ban on lethal autonomous weapons systems (LAWS). He has taken a leading role among AI scientists in promoting understanding of AI’s potential in weapons of mass destruction (WMDs). To this end, Russell initiated an open letter from AI and robotics researchers against LAWS, garnering signatures from 3,105 researchers and 17,701 others, including well-known names such as Elon Musk and Stephen Hawking.

This spring, Synced sat down with Russell at the AI for Good Global Summit in Geneva.

Polite and soft-spoken, Russell nonetheless expresses a strong sense of urgency regarding the current state of AI applications and their potential to harm rather than help humans. While colleagues in nuclear physics, biology, and medicine have long had to grapple with moral issues, computer scientists thus far have not.

Autonomous Weapons: Humans Out of the Loop

One might ask: Wouldn’t robots terminating other robots be morally preferable to pitting humans against humans in mortal combat? But the question is not as simple as it seems.

Autonomous weapons are systems that “locate, select, and eliminate human targets without human intervention,” explains Russell. The idea of taking humans “out of the loop” presents two issues: 1) how can an autonomous agent comply with International Humanitarian Law [1] if there is no “meaningful human control” involved? 2) in case of error, who would be there to correct a system whose window of operation is measured in milliseconds?

According to the Autonomous Weapons Operational Risk report published by the Center for a New American Security in 2016, “predicting the [weapon] system’s behaviour, particularly in complex and unstructured real-world environments, can be challenging.” CNAS also notes the deadly effects of engaging with “inappropriate targets,” which can result in “fratricide, civilian casualties, or unintended escalation in a crisis.” Once they start, unsupervised autonomous robots programmed for long-range killing can be very difficult to stop.

Automatic, automated, autonomous, and intelligent agents have very different capabilities; the more self-reliant a system is, the more complex it becomes and the harder it is to correct in case of malfunction.

Looking beyond battlefields and to the skies above urban centres, Russell says “humans are defenceless, even against micro-UAVs, and mass-produced micro-UAVs are scalable WMDs. They cost US$20-$50 per unit, and one truckload is enough to wipe out a medium-sized city.”

Currently most of the world’s largest arms-producing countries have government labs dedicated to developing autonomous weapons. The US alone spent $149 million on autonomy research in 2015, with the Defense Advanced Research Projects Agency (DARPA) funding close to 30 research projects. There is also significant support from arms manufacturers such as Boeing, General Dynamics, and Lockheed Martin.

DARPA is credited with funding many of the advancements in basic AI research over the past 60 years, pouring money into universities such as Stanford, MIT, CMU, and Caltech. Even ostensibly benign research projects have often received DARPA funding. One recent DARPA-funded project to reach the public spotlight is BigDog from Boston Dynamics, which founder Marc Raibert built on research from the Leg Laboratory at CMU and MIT.

While Russell says Boston Dynamics is an “impressive company,” he stresses that it has “not published a single paper on its research results…there’s nothing in published literature and I don’t think the patents are available.” This runs counter to the AI research community’s norms of open publication.

DARPA’s Legged Squad Support System (LS3) robot MASTIFF, similar to Boston Dynamics’ BigDog.

Asked if he feels conflicted by indirectly receiving military funding, Russell says, “I don’t have a problem with defence, I have a problem with attack … there are plenty of uses of AI in military circumstances that are reasonable, protective, defensive, that bring about more situational awareness, which is quite different from autonomous weapons.”

Autonomous weapon technology is an entire ecosystem, comprising industry (robotics, aerospace, automotive, ICT, and surveillance), academia, and government research labs. Top American universities tactfully avoid working on weapons projects. The Stockholm International Peace Research Institute in a recent report on autonomous weapons notes that “Some universities have internal rules that specifically limit their ability to participate in weapon development. MIT, for instance, allows researchers to receive military funding only for basic and applied research.”

For Russell, simply avoiding weapons research is too passive. Basic AI technologies like computer vision, natural language processing, machine learning, human-machine interaction, and collaborative intelligence (SWARM) are also used in weaponry. “To pretend science is morally neutral, and that you can develop anything and have clean hands because it is someone else using it to kill people is extremely naive, and scientists should know that.”

Social Concerns: Computer Scientists in the Loop

“The computer scientist community has not been interested in policy and ethical questions. I think biology researchers are decades ahead of us, partly because of their direct interface with medicine,” says Russell. “Medicine has been thinking about these issues for thousands of years. I fully accept there are policy issues I don’t understand.”

One problem with AI technology is its “irreversibility.” If a terrorist organization or authoritarian regime hacks, spoofs, or behaviourally manipulates autonomous systems, the results could be disastrous. Russell uses the analogy of nuclear energy: “If we don’t do things properly, as the technology becomes more powerful, we’ll make very serious mistakes as we did with nuclear energy. We are lucky not to have had a nuclear war. Sometimes we hear the complacency — ‘the nuclear thing turned out pretty well’. I think it’s sort of like putting a blindfold on and crossing the freeway without getting killed, and then thinking you did something smart rather than just having a series of lucky escapes.”

Scientists need to be in the loop of these discussions; after all, they are the ones who understand the technology better than anyone else. After physicist Leó Szilárd confirmed in 1939 that a nuclear chain reaction was possible, he wrote:

“We turned the switch, we saw the flashes, we watched them for about ten minutes — and then we switched everything off and went home. That night I knew the world was headed for sorrow…”

Dr. Norman Hilberry (left) and Dr. Leó Szilárd at the site where the world’s first nuclear reactor was built during World War II. Image via Wikimedia Commons.

Russell sees a parallel with nuclear weapons as we hit the same threshold with AI. “When Frederick Soddy, the Nobel Prize winner who discovered isotopes, first warned in 1915 that one day we will be able to create atomic bombs with a destructive power that will be unbelievable, he was not taken very seriously. People thought it was just science fiction. And you see the same denial in the AI community now. The AI community spent 60 years telling skeptics that human level intelligence was possible. Now as it starts to look like it might be possible, a lot of them are saying it’s never going to happen, there’s no risk.”

“There are more than 20 different kinds of arguments as to why we shouldn’t be paying attention to the risks of AI, and as far as I can tell they’re all spurious.”

21st Century Info-Wars: The Fake News Loop

Another of Russell’s concerns is ‘info-wars’. While not as dramatic as marauding robots, targeted marketing and weaponized political propaganda can also have a serious social impact. For example, a targeted marketing bot might peddle alcohol to people whose social media accounts suggest they may be depressed. Meanwhile, political propaganda and fake news can be used to misinform or incite populations.

Unlike lethal autonomous weapons, issues involving information flow and privacy protection are often only identified in hindsight, and preventing them requires broad cross-sector policy making.

“Information warfare raises some difficult questions,” says Russell. “People have the right to protect themselves against mental invasions. But who will decide what information is true or false? We need authentication technology for this: when a document attributes a statement to someone, we should be able to trace when and where they said it.”

“Today generating false attribution is a perfectly feasible technology. I was just reading an article about people expressing concerns about AI, and the quotation that was attributed to me, I never said.”
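The authentication Russell describes is technically straightforward in principle, even if the policy questions are not. As a rough illustration (not a scheme Russell proposes), a publisher could sign each attributed quotation together with the speaker’s name and a timestamp, so that any later copy can be checked against what was originally recorded. The sketch below assumes Python’s cryptography package; the helper names sign_quote and verify_quote are hypothetical.

```python
# Minimal sketch of signed quote provenance (illustrative only).
# Assumes the `cryptography` package; sign_quote/verify_quote are
# hypothetical helpers, not part of any real attribution standard.
import json
from datetime import datetime, timezone

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_quote(private_key: Ed25519PrivateKey, speaker: str, text: str) -> dict:
    """Bundle a quotation with who said it, when it was recorded, and a signature."""
    record = {
        "speaker": speaker,
        "text": text,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    record["signature"] = private_key.sign(payload).hex()
    return record


def verify_quote(public_key: Ed25519PublicKey, record: dict) -> bool:
    """Return True only if the quotation matches what the publisher signed."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode("utf-8")
    try:
        public_key.verify(bytes.fromhex(record["signature"]), payload)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    publisher_key = Ed25519PrivateKey.generate()
    quote = sign_quote(publisher_key, "Example Speaker", "Quoted statement.")
    assert verify_quote(publisher_key.public_key(), quote)

    # A tampered copy (a fabricated attribution) fails verification.
    forged = dict(quote, text="Something the speaker never said.")
    assert not verify_quote(publisher_key.public_key(), forged)
```

A real provenance system would of course also need trusted key distribution and timestamping, which is exactly where the “who decides” questions Russell raises come back in.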

The media plays a role in exacerbating the situation. “I don’t think the media is particularly helpful because the stories they choose to print tend to be the most extreme. If your opinion is not extreme enough, they will convert it into an extreme opinion for you. When I read about AI, 80% of the time it’s just flat out wrong information.”

As we concluded our interview, we reassured Dr. Russell regarding our own professional ethics. In today’s digital media landscape, it takes a conscious effort not to exaggerate stories or write clickbait headlines.

Russell finished with a warning: “If people try to cover up the risks associated with AI, or speak off the top of their heads about AI, we can’t trust them.”

[1] International Humanitarian Law requires combatants to: 1) discriminate between combatants and non-combatants; 2) assess the military necessity of an attack; 3) judge the proportionality of collateral damage against the value of the military objective.

Journalist: Meghan Han | Editor: Michael Sarazen

AI Technology & Industry Review — syncedreview.com | Newsletter: http://bit.ly/2IYL6Y2 | Share My Research http://bit.ly/2TrUPMI | Twitter: @Synced_Global