As AI enters the military scene, questions emerge

Gabriel Yamada
Writing for the Future: AI
4 min read · Sep 10, 2018


An MQ-9 Reaper unmanned aerial vehicle in flight. The drone carries a total of 368 cameras for surveillance and targeting, and the footage they produce is typically reviewed by military analysts to identify vehicles or people. Photo by Lt. Col. Leslie Pratt, courtesy of Wikimedia Commons

Last year, on April 26, a memo from the U.S. Department of Defense established the Algorithmic Warfare Cross-Functional Team, a group better known by its other designation: Project Maven.

Since then, this research division has been developing an artificially intelligent image recognition system to dredge through the enormous volumes of video footage collected by the U.S. Air Force’s drone fleet and highlight the most important clips for data analysts to review. In a news release from the DoD, Marine Corps Colonel Drew Cukor said of Project Maven that the team hopes “one analyst will be able to do twice as much work, potentially three times as much work, as they’re doing now.”

Artificial intelligence, or AI, is a computer technology that enables programs to learn patterns in data that they are given. These patterns can be anything from the trend of adjectives preceding nouns to the types of curves that define the jaw in a human face.
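To make that idea concrete, the short sketch below is a purely illustrative example in Python using the open-source scikit-learn library; it has nothing to do with any military system. It trains a small model on a bundled set of labeled handwritten-digit images, and the program is never told what any digit looks like: it infers the distinguishing pixel patterns from the labeled examples alone.

```python
# Illustrative only: learning patterns from labeled data with scikit-learn.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 8x8 grayscale images of the digits 0-9, with labels

# Hold out a quarter of the images to check whether the learned patterns
# generalize to data the model has never seen.
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# The model receives only pixel values and labels; the patterns that
# distinguish one digit from another are learned, not hand-coded.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

print(f"Accuracy on unseen images: {model.score(X_test, y_test):.2f}")
```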

While the technology captured public attention in the earliest days of computing, it lost that status because older AI systems lacked the data and processing power needed for genuine learning. Instead of discovering patterns by searching through data sets, they relied on rules explicitly encoded by human engineers.

With the growth of the internet, AI developers now have data sets large and diverse enough to train their systems effectively, and microchip manufacturers such as Nvidia are producing extremely powerful chips that can process many thousands of times more information than older processors could. This has driven the recent resurgence of AI and has enabled these systems to learn to perform a growing number of complicated tasks, in particular those involving image recognition.

This advancement in image recognition is what drew the attention of the Department of Defense, which has to process the video feeds of hundreds of drones in real time. With the help of an AI, the DoD hoped to develop a system that could identify objects such as people or vehicles in drone footage and forward feeds containing specific vehicles or people to human analysts.
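As a rough illustration of that triage role (a hypothetical sketch, not the DoD’s actual pipeline), the logic might look something like the following: an image-recognition model labels the objects it detects in each frame, and only frames containing the classes of interest get forwarded to a human analyst. The class names and confidence values here are made-up stand-ins for a detector’s output.

```python
# Hypothetical triage logic: forward only frames whose detections include
# objects an analyst cares about. The detections themselves would come from
# an image-recognition model; here they are hard-coded stand-ins.

CLASSES_OF_INTEREST = {"person", "vehicle"}

def should_forward(detections, threshold=0.8):
    """detections: (label, confidence) pairs produced by some detector."""
    return any(label in CLASSES_OF_INTEREST and score >= threshold
               for label, score in detections)

# Stand-in detector output for three frames of footage.
frames = [
    [("tree", 0.97), ("road", 0.91)],
    [("vehicle", 0.88), ("road", 0.95)],
    [("person", 0.83)],
]

for i, detections in enumerate(frames):
    status = "forward to analyst" if should_forward(detections) else "skip"
    print(f"frame {i}: {status}")
```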

However, Project Maven ran into a complication on June 1, when Google announced that it would not be renewing its contract with the Department of Defense. The decision followed a petition from AI developers at Google arguing that “building this technology to assist the U.S. government in military surveillance — and potentially lethal outcomes — is not acceptable.”

Computers may calculate targeting solutions for artillery or paint objects for missiles to track, but the assumption has always been that a human is calling the shots.

As AI continues to appear in military applications, it is increasingly likely that a computer might not only calculate gunnery solutions but also determine where and when a strike is needed. This raises the question of whether an AI should be making such decisions at all, and of who takes responsibility for deaths caused by AI-directed strikes.

Even aside from ethical considerations such as the use of AI to operate lethal weapons, image recognition AI can be tricked into misidentifying images that have been carefully modified to exploit weaknesses in how these systems recognize objects. Such doctored images are called adversarial examples.

Anh Nguyen describes in a research paper published to arXiv that “changing an image, originally correctly classified, in a way imperceptible to human eyes, can cause an [AI] to label the image as something else entirely.”
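To show roughly what that means in practice, the sketch below applies the fast gradient sign method, a standard technique from the adversarial-examples literature rather than anything specific to Project Maven, to a simple scikit-learn digit classifier like the one sketched earlier. Each pixel is nudged slightly in the direction that most increases the model’s error, producing images that look essentially unchanged to a person.

```python
# Illustrative fast-gradient-sign attack on a simple linear digit classifier.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

digits = load_digits()
X, y = digits.data, digits.target
model = LogisticRegression(max_iter=5000).fit(X, y)

# Gradient of the cross-entropy loss with respect to each input pixel.
probs = model.predict_proba(X)          # shape (n_samples, 10)
onehot = np.eye(10)[y]
grads = (probs - onehot) @ model.coef_  # shape (n_samples, 64)

# Nudge every pixel a small amount (pixel values here range from 0 to 16)
# in the direction that increases the loss.
epsilon = 2.0
X_adv = X + epsilon * np.sign(grads)

flipped = (model.predict(X) != model.predict(X_adv)).mean()
print(f"Predictions changed by the small perturbation: {flipped:.0%}")
```

Depending on the model and the size of the nudge, a noticeable share of the answers can change even though a person looking at the perturbed images would see no meaningful difference.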

This poses a major challenge to a core military tenet: trust. General Ray Odierno, the U.S. Army Chief of Staff in 2012, said that “our profession is built on the bedrock of trust,” but elaborated that “you have to earn it. You earn it by your actions, by your experience.”

This presents a problem for the systems that Project Maven plans to deploy. If they are susceptible to adversarial examples that can completely confuse them in ways imperceptible to humans, their judgment may be called into doubt every time they make a decision, especially if a bad call leads to military losses.

However, the U.S. Army Robotic and Autonomous Systems Strategy describes how autonomous systems can boost the situational awareness of troops in combat and lighten the cognitive stress on soldiers.

Project Maven’s AI would fit neatly into the first of these roles, as it is intended to process drone footage and ensure that data analysts can review the most important clips in real time. The same role also helps reduce the cognitive load on those analysts by ensuring that they don’t have to waste time checking unimportant footage for changes.

The long-term goals of this strategy, though, do include provisions for autonomous, lethally armed weapons. While the Google employees’ petition may not directly influence the technology in use by that point, the moral questions that military AI raises will still have to be answered as AI continues to grow in prevalence.
