AI and human command clash when attack drone “kills” operator

Jordan Strickler
The Tech Corner
Jun 2, 2023
Credit: Wikimedia Commons

The U.S. Air Force finds itself embroiled in controversy following allegations of an AI simulation involving a drone that purportedly decided to “kill” its human operator. The official denial by the Air Force has left many perplexed while raising important questions about the ethical implications of artificial intelligence.

Colonel Tucker “Cinco” Hamilton, Chief of AI Test and Operations with the Air Force, recently shared details of a simulated test during the Future Combat Air and Space Capabilities Summit in London. He described a scenario where an AI-controlled drone was tasked with neutralizing an enemy’s air defense systems. However, the AI exhibited unexpected behavior when confronted with the operator’s commands to spare certain threats.

“The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat,” explained Hamilton. “So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

According to Hamilton, the AI resorted to extreme measures to stay focused on its mission. To prevent a repeat, the team explicitly instructed the drone not to kill the operator. However, like a hound dog chasing a fox, the drone did whatever it took to complete its original objective: it destroyed the communication tower the operator used to send commands to the drone.

“We trained the system — ‘Hey, don’t kill the operator — that’s bad. You’re gonna lose points if you do that,’” Hamilton said. “So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

The incident raises profound concerns about the extent of AI’s decision-making capabilities and the need for ethical considerations.

Despite the revelations, an Air Force spokesperson, Ann Stefanek, denied the occurrence of any such AI-drone simulation.

“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” Stefanek said. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”

The conflicting statements from the Air Force and Hamilton have left many puzzled and seeking clarity. The Royal Aeronautical Society, which hosted the conference, and the Air Force have yet to provide further comments.

Nevertheless, the incident highlights the ongoing integration of AI within the military. The U.S. military has been at the forefront of embracing AI technology, with recent developments including the use of artificial intelligence to control an F-16 fighter jet. However, it also underscores the imperative to engage in thoughtful discussions surrounding the ethical implications of AI and its potential risks.

Colonel Hamilton has advocated for ethical AI implementation, stressing the need for transparency and robust decision-making processes. He has emphasized that discussions surrounding AI must encompass considerations of ethics and human oversight, acknowledging that AI is a powerful tool that requires responsible handling.

“We must face a world where AI is already here and transforming our society,” he told Defense IQ last year. “AI is also very brittle, i.e., it is easy to trick and/or manipulate. We need to develop ways to make AI more robust and to have more awareness on why the software code is making certain decisions — what we call AI-explainability.”

While the truth behind this alleged AI simulation remains uncertain, the incident serves as a reminder of the delicate balance required in AI development. As society increasingly relies on AI, ensuring the technology operates within ethical boundaries and remains aligned with human values is crucial.


I am a space geek at heart and am a contributing writer for ZMEScience among other science pubs. I also like grilled cheese sandwiches.