Brett Zarmsky
Published in Voices · Mar 9, 2018


Artificial Intelligence — the Future of Warfare

Artificial intelligence (AI) in the military has the potential to change the world. However, it does not come without its fair share of controversy, which keeps it from easily doing so. Movies like the Terminator series have given military-grade AI a bad reputation, creating the largely baseless fear that AI would take over the planet if it ever acquired the weapons of war, whether something as small as a gun or as dangerous as a nuclear weapon.

Despite this, the US military continues to spend billions of dollars on artificial intelligence research, which only reinforces the fears mentioned above.

This fear is not truly justified, though. According to the New York Times, the pace of AI is governed by Moore's Law. This law, first articulated by Intel cofounder Gordon Moore in 1965, holds that the number of transistors that can be etched onto a small piece of silicon doubles roughly every two years. In short, the processing power available for AI doubles every two years, producing exponential growth. However, that growth has a limit. Even if transistors shrank to the size of a few atoms, there would still be a cap on how many could fit on a chip while keeping it efficient and reasonably sized. Eventually, then, chip technology will plateau, limiting the power of AI to something that humans will always be able to control.
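To see why that doubling is called exponential, here is a rough back-of-the-envelope sketch in Python. The starting point, the roughly 2,300-transistor Intel 4004 from 1971, is an illustrative figure and not one cited in this article:

```python
# Rough illustration of Moore's Law: transistor counts doubling every two years.
# Starting figure (illustrative): the Intel 4004 of 1971, with ~2,300 transistors.
start_year, start_transistors = 1971, 2_300

for year in range(start_year, 2019, 8):       # sample every 8 years
    doublings = (year - start_year) / 2       # one doubling every two years
    transistors = start_transistors * 2 ** doublings
    print(f"{year}: ~{transistors:,.0f} transistors per chip")
```

Run over four decades, this simple doubling rule takes a few thousand transistors to a few billion, which is roughly what real chips reached, and it also shows why physical limits on transistor size eventually matter.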

AI can solve many of the issues the military faces today, and the opportunity to use it in warfare should not be squandered over fears of the fabled "singularity," the theoretical event in which AI becomes sentient and rebels against its human masters.

There are many modern examples of the applications AI has in the military. Take drones, or unmanned aerial vehicles (UAVs). These vehicles were vital in the wars in Iraq and Afghanistan. Although they were not used as autonomous weapons, they did serve as autonomous surveillance systems. According to the Pentagon's latest 25-year roadmap, in 2011 alone the US Air Force gathered over 325,000 hours, roughly 37 years' worth, of drone video.

Northrop Grumman’s RQ-4 Global Hawk UAV, a surveillance drone commonly used in the US invasions of Iraq and Afghanistan

However, there is still room for improvement. The military does not have the manpower or resources to review all of the video these drones collect, so it hopes to develop, in the near future, AI that can review this footage far faster than any human analyst could.

The form of AI that the military most wants to develop for the future is autonomous weaponry, which the United States Department of Defense defines as "a weapon system(s) that, once activated, can select and engage targets without further intervention by a human operator."

MAARS (Modular Advanced Armed Robotic System), an autonomous weapon currently being developed by QinetiQ

In essence, this means the weapon is completely self-sufficient; it does not need a person to direct or operate it. That autonomy is a major benefit for the military: unlike human soldiers, autonomous weapons systems can operate at peak efficiency for long periods without needing to eat, sleep, or drink, and they do not need to be paid or outfitted with expensive gear and uniforms.

Aside from the practical benefits, there is also a moral one: if an autonomous weapon is destroyed by the enemy, it is far less tragic than a soldier being killed in combat.

Autonomous weaponry does not necessarily have to fire a projectile or drop bombs to be effective. Cybersecurity and cyber warfare are growing concerns in the digital age, when the Internet is used daily by billions of people around the world, and AI can alleviate some of these concerns. In a recent competition hosted by DARPA, the government agency behind much of this AI research, seven autonomous systems waged cyber warfare on one another, hacking each other while finding and patching flaws in their own software. Such machines could revolutionize cyber warfare, freeing up time and resources for the US military and even police forces, which currently rely on human hackers for cyber warfare and security.

DARPA’s Cyber Grand Challenge from August 2016 in Las Vegas
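It is worth pausing on what "patching up their own flaws" means in practice. The real competition systems analyzed compiled programs with far more sophisticated techniques, but the basic find-and-fix loop can be sketched in miniature. Everything below, the pattern treated as a "flaw" and the toy program being scanned, is purely illustrative:

```python
import re

# Toy sketch of an automated find-and-fix loop: scan source code for a
# known-dangerous pattern and rewrite it into a safer form.
UNSAFE_CALL = re.compile(r"\beval\(")   # treat eval() on untrusted input as the "flaw"

def find_flaws(source: str) -> list[int]:
    """Return the line numbers that contain the unsafe pattern."""
    return [i for i, line in enumerate(source.splitlines(), 1)
            if UNSAFE_CALL.search(line)]

def patch(source: str) -> str:
    """Replace the unsafe call with a safer literal parser."""
    return UNSAFE_CALL.sub("ast.literal_eval(", source)

program = "import ast\nvalue = eval(user_input)\n"
print("flaws on lines:", find_flaws(program))
print(patch(program))
```

The point is not the specific pattern but the loop itself: detect a weakness, apply a fix, and repeat without a human in between, at a speed no human team could match.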

Some argue that using AI in warfare is unethical. According to NATO, "A primary concern is that allowing a machine to 'decide' to kill a human being undermines the value of human life." Yet even when AI decides whether or not to attack the enemy, it still has to work through many algorithms and calculations to determine whether doing so is the best course of action. And using these advanced weapons of war does not undermine the value of human life; in fact, it reinforces it. By putting machines on the battlefield, militaries put fewer humans there, and with fewer humans come fewer casualties, and therefore less heartbreak, grief, and sadness.

Artificial intelligence has many useful applications in the military today and will gain more in the coming years. However, this can only happen if people support its use. People must set aside their fears of a science-fiction scenario and focus on the real and positive aspects of AI: it is quicker, more efficient, and more cost-effective than humans, and it can make dangerous or difficult military tasks less so. The expansion of artificial intelligence in the military could revolutionize modern warfare, potentially as much as the nuclear weapon or even the gun did. But that will only happen with public support, time, and effort.
