AI and World War 3
How autonomous weapons could both transform future wars and start them
It is said that there have been two great revolutions in warfare: the first was gunpowder, discovered in the 9th century by Chinese Taoists searching for an elixir of immortality. The second was the hydrogen bomb, first tested in 1952. The third will be artificial intelligence capable of seeking out and killing specific people, subduing populations, and changing targets and tactics far faster than any human could. The prospect of weapons this efficient at mounting quick, devastating attacks has set off a race among nations to be the first to develop and deploy the technology.
This international competition over AI could become one of the causes of a third world war, and it shows no signs of slowing down. One problem with halting the development of these new weapons is that even if one country decides they are unethical, other countries won't necessarily reach the same conclusion. AI, like the microchip, will transform everything it touches, and it can hand the country that fields it a dominant position in world politics and affairs. That is why, despite an open letter signed by thousands of researchers, executives, and scientists (including Elon Musk and Stephen Hawking) calling for a ban on autonomous weapons, the major military powers have rejected the plea.
“If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.” -Open letter on autonomous weapons, signed during Obama’s presidency in 2015.
Obama’s presidency also saw a 90% decrease in troops deployed to war zones alongside a tenfold increase in drone strikes. While a share of today’s drone strikes kill civilians and unidentified individuals, AI could help identify and target combatants more accurately, sparing innocent people. Pilots could also command their own fleets of drones that carry weapons, keep human crews out of harm’s way, and probe enemy air defenses.
In simulation testing at Air Force research labs, AI-piloted drones consistently beat human pilots. One battle manager said he felt the AI knew what he was thinking and anticipated his intentions; it defeated him every time.
Similarly, autonomous ships could hunt submarines and launch weapons, at roughly $20,000 a day to operate versus $700,000 a day for a manned warship.
As of now, about 93% of the world’s nuclear weapons are owned by the US and Russia. That is because, while the blueprints for building a nuclear arsenal might be easy enough to procure, nuclear weapons still require enormous resources and money to build. One of the main concerns about military AI is how accessible it is: code spreads easily and for free. How do we keep it out of the hands of black-market buyers such as terrorists?
The US and Russia won’t be the only contenders in the AI race; China has published its own roadmap, the Next Generation Artificial Intelligence Development Plan. It calls for being on par with the US by 2020, achieving major breakthroughs by 2025, and “occupying the commanding heights” of AI by 2030. While China has been known more for imitating technology than for innovating it, it has a couple of advantages on its side. Its huge population makes it easy to track people and collect data at scale for AI programs to use, and its government can compel private companies to cooperate, unlike in the US, where private firms are far ahead of the military.
The US Defense Science Board has called for accelerating military AI development, while the Department of Defense maintains a policy requiring humans to be involved in any decision to use lethal force.
In my article The Truth About Artificial Intelligence, I discuss why military funding of AI is worrisome. If the primary function of these machines is to kill, what will happen if they one day decide to turn on us? It is better for civilians to fund AI, to ensure it is created to help mankind, not simply to serve as a killing machine.
But, as always, new weapons are irresistible.
As these AI weapons continue to develop, they will make decisions more quickly than we can, analyzing larger sets of information than our brains can handle. Drones will strike with bird-like accuracy, software will generate realistic fake videos, and attacks will be met with instant reactions rather than the hesitation humans show when making decisions in battle.
Self-defense tactics such as counterattacks could be entrusted entirely to AI systems, especially aboard ships and at weapons facilities along a country’s borders.
AI, like nuclear weapons, could also serve as a deterrent against attacks, since pitting one country’s AI against another’s could have catastrophic outcomes. The question may no longer be “should we create autonomous weapons?”, since they appear all but inevitable, but rather “what measures can we put in place to make sure this technology is kept under control and used for good, not global destruction?”