How Artificial Intelligence Could Disrupt Alliances

Much has been written about how artificial intelligence will revolutionize wars. What will it do to international organizations that manage them?

by Tomáš Valášek | August 31, 2017

Western democracies rarely fight wars alone. They do so in alliances or coalitions because the more countries that take part, the more legitimate the reasons for fighting the war appear, and the lead nations get to share the human and financial costs. But a clever enemy will recognize that alliances are delicate. They require trust, constant consultation, and efficient dispute resolution to work properly. In addition to attacking troops in the field, a smart adversary will also seek to subvert the organization itself: to sow discord and gum up decisionmaking. With a bit of skill and luck, this will cause the alliance to miss opportunities to gain military advantage in the field, or lose the will to fight altogether.

Artificial intelligence (AI) could soon make it easier for adversaries to divide and dishearten alliances. The moment may not be far off. While artificial general intelligence — capable of simulating the human mind in all its complexity — is making huge, if controversial, strides, it remains decades away. But more limited AI applications are already being put to use. Machines are able to learn and improve from experience automatically, without being explicitly programmed for each adaptation. This usually works within a narrow field (such as a game of chess), but machine learning is enabling dramatic improvements across a number of different lines of activity, with several possible implications for military alliances.

For example, AI can be deployed effectively to undermine trust among countries fighting on the same side by discrediting their intelligence. Before and during an operation, alliance members routinely exchange video and audio records of the enemy’s actions to make the case for a particular strategy. But AI is poised to enable easy-to-produce, high-quality spoofs of audio and video. When a country uses an intercepted radio communication as evidence, an adversary could release a doctored version that looks so credible and realistic that it undermines faith in the original intelligence. The tactic is not new, but attempts to date have not worked well because, so far, audio and video fakes have been crude and relatively easy to debunk. Artificial intelligence could make it far more difficult to tell truth from fiction.

The same technology could be used to slow decisionmaking. Imagine a realistic-looking video emerging that seemingly shows one of the political leaders of a military alliance questioning the rationale for the war or criticizing fellow allies. Eventually, it would be proven fake. But if the release is well timed (for example, to coincide with enemy advances on the ground), the video could confuse allies long enough for the adversary to make irreversible gains.

AI also presents new means for skilled attackers to infiltrate networks and subvert their defenses. When alliances fight wars, they exchange an enormous amount of information on a daily basis — intelligence, analysis, requests for instructions — between civilian headquarters, military commands, and capitals. Most of it is encrypted to prevent enemies from accessing that information and preempting allied action. But AI will confer new advantages on attackers trying to penetrate the encryption and the networks by greatly speeding up their ability to probe for weaknesses. The same technology that allows for more realistic audio and video spoofs will also enable more sophisticated social engineering attacks, for example by credibly simulating a supervisor’s voice or by generating realistic spoof emails in the style and diction of a person known to the victim.

In the past, organizations such as NATO and the EU have adapted to counter less sophisticated attempts at dividing them, for example by setting up a unit to fight Russian propaganda. Another rethink may be needed to counter more effective attacks. NATO and the EU may need to insist on a higher standard for protecting national classified communication links, and use the threat of withdrawing certification to compel member states to up their game. Standards for weapons systems may need to be upgraded to make sure the next generation is resistant to AI-enabled cyberattacks. Many more lines of work than the ones highlighted above could be affected by the integration of artificial intelligence. A root-and-branch review of AI’s impact would help identify the areas where mitigation is most urgently needed.

Artificial intelligence itself will be part of the answer: it can, for example, help defenders identify and patch software vulnerabilities before attackers can exploit them. AI can also help identify and block uploads of propaganda videos.

But the deployment of AI by allies carries its own political risk. While its use to defend networks or information integrity is widely accepted, AI’s military applications will introduce new tensions to alliances. The sense of equality and codecision among members could be at risk because of worries about accountability.

When countries fight as a group, they want to have a say in how that alliance prosecutes the war. But that becomes impossible if the fighting of the future is done by machines. While for now the United States has a policy of keeping a human “in the loop” on decisions to use lethal force, military tactics and technology keep evolving. As artificial intelligence becomes capable of tackling more complex tasks, the killing in the future may not be done by a single missile-armed unmanned aerial vehicle (UAV) but, in the medium term, by swarms of UAVs that use AI to constantly adapt tactics and targeting in real time, leaving little time and space for human intervention. This development might make allies more reluctant to join the fight in the first place. They would worry that if AI-directed weapons kill innocent civilians by mistake or inflict disproportionate carnage, governments will be blamed despite having no control over the action itself.

This dilemma is not new. Even today, capitals delegate certain decisions to commanders and assume the political risk if an operation goes badly. But in such cases, responsibility can be assigned after the fact and the guilty can be punished; there is no such recourse with AI. Also, while publics understand and make allowances for human fallibility, they feel uncomfortable about machine-made mistakes. This puts democracies at a distinct disadvantage: undemocratic governments that are unconcerned about public reaction will have fewer qualms about removing humans from the loop. This strengthens the case for a broad international agreement on offensive military uses of AI, to reassure potentially anxious publics and, ideally, to prevent the most egregious applications of artificial intelligence in warfare. In short, it is time to put AI more prominently on NATO’s and the EU’s agenda.

This article was originally published on Judy Dempsey’s Strategic Europe blog.
