Responsible AI in a national defence context

Supergovernance
Aug 16, 2019


NOTE: These are my public remarks that I made at the CIFAR-Amii-UCLA Summer Institute on AI and Society on 21 July 2019. I’ve changed as little as possible from my speaking points. They represent my perspectives only, and not those of the Department of National Defence or the Canadian Armed Forces.

[Image source: Canadian Forces Combat Camera]

What is the geopolitical strategy of a medium power in 2019? A small power? How do these states project power and interests in the contemporary age? What even defines power anymore?

Typically, these rudimentary Cold War classifications of states focused on their ability to generate and sustain hard power and to project it, either directly or through proxies. The cyber domain has started to redefine that security paradigm by making impactful force relatively inexpensive. AI, with its free tools and readily available state-owned and public data, can allow smaller powers to leverage data to strategic and operational advantage like never before.

Maybe. Even if so, it’s not going to be easy. Let’s discuss why.

Taking a step back: AI is being used or considered by most developed states with the aim of achieving some economic or geopolitical advantage in every line of business, from improving government services to pure research to public safety. As states are big data holders, the ability to make sense of that data can unlock serious potential.

Among all functions of state, militaries provide some of the richest use cases for AI because of the depth and diversity of their data sets. Personnel, materiel, environment, and more all contribute significant data sources for mining. In 2019, large military assets such as ships or planes should be seen as floating or flying data platforms, with potential secondary capabilities that extend beyond what their physical form was designed to do. Historically, militaries have not been organized data managers, and for both security and cultural reasons they have been slow to exploit analytics. Over the last five years, this has started to change.

“[AI] comes with colossal opportunities, but also threats that are difficult to predict.

Whoever becomes the leader in this sphere will become the ruler of the world.”

Vladimir Putin, 2017

Now that quote looks foreboding, doesn’t it? There does seem to be talk of an arms race in the air, though I’m not sure a race in what, exactly. AI is not a single application or weapon; it’s many techniques applied to a variety of uses and environments. Even the term itself is misapplied and widely misunderstood.

Let’s unpack this complexity, because I think the potential military applications of AI are often misunderstood. I have personally heard calls for bans on military applications of AI writ large, or on AI-enabled weapons specifically, and I’m going to start by saying that such bans are both highly unlikely and probably undesirable.

AI and the mission

So the Canadian Armed Forces do a lot, across a multifaceted mission. Beyond the classic missions of continental defence and expeditionary operations, the CAF provides critical assistance with natural disasters, such as this year’s forest fires and floods throughout the country. Beyond territorial patrols, the Royal Canadian Navy plays a diverse law enforcement role, from drug and human trafficking interdiction to monitoring territorial waters for illegal fishing. It’s a very complex business that is not widely understood. (Addendum: you should get to know what the CAF is up to. You’ll be surprised. Check out the list of operations underway here.)

The CAF is also an enormous service provider: health and dental care, policing and courts, property services, chaplaincy, driver licensing, and more. It effectively acts as all three levels of government for its members, and all of these services produce and consume data.

When it comes to AI, within DND and CAF, we are looking at applications in four use categories:

  • Perception
  • Reasoning
  • Human-machine interaction and teaming
  • Autonomous (or semi-autonomous) platforms

This is meant to show a (very) basic conceptual model of all potential classes of military AI applications. On the left you have the defence functions as defined by military doctrine; on the top, the environments; on the right, the use categories. In the next few years, most militaries can probably design experiments in most, if not all, of the 120 category permutations displayed here. There will be more should space become a fully-operational environment.
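
For concreteness, the arithmetic behind that 120 is six defence functions × five environments × four use categories. Here is a minimal sketch of the permutation space; the environment names are my placeholder assumption, since the original slide isn’t reproduced here:

```python
from itertools import product

# The six defence functions discussed below.
functions = ["Act", "Shield", "Sustain", "Generate", "Sense", "Command"]

# Placeholder names (my assumption): 120 / (6 functions * 4 categories) = 5 environments.
environments = ["land", "maritime", "air", "cyber", "joint"]

# The four use categories listed above.
categories = ["perception", "reasoning", "human-machine teaming", "autonomy"]

permutations = list(product(functions, environments, categories))
print(len(permutations))  # 6 * 5 * 4 = 120

# Adding space as a sixth fully-operational environment would grow this to 144.
print(len(functions) * (len(environments) + 1) * len(categories))  # 144
```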

Let’s look at some opportunities across the spectrum of defence functions. These are hypothetical, but all serious suggestions.

Act

  • Semi-autonomous weapon systems
  • Human-machine teaming

Shield

  • Autonomous robots for CBRN detection and disposal
  • Adaptive network defence

Sustain

  • Robotic field surgery
  • Autonomous convoy and resupply

Generate

  • Recruitment
  • Performance management and training

Sense

  • Object detection
  • Sensor network optimization

Command

  • Mission resource optimization
  • Risk assessment and option analysis

As you can see, they are diverse. Each of these brings its own nuances in law, ethics, and impact on society. Each would need careful control and consideration. But how do we do that? What is the process of consideration that we should follow?

Ethics: the fun part

[Image source: Canadian Forces Combat Camera]

I’m going to set aside debates on the rights of machines and, per the rules of this conference, I won’t talk about general intelligence or superintelligence.

There are clear legal frameworks that the CAF has to follow, but as I mentioned, which ones apply is highly dependent on the activities being undertaken. Legal regimes can include one or more of: armed conflict, human rights, privacy, administrative, and constitutional law. On top of these sit various layers of regulations (e.g. the QR&O), operational policies, standards, and layers upon layers of standing orders.

Then there are ethics, the vast area that exists between and around the rules. The Military Ethics Assessment Framework developed by Defence Research and Development Canada was formulated to apply to a variety of advanced technologies, but it applies well here. Applying these criteria is by no means straightforward, nor can it be.

  1. Compliance with DND and CAF Code of Values and Ethics
  2. Compliance with Jus ad Bellum Principles
  3. Compliance with Jus in Bello Principles
  4. Health and Safety Considerations
  5. Accountability and Liability Considerations
  6. Privacy, Confidentiality, and Security Considerations
  7. Equality Considerations
  8. Consent Considerations
  9. Humanity Considerations
  10. Reliability and Trust Considerations
  11. Effect on Society Considerations
  12. Considerations Regarding Preparedness for Adversaries

I don’t have time today to delve into all of them individually, so I’ll focus on a few that might differ from the social concerns often raised in these fora, and that I think will be discussed on day 2 of this event:

Jus ad bellum — I can address the Terminator in the room, but I’d like you to think more about WOPR than Skynet. The most important ethical consideration for AI inserted into strategic decision-making is an algorithmically driven nudge to engage in armed conflict where one otherwise wouldn’t. Ensuring safe use of this sort of system would require a bevy of specialized knowledge in cognitive science and data science so that such recommendations could be interrogated. Generals and flag officers would need specialized training to understand what they are seeing from the system, and to know what they aren’t seeing.
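
To make that interrogation concrete, here is a deliberately toy sketch (not any real system; every feature name and weight is invented) of a decision-support score that decomposes into per-feature contributions, so a decision-maker can see what drove a recommendation and, just as importantly, what was never modelled:

```python
# Toy, illustrative only: a linear "escalation risk" score whose every
# recommendation can be decomposed into per-feature contributions.
WEIGHTS = {
    "adversary_mobilization": 0.45,  # hypothetical features and weights
    "alliance_commitments": 0.30,
    "domestic_pressure": 0.15,
    "economic_exposure": 0.10,
}

def score_with_explanation(inputs: dict) -> tuple:
    """Return the overall score plus each feature's contribution to it."""
    contributions = {k: WEIGHTS[k] * inputs.get(k, 0.0) for k in WEIGHTS}
    return sum(contributions.values()), contributions

score, why = score_with_explanation({
    "adversary_mobilization": 0.8,
    "alliance_commitments": 0.6,
    "domestic_pressure": 0.9,
    "economic_exposure": 0.2,
})
print(f"risk score: {score:.2f}")
for feature, contribution in sorted(why.items(), key=lambda kv: -kv[1]):
    print(f"  {feature}: {contribution:+.2f}")

# Equally important is surfacing what the model did NOT consider, so
# commanders know what they aren't seeing.
print("not modelled: ongoing diplomacy, intelligence gaps, adversary intent")
```

Real decision-support models would be vastly more complex than this, which is exactly why the interrogation tooling and the training to use it matter so much.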

Jus in bello — that any force employed remains discriminate, proportional, and accountable to lawful commanders. We should not be divorced from the implementation or consequences of the employment of force. Which leads to…

Humanity — that warfare remains a human endeavour, and that we don’t unleash force without understanding the gravity of the action. A country’s decision to use force indicates a willingness to take a calculated risk with real people to further a political objective. Removing that risk through extensive use of robotics makes the force decision easier; should it be easier? Then again, it is our duty to ensure the safety of uniformed personnel as much as possible. I would suggest that this calculation will be a difficult one for senior decision-makers.

Reliability and trust — that models work in the often variable and unforgiving landscapes where militaries operate. Between internet-deprived or -denied environments, rugged terrain, temperature extremes, and areas of tremendous complexity such as large urban areas, the world poses brutal design challenges for developers. Can a model trained solely in a desert environment generalize to the next combat environment, which may be a jungle? Against a non-state adversary rather than a state? Can I trust that the training set has not been tampered with by an intruder?
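
On that last question, one basic safeguard is integrity verification of training data at rest. A minimal sketch, assuming a hypothetical training_data/ directory: this catches tampering after a dataset has been approved, though it does nothing about poisoning at collection time or the desert-to-jungle generalization problem, which requires evaluation against target-domain data.

```python
import hashlib
from pathlib import Path

def build_manifest(data_dir: str) -> dict:
    """Record a SHA-256 digest for every file in the training set."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(Path(data_dir).rglob("*"))
        if p.is_file()
    }

def verify(data_dir: str, manifest: dict) -> list:
    """Return recorded files that were modified or deleted since approval."""
    current = build_manifest(data_dir)
    return [f for f, digest in manifest.items() if current.get(f) != digest]

# Usage: build the manifest when the dataset is approved, store it where the
# training pipeline cannot write, and verify before every training run.
manifest = build_manifest("training_data/")  # hypothetical directory
tampered = verify("training_data/", manifest)
assert not tampered, f"possible tampering detected: {tampered}"
```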

Preparation for adversaries — the most important role of government is to keep its population safe. To do this, we must stay toe to toe with adversaries while maintaining a focus on all of these other objectives. There is a risk of losing one’s values in this race, although this is a trade-off that military commanders have been aware of for much of the history of armed conflict; it is not new to AI.

There are also operational realities slowing widespread adoption of AI, and these are hardly unique to Canada. They are real issues that should not be overlooked, and they cannot be changed overnight. These include:

  • Maintaining functional interoperability within NATO.
  • Implementing data management and governance in arguably the most complex organization in government. While not the subject of today’s discussion, I can’t underscore enough the data management challenge posed by many of these capabilities!
  • Convincing a limited labour pool not only to work for the government, but to do so under security restrictions that may prevent them from openly showcasing their work.

Finally, and I can’t overstate this, there is a real cost-benefit analysis to be done with AI, and especially robotics, because the opportunity cost of certain choices can have deadly consequences down the road. With a relatively modest defence budget, can a medium or small power afford to invest in unproven robotics platforms to the detriment of a better-understood capital purchase such as a ship or armoured vehicle? Or invest in model development that may fail, or require significant human curation of data to succeed? On the flip side, with free tools and existing data, can a military afford not to experiment with this suite of technologies, especially in the case of disembodied AI?

In my mind, the challenge facing the entire NATO alliance — and beyond — over the next couple of years will be to help establish operational standards and policies that ensure an informed, ethical deliberation occurs wherever necessary. Even beyond that, we need to be able to translate these ethical imperatives into system requirements for the models and platforms we want built. We need to realize that, to some extent, the employment of AI by militaries worldwide is somewhat of an inevitability, so it’s not a matter of whether we go down that road, but how we do it justly.
