Taking AI Governance Seriously

European Parliament report sets out a legal framework for AIs and DAOs

Greg McMullen
IPDB Blog
6 min read · Aug 25, 2016


Image: AlejandroLinaresGarcia — Toy robots on display at the Museo del Objeto del Objeto in Mexico City, CC BY-SA 3.0.

For years, science fiction writers, transhumanist philosophers, economists, and countless tech barons (see: Bezos, Gates, Musk, Nadella) have warned of the societal implications of artificial intelligence. They’re not necessarily worried about Skynet. They’re worried about practical, day-to-day AIs—automated manufacturing, self-driving vehicles, delivery by drone, and so on—and how to integrate them into society in a way that is good for everyone.

In June, the European Parliament’s Committee on Legal Affairs released a “Draft Report with recommendations to the Commission on Civil Law Rules on Robotics” (the Report).

The Report earned a number of “lol, Europe” news stories from American tech outlets about how the EU was about to classify robots as “electronic persons” and require pension payments for robot workers. The general consensus was that the proposals were “batshit crazy”, another example of regulatory overreach from Brussels, and a threat to European innovation in robotics.

Jokes and policy preferences aside, however, the Report is a big deal. It’s a serious legislative body taking AI governance seriously. It provides a model for policies to address some of the concerns around AI, and a path toward a legal basis for decentralized autonomous organizations (DAOs) powered by AIs.

Let’s take a look at some of the biggest recommendations from the Report, and why its framing of AI governance matters.

Robots are people too

Personhood is key to the ability to engage with law. It grants the ability to enter into contracts and own property. Persons have certain rights in international and domestic law. Persons also have responsibilities under the law, and can be held liable for a breach of those duties.

That person can be an individual human (a natural person) with the capacity to engage with law. We have also created legal fictions of personhood (the legal person) to let other entities engage with law. For example, a corporation is not a “person”, but we agree to treat it like one.

But what about robots or AIs? They could be owned by a natural or legal person, but this doesn’t fully account for the autonomy AIs may have. They don’t fit in either category.

For governance to be effective, it needs something to hold on to. AIs need personhood. The Report starts us down that path by recognizing that we need a new category of personhood to properly account for the unique issues of AIs: the “electronic person”.

[…] whether they should be regarded as natural persons, legal persons, animals or objects — or whether a new category should be created, with its own specific features and implications as regards the attribution of rights and duties, including liability for damage […]

The Report draws out a number of interesting frameworks for handling the rights and responsibilities of AIs, or electronic persons. We’ll look at two: 1) liability, and 2) intellectual property.

1) AIs and liability

Persons can be held liable for their own action or inaction in certain circumstances.

Persons can also be responsible for the objects or animals under their control, but that responsibility diminishes as the person’s control decreases or the outcome becomes less predictable.

But what about AIs, which span a range of risk and autonomy that doesn’t fit neatly into existing laws? There are at least three categories of AIs to consider:

1. No autonomy. The AI is barely worthy of the name. It’s a simple tool that performs a single task, usually in a controlled space and according to pre-defined instructions. Responses to situations can be accurately predicted. Examples: Roombas, industrial robots.

This category is easy to deal with under existing law. Although they can be powerful tools, AIs in this category are clearly under the operation of a human. If you set one of these AIs in motion, you could predict the kinds of harm it could cause pretty easily, and it is not unreasonable to hold the owner liable for predictable harms.

2. Some autonomy. The AI is a more sophisticated tool that attempts to achieve a pre-defined goal using pre-defined parameters, but that sometimes has unexpected results. Examples: Self-driving cars, Deep Dream image generation.

My Roomba finds an endless variety of things to get caught on and I can never predict all of them, but I have a general awareness of the kinds of trouble it can cause. It has never brandished a knife or found its way outside to threaten my neighbours. With a dose of autonomy, this could change.

This category of AI starts to blur the lines of legal liability. The AI may have to make life-or-death choices (Trolley Problem game, anyone?), and the solutions it comes up with may not be the ones we like.

3. Full autonomy. The robot or AI makes decisions on its own to achieve objectives defined by a human (or by itself!) but without specific instructions on how to achieve those objectives. It often operates in unexpected ways. Examples: C-3PO, HAL 9000.

2001: A Space Odyssey. Source: GIPHY.

Full autonomy is perhaps the biggest problem category. In the words of the Report:

[T]he more autonomous robots are, the less they can be considered simple tools in the hands of other actors[.]

The Report offers a common sense solution for dividing responsibility among the humans involved in creating the robot or AI:

[L]iability would be proportionate to the actual level of instructions given to the robot and of its autonomy, so that the greater a robot’s learning capability or autonomy is, the lower other parties’ responsibility should be[.]

The framework provided by the Report is even more nuanced than it looks at first glance: it leaves room for multiple parties’ involvement. For example, a self-driving car could be owned by an individual, configured by a dealership, assembled by a manufacturer, and powered by an AI-as-a-Service provider. Who is responsible for the decisions made by the AI?
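The Report stops at the principle, but the proportionality idea is concrete enough to sketch in code. Below is a toy model in Python of how liability might be split among the parties in that example. To be clear, this is my own illustration, not anything from the Report: the 0-to-1 autonomy scale, the per-party “instruction” weights, and the idea that the electronic person itself absorbs the remainder are all assumptions invented for the sketch.

```python
# Toy model of the Report's proportionality principle (illustrative only).
# Assumptions (mine, not the Report's): autonomy runs from 0.0 (simple
# tool) to 1.0 (fully autonomous), and each human party has a made-up
# weight reflecting how much instruction/configuration they contributed.

def apportion_liability(instruction_weights: dict[str, float],
                        autonomy: float) -> dict[str, float]:
    """Split liability so human shares shrink as autonomy grows."""
    total = sum(instruction_weights.values())
    human_pool = 1.0 - autonomy  # humans answer for what they instructed
    shares = {party: human_pool * (weight / total)
              for party, weight in instruction_weights.items()}
    shares["electronic person"] = autonomy  # the AI bears the remainder
    return shares

# The self-driving car example, with invented numbers:
print(apportion_liability(
    {"owner": 1, "dealership": 2, "manufacturer": 4, "AI service provider": 3},
    autonomy=0.6,
))
# -> roughly {'owner': 0.04, 'dealership': 0.08, 'manufacturer': 0.16,
#             'AI service provider': 0.12, 'electronic person': 0.6}
```

With these made-up numbers the manufacturer bears the largest human share, and turning the autonomy dial up shifts liability away from all four human parties, which is exactly the direction the Report points.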

The Report doesn’t give a determinative answer, but it does give lawmakers and judges a framework as we move into a world where these questions are no longer hypothetical.

2) Artificial intelligence meets intellectual property

The infamous “Monkey Selfie” legal battle captured headlines again earlier this year, when a U.S. federal court judge ruled that the monkey in question could not hold copyright in his self-portrait, since he was not a “person” as required by the law, even though he was the creator of the image. The U.S. Copyright Office had previously taken the position that the photographer whose camera the monkey used could not own the photo, since it was the monkey’s creation.

Image: Public domain.

This same question will come up with artificial intelligences. We are already seeing AIs generate music, visual art, video game content, and even films. It is only a matter of time before there is an intellectual property dispute over a work created by an AI.

While it makes no specific recommendation, the Report does highlight the need for legislation. It:

calls on the Commission to elaborate criteria for an ‘own intellectual creation’ for copyrightable works produced by computers or robots;

This is a major step. It will give clarity to human creators who are increasingly relying on AI in their creative works. It will also set a framework for protecting creations by AIs that act more autonomously. It could even provide the legal grounding necessary for an “ArtDAO” to take the art world by storm.

Conclusions

While it’s easy to laugh about regulating “electronic persons” today, the Report is asking the hard questions we will need to ask in the next ten years. AIs are too powerful to live entirely outside of the law. It’s time to consider how AI governance will work. The Report is a good first step.
