Considering moral agency in artificial intelligence and autonomous vehicles


Edited by Waverley He.

KITT: Since you haven’t asked, it was demeaning; demoralizing. 
Michael: What? 
KITT: Police impound.
- Knight Rider, Episode 1.22: Short Notice

Nowadays, the possibilities — and dangers — of artificial intelligence have saturated the writings of mainstream media tech journalists.

Take Monday’s Google News page, for example. In the span of just five days, over 150 articles were written about the now-notorious chatbot dialogue recorded by researchers at the Facebook AI Research (FAIR) lab.

Notably, Gizmodo is one of the few top sources whose coverage isn’t written to incite panic. Good on you, Gizmodo.

To some degree, the sense of alarm generated by these articles isn’t unwarranted. While the capabilities of and uses for artificial intelligence (AI) have grown dramatically over the past several years, the vast potential of AI remains largely unexplored. And if you find yourself concerned by our ostensible inability to manage this innovation, don’t worry — you aren’t alone. A number of prominent figures, including Stephen Hawking and Elon Musk, have cautioned against the rapid pace at which we are developing AI technology. Musk has even criticized Mark Zuckerberg for not taking the threat posed by artificial intelligence seriously enough.

However, the nuances of the debate over artificial intelligence are often misconstrued and inaccurately pigeonholed in popular media. Although it is hard to grasp how radically AI technology might change our ways of living, the scope of these discussions should extend well beyond the slight risk that nascent chatbots are conspiring to destroy humanity. Have we considered the dangers of Facebook building pattern-recognition tools for mental health, when this technology might produce too many false positives? What about the production of self-driving vehicles, which may displace freight truck drivers from their jobs? Clearly, these issues differ widely in both their ethics and their impact.

I believe we have different “gut feelings” about whether we are comfortable introducing artificial intelligence into different scenarios. Moreover, I believe that these instincts — and our broader concerns with this technology — stem from whether we think specific AI algorithms make autonomous, intentional decisions that they can be held responsible for. In other words, the underlying cause of our perceptions of AI (both positive and negative) is our consideration of moral agency.

Let’s explore this idea in depth with a popular example of artificial intelligence that challenges these gut feelings: autonomous vehicles.

Self-driving trolleys: an overview

The classic moral dilemma: the trolley problem.

You’ve probably heard of the infamous “trolley problem”. A trolley is barreling down a track, which branches up ahead. Five people are tied to the first branch, while one is tied to the second. In your hand is a lever that determines the direction of the trolley. If you choose not to do anything, the trolley will continue along the first branch and kill five; pulling the lever, however, will divert the trolley toward the second path and kill just the one.

It’s easy enough to see how this problem would stump a human. We can also imagine that two different people might answer this question differently. One person might prioritize minimizing the number of lives lost, while another might be unwilling to actively cause someone’s death. How, then, can we expect an algorithm to make a decision that we may not even trust ourselves to make?
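
To see how two defensible rules can resolve the same scenario in opposite ways, here is a toy Python sketch; both decision functions are invented for this post and are not drawn from any real system:

```python
# A toy illustration, not a real ethics engine: both decision rules below are
# invented for this post, and the "scenario" is just a pair of death counts.
def minimize_deaths(deaths_if_no_action: int, deaths_if_pull: int) -> str:
    """Utilitarian rule: pick whichever option kills fewer people."""
    return "pull lever" if deaths_if_pull < deaths_if_no_action else "do nothing"

def never_actively_divert(deaths_if_no_action: int, deaths_if_pull: int) -> str:
    """Deontological rule: refuse to take an action that directly causes a death."""
    return "do nothing"

# Five people on the first branch, one on the second.
print(minimize_deaths(5, 1))         # -> pull lever
print(never_actively_divert(5, 1))   # -> do nothing
```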

When it comes to self-driving cars, the dilemma seems to map on perfectly — an autonomous vehicle may one day have to ‘choose’ between minimizing total deaths and protecting its passengers. Plenty of manufacturers and ethicists have weighed in on this trolley-problem-style debate. One Mercedes-Benz representative asserted that the company would prioritize the lives of the passengers in such a scenario, although Mercedes-Benz quickly retracted his statement, saying that “neither programmers nor automated systems are entitled to weigh the value of human lives.”

Regulating autonomous vehicles

The law has offered conflicting insights on how to regulate and address this problem. In February 2016, the National Highway Traffic Safety Administration (NHTSA) seemed to entertain the idea that the navigation system of Google’s self-driving cars is equivalent to a human “driver”. In a letter to Google’s Director of the Self-Driving Car Project, the NHTSA states that:

“NHTSA will interpret ‘driver’ in the context of Google’s described motor vehicle design as referring to the SDS [self-driving-system], and not to any of the vehicle occupants […] even if it were possible for a human occupant to determine the location of Google’s steering control system, and sit ‘immediately behind’ it, that human occupant would not be capable of actually driving the vehicle as described by Google. If no human occupant of the vehicle can actually drive the vehicle, it is more reasonable to identify the ‘driver’ as whatever (as opposed to whoever) is doing the driving. In this instance, an item of motor vehicle equipment, the SDS, is actually driving the vehicle.”

To many, classifying the SDS as the driver is significant because it suggests that the autonomous vehicle has equivalent agency to a human being (at least from a liability perspective).

Google’s self-driving car does not require, or allow, human intervention.

However, in September 2016, the NHTSA also released a fifteen-point checklist of safety expectations for semiautonomous and driverless cars. On pages 26 and 27, the checklist states that:

“Since these decisions potentially impact not only the automated vehicle and its occupants but also surrounding road users, the resolution to these conflicts should be broadly acceptable […] Algorithms for resolving these conflict situations should be developed transparently using input from Federal and State regulators, drivers, passengers and vulnerable road users, and taking into account the consequences of an HAV’s actions on others.”

Germany’s minister of transport even offered three rules for autonomous vehicles that could be used to create such algorithms to resolve conflict situations (referencing Asimov’s laws of robotics):

1. It is clear that property damage always takes precedence over personal injury.
2. There must be no classification of people, for example, based on size, age, or the like.
3. If something happens, the manufacturer is liable.

In other words, there are two conflicting implications that contribute to the public misunderstanding of the trolley problem. The first — stemming from the original NHTSA statement — is that certain self-driving cars (i.e. Google’s) are capable of being responsible for their own decisions. Because these cars cannot be manually operated, they assume pseudo-human responsibility in the eyes of the law.

The second implication — stemming from the NHTSA’s later checklist and from Germany’s three rules — is that it is possible for carmakers and programmers to control the decisions of autonomous vehicles. The directive for manufacturers to develop algorithms that output “broadly acceptable” solutions to ethical conflicts implies that such a task can actually be accomplished.

This leaves us at a crossroads. Do we accept that a) these vehicles are able to make fully autonomous decisions without human input, or that b) all outputs produced by such vehicles are human-influenced?

Training decision-making

To evaluate these two competing claims, let’s examine the methodology underlying decision-making in self-driving cars.

Google’s autonomous navigation system operates by using LIDAR (Light Detection and Ranging), a technology that emits laser pulses to sense the external environment and determine the car’s position in relation to surrounding objects. This process is entirely computational, and at first glance it may seem to suggest that self-driving cars are able to calculate novel decisions based on the environment they sense.
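
As a rough illustration of what it means to localize against a pre-built map, here is a deliberately simplified Python sketch. It is not Google’s actual pipeline; the map points, the scan, and the scoring function are all invented. The idea is only that the car’s position is estimated by finding the candidate position under which the current scan lines up best with the stored map:

```python
# A minimal, hypothetical localization sketch: estimate the car's position by
# finding the candidate position that best aligns a LIDAR-style scan with a
# pre-built map of known obstacle coordinates. All numbers are made up.
import numpy as np

# Prior map: 2-D coordinates of known obstacles (e.g. curbs, poles), built offline.
map_points = np.array([[5.0, 2.0], [6.0, 2.1], [7.0, 1.9], [5.5, -3.0], [6.5, -3.1]])

# Current scan: the same obstacles as seen from the car's (unknown) true position.
true_position = np.array([1.0, 0.5])
scan_points = map_points - true_position  # points expressed relative to the car

def match_score(candidate, scan, prior_map):
    """Sum of distances from each transformed scan point to its nearest map point."""
    world_points = scan + candidate
    dists = np.linalg.norm(world_points[:, None, :] - prior_map[None, :, :], axis=2)
    return dists.min(axis=1).sum()

# Brute-force search over a grid of candidate positions; the best-scoring one wins.
candidates = [np.array([x, y]) for x in np.arange(0.0, 2.01, 0.1)
                               for y in np.arange(0.0, 2.01, 0.1)]
best = min(candidates, key=lambda c: match_score(c, scan_points, map_points))
print("Estimated position:", best)  # should land very close to (1.0, 0.5)
```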

However, this decision-making is heavily dependent upon annotated datasets used to train these navigation systems. These include “highly detailed, three-dimensional, computerized maps — which pinpoint a car’s location and understand its surroundings.” These digital representations are produced by the meticulous work of researchers, who tag and label each object in the environment in order to help the car optimize its decisions.

A visualization of how Google’s self-driving car uses an intricately encoded map to navigate the street shown in the bottom left.

In other words, such vehicles are highly reliant on supervised learning — a machine learning approach in which algorithms are provided with a training dataset of different inputs (e.g. cars, curbs, other objects) and their corresponding outputs (e.g. what happens when a car hits or doesn’t hit them, how much room a car has to drive by). Through many trials, these algorithms are trained to produce ‘correct’ values as determined by researchers (e.g. driving without causing accidents or injuries).
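
For a sense of what supervised learning looks like in practice, here is a toy Python sketch; the features, labels, and numbers are entirely hypothetical and not drawn from any real perception system. The point is that the trained model can only reproduce the categories its human annotators defined:

```python
# A toy supervised-learning sketch with invented data: labeled examples are used
# to train a classifier that predicts labels for new, unseen objects.
from sklearn.tree import DecisionTreeClassifier

# Training inputs: [approximate height in meters, is the object moving? (0/1)]
X_train = [[1.7, 1], [1.6, 1], [0.2, 0], [0.15, 0], [1.5, 0], [4.0, 1]]
# Corresponding labels supplied by human annotators.
y_train = ["pedestrian", "pedestrian", "curb", "curb", "signpost", "vehicle"]

clf = DecisionTreeClassifier().fit(X_train, y_train)

# The model's "decisions" are bounded by the labels humans chose to provide.
print(clf.predict([[1.65, 1], [0.18, 0]]))  # e.g. ['pedestrian' 'curb']
```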

The use of complex, human-annotated training datasets is clear not only in general urban navigation, but also in other scenarios; for example, some navigation systems have ‘learned’ parking techniques from “observing a human drive the car in a parking lot between various starting points and destinations”. Simply put, the self-driving cars we’ve seen are only able to make ‘autonomous’ decisions within the constraints of a framework created for them by researchers.
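
To make the parking example concrete, here is a minimal behavior-cloning sketch in Python; every state and steering value is made up for illustration. The ‘learned’ policy simply fits the commands a human demonstrator chose, so it can only interpolate what it was shown:

```python
# A toy behavior-cloning sketch with invented numbers: fit a model that maps the
# car's state to the steering command a human demonstrator applied in that state.
import numpy as np

# Recorded demonstrations: [lateral offset from the parking spot (m), heading error (rad)]
states = np.array([[1.0, 0.10], [0.5, 0.05], [2.0, 0.30], [-1.0, -0.12], [-0.5, -0.04]])
# Steering angles the human driver applied in those states (rad).
steering = np.array([-0.20, -0.10, -0.45, 0.22, 0.09])

# Fit a simple linear policy, steering ~= states @ w, by least squares.
w, *_ = np.linalg.lstsq(states, steering, rcond=None)

# The learned "parking behavior" only reflects what the demonstrator did.
new_state = np.array([1.5, 0.2])
print("Predicted steering:", new_state @ w)
```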

Moral agency: autonomy, intentionality, and responsibility

A moral agent is a being who acts with internal notions of right and wrong and can be held accountable for their own actions. In thinking about the trolley problem, the lever-puller must always be someone who can act with moral agency; if they are unable to understand the motivations behind or consequences of their actions, then they are incapable of facing the internal ethical dilemma associated with their decision, regardless of what they choose.

John P. Sullins establishes three conditions in “When Is a Robot a Moral Agent?” with which to evaluate the moral agency of any robot:

1. autonomy is “simply that the machine is not under the direct control of any other agent or user” (28);
2. intentionality is derived from a “predisposition or ‘intention’ to do good or harm” (ibid);
3. and responsibility is understood as its ability to comprehend its larger role within a schema, as with a caretaker robot and its duty within the healthcare system (29).

To address each of Sullins’s prerequisites to moral agency in the context of autonomous vehicles:

  1. As originally stated by the NHTSA, Google’s self-driving cars are autonomous ‘drivers’ because human passengers cannot alter their actions.
  2. However, given that self-driving algorithms are trained through human-influenced supervised learning, they are not themselves predisposed to do either good or harm. In other words, they lack intentionality. Their incentive to perform a certain action is driven by optimization, not by a conscious understanding of the “good” or the “bad”.
  3. The 3-D, computerized environment that is mapped in training datasets is only an approximation of the real world. By selectively annotating objects and choosing to ignore others, humans provide self-driving algorithms with a limited representation of reality. While autonomous vehicles can localize themselves within the context of these sparse digital environments, they are unable to comprehend their position in the real world — the appropriate schema — for themselves. Thus, such vehicles are not responsible.
Another visualization of Google’s self-driving car environment, and what it “sees” at a given moment.

Because autonomous vehicles lack both conscious intentionality and responsibility, they cannot have moral agency. It is therefore more appropriate to attribute any ethical decisions to the manufacturers, rather than to the robots themselves.

Ascribing moral agency to these vehicles — or to any artificial intelligence technology, such as Facebook’s chatbots — would lead us to believe that these algorithms are more “intelligent” than they actually are. By equating Google’s self-driving car technology to a legal “driver”, the NHTSA largely ignores the extent of human influence on the car’s decisions. Researchers provide the labeled maps and input-output frameworks with which these vehicles optimize their driving, whether on a city street or in a parking lot. Moreover, even if programmers try to avoid explicitly encoding ethical values or preferences altogether, an autonomous vehicle can still implicitly reflect the conflict-resolution rules its manufacturer believes in and intends.

Therefore, if we’re going to use the trolley problem to model self-driving cars and their choices, it’s more correct to think of them as levers than lever-pullers due to their lack of moral agency and conscious decision-making. The autonomous vehicles we consider are indeed capable of making independent decisions, but we shouldn’t forget the impact that their producers have had on such outcomes. While manufacturers do not directly make decisions in all conflict situations, they are ultimately the real lever-pullers — they train the vehicles with datasets that contain ‘optimal’ outcomes. Manufacturers should thus be held responsible for the decisions that autonomous vehicles make.

Reframing the trolley problem

In short,

  1. The Google self-driving cars we have discussed so far cannot be equated to human “drivers”, contrary to what the NHTSA claims, because they lack moral agency. While they are sufficiently autonomous in that they have no steering apparatus a human could use, they lack conscious intent independent of their manufacturers and cannot take responsibility for their actions. John P. Sullins provides the framework for analyzing this point.
  2. Because autonomous vehicles lack moral agency, it’s inappropriate to identify them as the lever-pullers in the context of the trolley problem. Instead, it’s more appropriate to think of them as the levers that manufacturers pull — the instruments programmers act on, whether implicitly or explicitly.

In any case, it will be rare for autonomous vehicles to encounter scenarios identical to the trolley problem. Instead, the most common choices they (and their manufacturers) face will most likely be about how best to mitigate injury, not necessarily how to prevent death. The general solution may simply be to “slam on the brakes”, which gives the car the best control in an emergency. As one engineer puts it,

Slowing down allows you [the car] to be “much more confident about things directly in front of you, just because of how the system works, but also your control is much more precise by slamming on the brakes than trying to swerve into anything.”
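
Here is a deliberately simplistic sketch of what a ‘brake first’ rule could look like in code; the function, the constant-deceleration shortcut, and the numbers are all invented for illustration and bear no resemblance to a production system:

```python
# A hypothetical "brake first" conflict-resolution rule: the vehicle always sheds
# speed rather than swerving, echoing the engineer's point about precise control.
def emergency_response(obstacle_distance_m: float, speed_mps: float,
                       braking_decel_mps2: float = 6.0) -> str:
    """Return a high-level action when an obstacle appears ahead."""
    # Distance needed to stop from the current speed at a constant deceleration.
    stopping_distance = speed_mps ** 2 / (2 * braking_decel_mps2)
    if obstacle_distance_m > stopping_distance:
        return "brake hard"  # the car can stop in time
    # Even when a collision may be unavoidable, braking sheds kinetic energy and
    # keeps the vehicle's path predictable; the rule never chooses to swerve.
    return "brake hard and alert occupants"

print(emergency_response(obstacle_distance_m=30.0, speed_mps=15.0))  # brake hard
```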

Maybe the trolley problem — which often paints a picture of life-or-death decision-making — is simply an imperfect and incomplete model for thinking about conflict-resolution ethics, regardless of whether we believe the lever-puller is the autonomous vehicle or the manufacturer. Given that 20–50 million people are injured or disabled by road crashes around the world every year, perhaps we should move on from the trolley problem and instead think about best practices for preventing non-fatal accidents.

Liability as an extension of moral agency

Developing best practices for conflict resolution, both non-fatal and fatal, will always require that we consider liability. And given that we attribute legal responsibility for autonomous vehicles to the manufacturer, we must ask several important questions. If certain states or countries require that all autonomous vehicles have a licensed driver and a steering wheel (as a previous California draft law did), how does this muddy the grounds for considering liability? On the other hand, given Google’s belief that less human operability reduces accidents, are manufacturers comfortable assuming all liability? How will they tackle the challenge of allowing the algorithm to respond adequately to real-time emergencies like low tire pressure, without the intervention of its passengers? What legally constitutes this ‘adequacy’?

It will thus be increasingly important to consider liability as an extension of moral agency when evaluating future threats or concerns in AI development. As the technology advances, there will inevitably continue to be misinformed takes on its dangers — and subsequently on assessments of liability — because of our instincts about how an actual moral agent should be held responsible in such scenarios.

We should make sure that we actively and critically challenge these gut feelings. Yes, it is unsettling that, as potential passengers, we may have to relinquish virtually all control to our vehicles, save for maybe a single “panic button” e-brake. However, our unease at being at the mercy of our cars doesn’t mean that autonomous vehicles possess moral responsibility. They can’t make moral decisions, nor can we train them to. For now, they simply reflect the moral frameworks that their creators favor — they act without conscious intent.

Over a year ago, Joshua Brown of Canton, Ohio died in a Tesla Model S crash. Out of the thirty-seven minutes he had his car on Autopilot, he placed his hands on the steering wheel for only twenty-five seconds, in spite of “seven separate visual warnings from the system”. Before his death, the buzz over Tesla’s nascent level two autonomous driving system was high, and consumers had already posted several videos online in which they posed with their hands off the steering wheel — including one where the sole occupant was in the back seat, away from the driver’s seat entirely.

For the purposes of this blog post, I only consider levels three and four of autonomy for self-driving vehicles (out of the NHTSA’s five broad levels), which eliminate human intervention in specific environments or in all environments, respectively. But as lower-level autonomy — like level two, which allows for human intervention in some situations — becomes more advanced, it will be more important to make sure consumers fully understand the technology. Manufacturers have a responsibility to ensure this understanding and to take on complete legal liability in the case of level three and four vehicles.

It’s dangerous to treat AI as something that it isn’t, or to overestimate its abilities. Misrepresenting it is not only erroneous but can also have terrible consequences: it leads to misunderstandings of the technology’s limitations, as well as an overestimation of how safe a car can keep a reckless driver. It is not artificial intelligence by and large that we should be wary of, but rather the motivations and considerations with which programmers create such algorithms.