The Loss of Control in Autonomous Vehicles

By surrendering the wheel, do we surrender ourselves?

Emma Yu
Design Ethics
7 min read · Jun 3, 2024

--

A Waymo vehicle on public roads in 2022, photo by Justin Sullivan/Getty Images

Motor vehicle crashes are a leading cause of injury-related death, killing more than 100 people per day in the United States alone, where an estimated 94 percent of crashes involve human error. Autonomous vehicles, or self-driving cars, have steadily progressed from concept toward reality over the past few decades, propelled by technological advances and the need to address road safety. Yet despite their potential to save lives, the development of autonomous vehicles raises serious concerns that need to be addressed.

The defining characteristic of an autonomous vehicle is that the vehicle itself is “making decisions” on the road, with no human intervention. Our lives rest in the hands of the vehicle’s programming, which poses ethical challenges to our autonomy, especially in the dangerous moments leading up to a crash.

I argue that autonomous vehicles force us to surrender the fundamental human experience of autonomy, raising ethical concerns about our sense of free will and the disconnect between action, consequence, and responsibility in life-or-death situations.

Defining autonomous vehicles

The Society of Automotive Engineers (SAE) defines six levels of driving automation in their classification system:

Diagram of SAE’s levels of driving automation: Level 0 (no automation), Level 1 (driver assistance), Level 2 (partial automation), Level 3 (conditional automation), Level 4 (high automation), and Level 5 (full automation)

No vehicles on the market are currently classified as Level 5. Tesla’s Autopilot technology is classified as Level 2 and requires the driver’s attention at all times. Level 4 robotaxis from Alphabet’s Waymo and General Motors’ Cruise drive autonomously around major cities including San Francisco, Los Angeles, and Phoenix. These autonomous vehicles are not available for individual ownership; they are limited to commercial use in certain settings (e.g., ride-share services in select cities), research, and testing. As Level 4 and 5 autonomous vehicles grow more prevalent, it is important to address their ethical concerns, specifically with regard to surrendering human autonomy.
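To make the classification concrete, here is a minimal Python sketch of how the six levels might be represented in software. The level names follow SAE J3016, but the supervision check is my own illustrative simplification, not anything defined by the standard:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """The six levels of driving automation defined in SAE J3016."""
    NO_AUTOMATION = 0           # human performs all driving tasks
    DRIVER_ASSISTANCE = 1       # system assists with steering or speed, not both
    PARTIAL_AUTOMATION = 2      # system steers and controls speed; driver supervises
    CONDITIONAL_AUTOMATION = 3  # system drives; human must take over on request
    HIGH_AUTOMATION = 4         # system drives itself within a limited domain
    FULL_AUTOMATION = 5         # system drives itself everywhere, unconditionally

def constant_supervision_required(level: SAELevel) -> bool:
    """Simplification: Levels 0-2 require a human watching the road at all
    times; at Level 3 and above the system drives itself (though Level 3
    still expects a human ready to take over when asked)."""
    return level <= SAELevel.PARTIAL_AUTOMATION

# Tesla Autopilot (Level 2) demands the driver's attention at all times.
assert constant_supervision_required(SAELevel.PARTIAL_AUTOMATION)
# A Waymo robotaxi (Level 4) drives unsupervised within its service area.
assert not constant_supervision_required(SAELevel.HIGH_AUTOMATION)
```

The ethical questions in this essay concern the right side of that boundary: the levels where no human is supervising at all.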

Autonomy and the human experience

Autonomy is a fundamental part of the human experience. It is the ability to be free and shape our own lives. Our decisions may have their flaws and lead to unintended consequences, but this risk is part of being human. Taking ownership of our successes and failures is also how we grow as individuals and learn how to navigate through life.

There are many concerns about what the growing use of “smart technology” and artificial intelligence means for humanity’s future. Relying on these technologies can erode human agency and our ability to think for ourselves. Although they can “make our lives easier” by performing actions for us, that convenience comes at the cost of diminished independence and individual power. The complexity of this technology also creates a knowledge gap, leaving us to sacrifice our independence, privacy, and power to a system we are blind to.

Giving up our autonomy to a machine would mean losing part of what it means to be human — but that’s exactly what would happen with the widespread adoption of autonomous vehicles.

The loss of control

When using autonomous vehicles, we lose control and free will because we sacrifice our ability to make decisions. This becomes especially alarming in the unpredictable moments of decision-making and reaction leading up to a crash. In life-or-death situations, we surrender our human autonomy to a machine and its creators.

An autonomous vehicle relies on several technologies to collect data about its surrounding environment; that data is fed to AI-based algorithms that determine the vehicle’s next action. As human drivers, we rely on our own instinct and reaction. With an autonomous vehicle, the decision is made for us by the vehicle’s algorithm, and that algorithm was created by someone else. We surrender our control and our values, particularly in the moments before a crash that can determine life or death for us or those around us. What happens next is up to an algorithm and what its designers predetermined, embedded with values and decisions that are not ours. This also poses further challenges with the disconnect between action, consequence, and responsibility.
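As a rough illustration of this pipeline, consider the sketch below. The names here (`read_sensors`, `plan_action`, `WorldState`) are hypothetical stand-ins, not any vendor’s actual API; the point is structural: the loop runs sense, decide, act, and at no step is the passenger’s own judgment consulted.

```python
import random
from dataclasses import dataclass

@dataclass
class WorldState:
    """Fused snapshot of the environment (hypothetical structure)."""
    obstacle_distance_m: float  # distance to nearest detected obstacle
    speed_mps: float            # current vehicle speed

def read_sensors() -> WorldState:
    """Stand-in for fusing camera, lidar, and radar data."""
    return WorldState(obstacle_distance_m=random.uniform(1, 100), speed_mps=15.0)

def plan_action(state: WorldState) -> str:
    """Stand-in for the AI-based planner. The threshold below encodes a
    value judgment fixed by the designers, not by the passenger."""
    return "brake" if state.obstacle_distance_m < 20 else "cruise"

def control_loop(steps: int = 5) -> None:
    """The vehicle's core loop: sense, decide, act."""
    for _ in range(steps):
        state = read_sensors()
        action = plan_action(state)
        print(f"obstacle at {state.obstacle_distance_m:.0f} m -> {action}")

if __name__ == "__main__":
    control_loop()
```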

Who is responsible?

Several thought experiments call the ethics of autonomous vehicles into question. Should a vehicle save as many lives as possible, or should its first priority be to protect its passengers at all costs? If the vehicle prioritizes its own passengers, it could break traffic laws and swerve into another lane if that is the safest calculated decision. But this could also mean crashing into and gravely injuring or killing drivers and passengers in nearby vehicles, including those more vulnerable on motorcycles and bicycles, who were following the law. Whether you as a driver would have made the same decision does not matter, because the vehicle made it for you. Thus we see a disconnect between the actions of the driver, the vehicle, and their surroundings; the consequences that result; and who or what bears responsibility.
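To see how such values get baked in before the car ever leaves the factory, here is a deliberately simplified sketch. The two policies, their names, and the risk numbers are all hypothetical illustrations, not how any real vehicle is programmed; the point is that whichever scoring function ships, the choice was made by a designer, not by the person in the seat:

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    """A candidate emergency action and its predicted outcome (hypothetical)."""
    name: str
    passenger_risk: float    # 0.0 (safe) to 1.0 (likely fatal) for occupants
    bystander_risk: float    # same scale, for people outside the vehicle
    breaks_traffic_law: bool

def utilitarian_score(m: Maneuver) -> float:
    # Minimize total expected harm, weighing everyone equally.
    return m.passenger_risk + m.bystander_risk

def passenger_first_score(m: Maneuver) -> float:
    # Protect occupants above all; harm to bystanders barely registers.
    return m.passenger_risk * 10 + m.bystander_risk * 0.1

def choose(maneuvers, score):
    """The 'decision' in the moments before a crash: whichever scoring
    function was shipped decides. The passenger does not."""
    return min(maneuvers, key=score)

options = [
    Maneuver("brake hard in lane", passenger_risk=0.5,
             bystander_risk=0.0, breaks_traffic_law=False),
    Maneuver("swerve into next lane", passenger_risk=0.1,
             bystander_risk=0.5, breaks_traffic_law=True),
]

print(choose(options, utilitarian_score).name)      # -> brake hard in lane
print(choose(options, passenger_first_score).name)  # -> swerve into next lane
```

The same situation, the same two options, and two different outcomes, depending entirely on a line of code written long before the crash.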

So how do we regulate a machine that predetermines these life-altering decisions? As a society, we operate under a social contract: every driver on the road, ourselves included, is licensed and will face legal consequences when necessary. Our legal system is built on the understanding that accidents are caused by human behavior and reaction.

The way we place blame and responsibility on individuals comes from the understanding that we as humans have free will and are the ones making decisions.

A person who is at fault can be held accountable for their decisions and has the opportunity to grow and learn from their mistakes. Holding a piece of automated technology accountable is much more difficult, especially when the vehicle is programmed to intentionally break traffic laws in life-or-death situations. Any decision would still be made by the vehicle’s algorithm and its designers, not by us as individual drivers. If we place responsibility on the creators of the autonomous vehicle, who at the company gets blamed? This dilemma only deepens as we implement more “autonomous” technologies in our lives. What happens if and when aspects of healthcare and education are automated? These circumstances could lead to significant complications and crises as we question the fundamentals of the institutions and systems we live by.

What about the positives?

Despite these concerns, autonomous vehicles have real potential to eliminate human error in driving and save lives on the road. They have significantly faster reaction times than human drivers, and early data suggest they may collide less frequently than conventional vehicles. They can also expand access for those who are unable to drive, including the elderly and people with limited mobility. And they can make travel more efficient: passengers can work, relax, or engage in other activities while being driven, making the most of travel time as the vehicle optimizes the route to their destination.

While these benefits are compelling, we must be cautious as we move forward with autonomous vehicles. They can still make mistakes or miscalculate the actions of pedestrians and human drivers. These vehicles have driven into wet concrete, caused traffic jams, blocked the path of ambulances, and dragged a pedestrian twenty feet while trying to pull over to the curb.

These incidents highlight the unintended consequences of autonomous vehicles, despite their potential to reduce traffic accidents and save lives.

Even if these vehicles function perfectly as intended, we still need to weigh the cost of implementing autonomous vehicles and other automated technologies. They are here to stay, but we must be deliberate about how we handle accountability within the structures that hold our society together. If we are not, we risk the framework of our institutions and systems falling apart.

Autonomy has immeasurable value to human identity. It empowers us to take control of our lives, explore freely, and grow into our unique individual selves. It is also fundamental to the way we understand the systems of our society. Using autonomous vehicles takes that away from us and raises questions about responsibility in the structures ingrained in our society. Looking to the future, incorporating more autonomous technologies into our everyday lives could exacerbate the problem. We lose part of what it means to be human when we rely on a machine rather than take the risk of making our own choices. Moving forward, we must preserve parts of our autonomy. The decisions we make in handling this transition will shape the world we live in and its influence on humanity.

If our future is a world where we have to give up our autonomy and freedom… is that a world worth living in?
