The Road to Autonomous Vehicles: Navigating Technical & Ethical Challenges

Introduction

In 2015, Baidu predicted that it would be selling autonomous cars by 2020. One year later, Business Insider forecast that “10 million self-driving cars will be on the road by 2020,” while Lyft’s John Zimmer claimed that most rides would be carried out by driverless cars by 2021. And in 2019, Elon Musk tweeted “the color orange is named after the fruit.” That one isn’t related to self-driving cars, just a helpful reminder for all of us.

The point is, we’ve been expecting autonomous vehicles (AVs) to become commonplace for years. But our latest developments have fallen short of predictions — and many of us aren’t sure why. Moreover, we still haven’t resolved critical conversations about the societal implications of driverless cars.

In this article, we’ll consider the technical and ethical issues surrounding AVs to figure out why driverless cars aren’t ready to hit the streets just yet. Buckle up, because we’re about to go over the basics of autonomous technology, analyze the current state of AV research, and dive into some of the biggest societal concerns about self-driving vehicles.

Let’s start with the fundamentals: what even is an autonomous vehicle? An AV is any vehicle that’s capable of (1) sensing its environment and (2) moving safely with little or no human input. This includes self-driving cars, which we’ll spend most of this article talking about, but also drones, boats, and planes. Autonomous vehicles use a whole bunch of sensors to detect key features in their environment, ranging from cameras (pretty boring) to radar (more interesting, we like radio waves 🔊) to Lidar (like radar but with lasers — very very cool).
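To make the sensing part a little more concrete, here’s a minimal, heavily simplified sketch of how a car might combine distance estimates from several sensors into one number it can act on. The sensor names, readings, and weights below are all made up for illustration; real perception stacks use far fancier fusion (think Kalman filters over full 3D object tracks), not a weighted average.

```python
# Toy sensor fusion: combine noisy distance estimates (in meters) from
# multiple sensors into a single confidence-weighted estimate.
# The sensors, readings, and weights are invented for illustration.

def fuse_distance(readings: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-sensor distance estimates for the same object."""
    total_weight = sum(weights[s] for s in readings)
    return sum(readings[s] * weights[s] for s in readings) / total_weight

# Hypothetical readings for one pedestrian ahead of the car.
readings = {"camera": 13.1, "radar": 12.4, "lidar": 12.6}

# Trust lidar most, camera least. These weights are a design choice,
# not ground truth.
weights = {"camera": 0.2, "radar": 0.3, "lidar": 0.5}

print(f"Fused distance: {fuse_distance(readings, weights):.1f} m")
```

The interesting design question hiding in those three lines is how much to trust each sensor, which is why production systems weight them dynamically (lidar degrades in heavy rain, cameras degrade in darkness) instead of using fixed numbers.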

Levels of Automation

There are six levels of automation (Level 0 through Level 5) that tell us how equipped a car is to operate without a driver.

Level 0: No Automation

All operating tasks are performed by the driver. Today, most older cars (e.g., a 2005 Honda Accord) and some newer cars (e.g., a 2018 Jeep Wrangler) are still at this level. Level 0 cars might have some safety features like blind-spot monitoring and a backup camera, but the driver will always be in control of the car. So if you ignore the blind-spot alert, your 2015 Mazda 3 will let you drive straight into your garage door instead of braking automatically (this is a completely hypothetical situation that never happened).

Level 1: Driver Assistance

The driver handles all car operations but gets a little extra help from the car, including features like brake/acceleration support, lane centering, and adaptive cruise control. A few boujee old cars and a lot of standard new cars have reached this level, like the 1999 Mercedes-Benz S-Class, the 2019 Kia Soul, and the 2020 Hyundai Sonata.

Level 2: Partial Automation

The car can handle simple tasks like cruise control and lane centering at the same time, though the driver still has to stay alert and supervise. Today, only ridiculously expensive cars (e.g., new Tesla, Cadillac, and Mercedes cars 😭) have reached this level.

Level 3: Conditional Automation

Speaking of Tesla, they’re actually really close to reaching Level 3. Level 3 cars can fully take over driving in specific situations (e.g., a traffic jam), but the driver still has to be ready to take back control when the car asks.

Level 4: High Automation

This is what you probably think of when you picture a self-driving car. Level 4 cars can carry out all driving operations in predictable conditions, but should still have a driver at the wheel just in case.

Level 5: Full Automation

Cars can navigate completely unpredictable situations and have no need for a human driver. We’re a long way from Level 5; we still haven’t reached Level 4, and there are a lot of roadblocks to get past before we have the technology for complete autonomy.
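If it helps to see the whole taxonomy in one place, here’s a quick sketch that encodes the levels as a Python enum. The one-line summaries paraphrase the descriptions above; this is just a reference table in code form, not an official standard.

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """Levels of driving automation, summarized from the descriptions above."""
    NO_AUTOMATION = 0           # Driver performs all operating tasks
    DRIVER_ASSISTANCE = 1       # Car assists with braking, acceleration, or lane centering
    PARTIAL_AUTOMATION = 2      # Car handles simple combined tasks; driver supervises
    CONDITIONAL_AUTOMATION = 3  # Car drives itself in specific situations (e.g., traffic jams)
    HIGH_AUTOMATION = 4         # Car handles all predictable driving conditions
    FULL_AUTOMATION = 5         # Car handles even unpredictable situations; no human needed

def needs_human_driver(level: AutomationLevel) -> bool:
    """Under this simplified model, only Level 5 eliminates the human driver."""
    return level < AutomationLevel.FULL_AUTOMATION

print(needs_human_driver(AutomationLevel.PARTIAL_AUTOMATION))  # True
```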

Ethical Issues

But developing AVs isn’t just a technical challenge — there are also a lot of ethical issues we need to discuss before self-driving cars become the norm.

Liability

Who should have legal responsibility in events involving cars with autonomous abilities? If there’s a crash, should we blame the driver? The car manufacturer? The company that created the autonomous technology? Should we trace it all the way back to the programmer? According to current laws, the driver is at fault if they did not take reasonable precautions, and the manufacturer is at fault if they did not ensure the safety of reasonably expected use cases. But who’s to say which precautions and use cases are reasonable and which aren’t? There’s a lot of gray area in our legislation that could be exploited.

Decision-Making

While we’re on the subject of gray areas, the snap decisions that we expect drivers to make are hardly ever black or white. To slightly plagiarize the infamous trolley problem, imagine that you’re driving through your neighborhood when you see a group of three pedestrians illegally crossing the street right in front of you. You’re too close to stop, but you can swerve onto the sidewalk, in which case you’ll hit a single pedestrian waiting for their turn to legally cross. Should you hit the group of jaywalkers or the single law-abiding person? What if the people in the group are kids and the single person is an adult? What if the person on the sidewalk is one of your friends? What if you could swerve around everyone and crash into a wall, but doing so would result in your own death? There are infinite mind-twisting variations of this problem.

The main takeaway is that everyone’s choices are personal — they vary based on cultural differences and each individual’s identity. So is it even possible to program a machine to make these nuanced decisions? Who should have the power to determine how a machine ought to act in controversial situations?
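To see how messy this gets the moment you try to write it down, here’s a deliberately crude sketch of ethics as a cost function. Every weight below is an invented value judgment (should jaywalking discount someone’s safety? how much extra is the driver’s own life worth?), which is exactly the problem: someone has to pick these numbers, and there’s no objectively correct setting.

```python
# A deliberately crude "ethics as a cost function" sketch.
# Every weight here is an arbitrary value judgment invented for
# illustration; real AV planners don't (and arguably shouldn't) work this way.

def crash_cost(num_people: int, breaking_law: bool, includes_self: bool) -> float:
    cost = num_people * 1.0   # each person harmed adds cost
    if breaking_law:
        cost *= 0.8           # discount harm to jaywalkers?! says who?
    if includes_self:
        cost += 0.5           # pricing the driver's own life... somehow
    return cost

options = {
    "hit the group of jaywalkers": crash_cost(3, breaking_law=True, includes_self=False),
    "hit the law-abiding pedestrian": crash_cost(1, breaking_law=False, includes_self=False),
    "swerve into the wall": crash_cost(1, breaking_law=False, includes_self=True),
}

# The machine's "best" choice depends entirely on the weights chosen above.
for choice, cost in sorted(options.items(), key=lambda kv: kv[1]):
    print(f"{cost:.1f}  {choice}")
```

Tweak a single weight and the machine’s “right answer” flips, which is the whole point: these values are human choices, not mathematical facts.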

Sustainability

Climate change is one of the most urgent crises facing our world, so it’s essential that we evaluate how every new technology can help or hurt the fight against it. Thankfully, most autonomous vehicles are designed to be entirely electric, which means they won’t rely on fossil fuels and will produce lower carbon emissions. That said, energy consumption will still remain high, since the sensors and computers in the car demand a lot of power.

Equity

Ensuring transportation equity, or equitable access to reliable and affordable transportation, is an often-overlooked aspect of autonomous vehicles. The poorest 20% of Americans spend over 40% of their income on transportation; autonomous vehicles could reduce these costs by providing cheaper, more efficient public transport. And elderly people or people with limited mobility who are unable to drive themselves could use autonomous vehicles to get around. So self-driving cars clearly have the potential to help marginalized communities. The problem is that we’ll only see these kinds of benefits if these communities are represented among AV technologists and policymakers. If only the most privileged members of society are creating and using self-driving cars, then the needs of disadvantaged groups will likely be neglected. We’re already seeing this start to happen: current object detection systems are significantly worse at detecting pedestrians with darker skin tones, which means today’s self-driving cars are more likely to hit Black pedestrians.
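How do researchers catch a bias like that? One standard move is to break a detector’s performance down by demographic group instead of reporting a single overall accuracy number. Here’s a minimal sketch of that kind of audit; the data is fabricated purely to show the computation, not drawn from any real study.

```python
# Minimal per-group recall audit for a pedestrian detector.
# The records below are fabricated purely to illustrate the computation.

from collections import defaultdict

# Each record: (skin_tone_group, was_pedestrian_detected)
results = [
    ("lighter", True), ("lighter", True), ("lighter", True), ("lighter", False),
    ("darker", True), ("darker", False), ("darker", False), ("darker", True),
]

detected = defaultdict(int)
total = defaultdict(int)
for group, was_detected in results:
    total[group] += 1
    detected[group] += was_detected  # True counts as 1

for group in total:
    recall = detected[group] / total[group]
    print(f"{group}: detected {recall:.0%} of pedestrians")

# A large gap between groups is a red flag, even if overall recall looks fine.
```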

Economy

Autonomous vehicles have the potential to revolutionize our global economy by cutting down delivery costs and oil prices. This will be extremely beneficial to industries like mining and farming where goods need to be transported, especially since they’ll no longer need to pay human drivers. AVs will also promote lots of growth for automotive companies and encourage advancement in AI/ML research. But they’ll also come with major economic downsides. As we start to adopt self-driving cars, an estimated 300,000 driving jobs will be lost every year in the U.S. alone. The increase in demand for ML and mechanical engineers will come nowhere near making up for the jobs lost, which means much higher unemployment rates. A disproportionate number of minimum-wage drivers are people of color, so the job displacement will only further amplify racial disparities in income and living conditions.

Safety

There are some unique security challenges presented by AVs. Imagine, for example, what would happen if the president’s car got hacked and driven off the road 😮. But overall, self-driving cars might actually be safer than human-driven vehicles. Let’s do the math: over 40,000 people die in the U.S. every year due to car accidents, and at least 94% of car accidents are due to human error. So roughly 37,600 lives (94% of 40,000) could be saved every year in just the U.S. by eliminating human error, which is exactly what a fully autonomous car is supposed to do.
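If you want to sanity-check that arithmetic, here it is as a three-line calculation using the same figures cited above:

```python
annual_us_crash_deaths = 40_000  # approximate annual U.S. traffic deaths
human_error_share = 0.94         # share of crashes attributed to human error

lives_potentially_saved = round(annual_us_crash_deaths * human_error_share)
print(f"{lives_potentially_saved:,} lives per year")  # 37,600 lives per year
```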

Conclusion + Next Steps

Ok, that was a lot. Where do you go from here?

If you want to help advance the development of AVs, we’d suggest building a strong technical foundation by:

  1. Focusing on your math classes (a solid understanding of calculus and linear algebra is key to understanding machine learning)
  2. Taking a Python course (there are some awesome beginner-friendly, totally free ones online like Codecademy that will give you a leg up when you start building your own machine learning models)
  3. (Get ready for a shameless plug 🔌) checking out ACM’s own resources, like our high school machine learning course and our recorded AI workshops!

To get more involved on the ethical side, you should:

  1. Spend time analyzing different technologies when you read about them — always ask yourself if anyone is being treated unfairly or left behind, and come up with creative solutions to problems that you see around you.
  2. (Shameless plug pt. 2) don’t forget to look over ACM’s resources — read our tech ethics blog and watch our recorded tech ethics workshops!

Clearly we have a lot of ethical questions to answer, societal impacts to consider, and technological advancements to make before we’re completely ready for autonomous vehicles. But if you’re interested in being part of the progress, the best time to start is right now.

This article was adapted from ACM’s AI Ethics Series: “Autonomous Vehicles” by Aman Oberoi and Nisha McNealis.
