Can Cities Trust Autonomous Cars?

By Sunil Paul

Transportation Alternatives
Vision Zero Cities Journal
Oct 23, 2018


Imagine you are driving down a city street and a child chases a ball in front of your car. You can’t stop in time. Do you swerve into the opposite lane of traffic, and into the path of an oncoming truck?

This and other variations of the “Trolley Problem,” a 1967 thought experiment designed by British philosopher Philippa Foot, are on the minds of technologists, ethicists and policymakers as autonomous vehicles begin traversing streets across the world. Like a human driver, autonomous software will have to make decisions in a split second, including ethical decisions like the one from the experiment.

At least one automaker, Daimler, the maker of Mercedes-Benz, says its priority will be the safety of the passenger — as everyone in an autonomous car is a passenger. That means a future Mercedes-Benz vehicle would kill a child rather than risk injury to its occupants. Why would an automaker create a system that seems so morally reprehensible? First, it is simpler. As cold as this sounds, engineering a system that takes into account all the possible scenarios adds complexity to an already complex project. Second, incentives are aligned for that outcome. Mercedes-Benz sells cars to car owners, not to pedestrians, children, or policymakers.

Tech Forward and Policy Back

Deciding how to steer our automated future is complicated and will require the thinking of technologists and policymakers alike, as well as a recognition of two disparate worldviews. A technologist’s instinct is to push forward as fast as possible, while regulators’ instincts are to slow things down. As a Silicon Valley entrepreneur, I understand the techno-optimist view of the world. As a former analyst at the Congressional Office of Technology Assessment, I also understand policymakers’ caution regarding unintended consequences. I learned firsthand how to walk the line of this tension by helping pass the first peer-to-peer carsharing law and then inventing ridesharing as co-founder and CEO of Sidecar.

Today, technology and car companies working on autonomous vehicles describe their design approach as being for vehicles that are law-abiding, ever-watchful, and “as paranoid as possible.” The software is programmed to be almost completely deferential to pedestrians. Most of the crashes of Google’s autonomous cars, for example, were rear-end collisions by human drivers who were surprised by the “paranoid” behavior of the autonomous car.

Yet decisions like that of Daimler, and high-profile crashes, like the Tesla car in auto-pilot mode that crashed into a semi-trailer last year, killing its passenger, might cause a rush to regulate. Some states, like California, have taken a more aggressive regulatory route. Others, like Florida, Arizona, and Nevada, are promoting their states for autonomous systems and taking action to prevent their cities from independently regulating autonomous vehicles. But a majority of states have simply not taken any action.

The Big Picture

Autonomous cars and trucks could bring cities closer to Vision Zero than ever before. At this stage of the development of the technology, focusing on crashes is akin to concerns that preceded the introduction of airbags and seat belts. There was real worry that these life-saving technologies would harm children or spoil aesthetics long before they were widely tested, introduced or regulated. The bigger picture, however, is the potential for truly safe city streets for passengers, pedestrians, and cyclists. Policymakers could accelerate this future.

Imagine sections of cities where only autonomous vehicles are allowed. For the first time since the advent of the automobile, pedestrians and cyclists in those zones could be confident that vehicles would obey the law and yield to them. Jaywalking laws, which were created at the behest of automotive clubs and automakers, could be revoked in an autonomous vehicle zone. Some analysts have even used game theory to predict that pedestrians could become so confident interacting with autonomous cars that they will more aggressively assert their right to walk, making it harder for cars to move about a city.

Designing for Paranoia

We can predict that one day, autonomous vehicles will need zone-based regulation because of the nature of how autonomous systems are being developed. Full autonomy that can replace a human driver, known as Level V, is many years away. A lower level of autonomy already in test deployment, Level IV, can drive without a driver, but only in certain situations. As a result, a ridesharing or delivery service might use autonomous Level IV vehicles in certain areas, like low-speed city streets or standardized high-speed highways. A human driver would still be required for other zones or more complex driving.

The liability issues with autonomy are likely to keep autonomous vehicles paranoid and law-abiding. Volvo announced in 2015 that it would accept liability for the design of future autonomous systems, comparing them to brakes and other safety features. As lawsuits work through courts, automakers will likely shoulder at least some, maybe all, of the damages from mistakes made by their systems. Considering that auto liability, at about $200 billion per year in the U.S., is about the same as worldwide revenue for top automakers, autonomous vehicle designers will be strongly motivated to keep their systems paranoid.

When these court cases are litigated, regulators evaluating autonomous systems will face a challenge unlike today’s automotive software. The machine learning techniques used to program driverless cars, like neural networks and statistical systems, are effective because they can ingest large sets of data covering possible scenarios that a car might encounter. But this training process makes it harder to predict how an autonomous vehicle will behave in the future, because not every scenario can be trained, and it can be impossible to extract exactly why a machine learning system performed in a particular way.

Traditional software, as well as certain types of machine learning, like rule-based systems, behave in a predictable way. This doesn’t mean there are no bugs in that software, but that there are predictable ways to find flaws and fix underlying problems. When new machine learning systems have bugs, it is difficult to find them and then fix them. Even worse, it is difficult to confirm that the bug has been fixed.

Research is underway to enable neural networks and statistical systems to explain how they behave. These tools will be important for the effective regulation of autonomous vehicles. There is a role for policymakers to encourage this future with funding for research and development, incentive prizes, and collaboration with researchers.

Policy for Tomorrow

Today, regulators are reevaluating rules and laws designed for human drivers with the knowledge that autonomous vehicles will be programmed to follow the letter of the law. Policymakers control a very powerful form of “code” — the laws and regulations that we already use to weigh ethical and value considerations. Neither an autonomous vehicle nor its creator should be asked to weigh the Trolley Problem, or any other complex ethical trade-offs. We already have institutions like courts, legislatures, and regulators to arbitrate justice and weigh decisions of right and wrong.

In the example of the child chasing the ball, it should not be up to an automaker to decide whether or not it is appropriate to cross a double yellow line to save a life. Automakers will need laws and policies that can parse these complex situations. If autonomous systems fail to keep pedestrians safe while following the laws we have today, regulators can change those laws, and software makers will have to comply.

Autonomous vehicle technology will let us reimagine our world, especially our cities and suburbs. Since World War II, we have designed around the mechanical constraints of the automobile. Now we are designing with the constraint of software, not engines, brakes, and steering wheels. As people who seek Vision Zero in our cities, we will have to bolster the courts, regulators, and legislatures that will allow us to trust autonomous vehicle systems and guide driverless technology to be what we — and not just what the owners of autonomous cars — want.

[This article first appeared in Transportation Alternatives’ Vision Zero Cities Journal in 2017.]

Sunil Paul is an entrepreneur and investor who loves innovation and making ideas a reality. He co-founded and ran Sidecar, which invented ridesharing. He has founded several successful companies, including the early internet company FreeLoader and anti-spam leader Brightmail. He has also been active in policy, starting as an analyst at the Congressional Office of Technology Assessment. He helped pass the first P2P carsharing law and helped shape the first ridesharing regulations. He has also led efforts to understand and promote solutions to climate change.
