Driverless Cars Face Ethical Issues

Avery Pierson
Writing for the Future: AI
4 min read · Aug 2, 2018

Pictured: Google’s self-driving car, Waymo. Photo courtesy of The Verge.

Elaine Herzberg was simply walking her bike across the street when a car struck her. But this wasn’t an ordinary car.

The car was one of Uber’s self-driving vehicles. This tragic incident, which happened on March 18 in Tempe, Arizona, raises many ethical questions about self-driving cars, such as how the cars should handle everyday situations like crosswalks.

However, because companies are working to solve these issues, the cars still have the potential to one day be used by everyone.

Self-driving cars use artificial intelligence software to learn how to drive on their own. The goal is to have cars on the road that can drive with no human interaction; this is what tech companies such as Google and Uber are working every day to achieve.

“The Uber crash demonstrates that the mundane, everyday situations at every crosswalk, turn and intersection present much harder and broader ethical quandaries,” said Johannes Himmelreich, an ethics fellow at Stanford University.

Navigating a crosswalk, though it is something humans manage every day, is very difficult for driverless cars.

“Even if visibility at the crosswalk is limited and it is sometimes hard to tell whether a nearby pedestrian actually wants to cross the street, drivers cope with this every day,” said Himmelreich.

When a person crosses at a crosswalk, they will often make eye contact with the driver to signal that they are about to cross and that the driver should stop. With driverless cars, there is no driver to make eye contact with, which makes it much more difficult for a driverless car to safely handle a crosswalk.

Another dilemma is the balance between safety and mobility. For example, according to Himmelreich, once a self-driving car travels faster than average walking speed, it cannot be sure of stopping in time if a child suddenly runs into the middle of the street.

This is a real dilemma: walking pace is, of course, far too slow to be practical, but the car needs to be able to stop if an incident, like a child running into the street, occurs.
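
A rough back-of-the-envelope calculation shows how sharply speed trades off against stopping distance. The figures below are illustrative assumptions (a half-second sensing-and-braking delay and hard braking on dry pavement), not numbers from Himmelreich or the article:

```latex
d_{\text{stop}} = v\,t_r + \frac{v^2}{2a}
```

Here v is the car’s speed, t_r is the delay before braking begins, and a is the braking deceleration. At about 30 mph (v ≈ 13.4 m/s), with t_r = 0.5 s and a = 7 m/s², the car covers roughly 19.5 meters before it can stop; at walking speed (v ≈ 1.4 m/s) it needs less than a meter. Every speed in between is a different point on the safety-mobility tradeoff.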

There are also ethical issues that are not mundane, issues that humans do not deal with every day. For example, if a child runs in front of a self-driving car to chase a ball that has bounced into the street, should the car risk the lives of its passengers by swerving off to the side where there is a cliff, or should it continue straight ahead, risking the safety of the child?

“This scenario and many others pose moral and ethical dilemmas that carmakers, car buyers and regulators must address before vehicles should be given full autonomy,” according to a study published in Science.

Most participants in the study thought that the car should put the safety of pedestrians first: the car should swerve to avoid hitting the pedestrian, even if doing so puts its passengers’ lives at risk.

“The algorithms that control autonomous vehicles will need to embed moral principles guiding their decisions in situations of unavoidable harm,” according to the study’s researchers at the Massachusetts Institute of Technology, the University of Oregon, and France’s Toulouse School of Economics and National Center for Scientific Research.

However, many of the study’s participants said that when people buy a car, they want one that ensures their own safety, not someone else’s. If laws required cars to protect pedestrians first, they said, consumers would not trust the cars and therefore would not buy them.

“A shrinking market for driverless cars would slow their development despite research showing that autonomous vehicles could potentially reduce traffic, cut pollution and save thousands of lives each year — human error contributes to 90 percent of all traffic accidents,” according to Larry Greenemeier, the associate editor of technology for Scientific American.

Even though driverless cars have the potential to be safer and more convenient than human drivers, if people will not buy them, then development and production of self-driving cars will ultimately decline.

Some researchers say, however, that dilemmas like the child running after the ball, in which the car must either sacrifice one person for the good of a group or protect an individual at the group’s expense, do not reflect how the cars actually work.

“This question of ethics has become a popular topic with people who don’t work on the technology. AI does not have the same cognitive capabilities that we as humans have,” said Ragunathan Rajkumar, a professor of electrical and computer engineering in Carnegie Mellon University’s CyLab.

Instead of making decisions based on moral reasoning, the cars make decisions based on data, such as landscape, weather, road conditions, and other information gathered by their sensors. The car uses all of that data to decide what to do. Researchers say the problem that needs to be solved is making sure the car has enough data to avoid dangerous scenarios in the first place.
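
To make that concrete, here is a minimal sketch of the kind of purely data-driven rule such a car’s planner might apply. All of the names, thresholds, and numbers below are hypothetical illustrations, not Uber’s, Waymo’s, or any vendor’s actual code:

```python
from dataclasses import dataclass

@dataclass
class SensorSnapshot:
    """Illustrative slice of what a self-driving car's sensors report each cycle."""
    speed_mps: float            # current vehicle speed (m/s)
    obstacle_distance_m: float  # distance to nearest detected obstacle (m)
    road_friction: float        # 0..1 estimate; lower on wet or icy roads
    visibility_m: float         # how far the sensors can reliably see (m)

def stopping_distance(speed_mps: float, friction: float,
                      reaction_s: float = 0.5, max_decel: float = 7.0) -> float:
    """Distance the car travels before it can brake to a halt (simplified model)."""
    decel = max_decel * friction          # braking is weaker on slick roads
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel)

def plan_action(s: SensorSnapshot) -> str:
    """Choose an action from sensor data alone; no moral reasoning involved."""
    margin = stopping_distance(s.speed_mps, s.road_friction)
    if s.obstacle_distance_m < margin:
        return "emergency_brake"          # cannot stop in time otherwise
    if s.visibility_m < 2 * margin:
        return "slow_down"                # leave room for the unexpected
    return "proceed"

# Example: ~30 mph on dry pavement with a pedestrian detected 15 m ahead.
print(plan_action(SensorSnapshot(13.4, 15.0, 1.0, 60.0)))  # -> emergency_brake
```

Notice that nothing in the sketch weighs one life against another; it only compares distances. The researchers’ point is that the hard engineering problem is gathering enough reliable data to keep the car out of no-win situations, not teaching it trolley-problem ethics.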

Rajkumar is also concerned that the cars can be hacked. If someone is able to hack into a driverless car, they gain control of everything, putting the passengers, pedestrians, and surrounding cars in danger. “The bigger concern I have about autonomous vehicles is the ability to keep them protected from hackers who might want to take over their controls while someone is onboard,” Rajkumar added.

Currently, self-driving cars are not ready to be put on the roads because they are not yet as safe as human drivers. But tech companies are improving the cars all the time. So, hopefully, in the future you will be able to get into your car and take a nap while it drives you to your destination.
