Reading 12: Self-Driving Cars

The motivation for developing and building self-driving cars is largely focused on three things: convenience for the driver (who would no longer actually be driving), safety for those both inside and outside of the car, and potential economic benefits. Obviously, people would like to be able to turn on autopilot and get to their destination without having to navigate there themselves, but in addition, a road filled with robot-driven cars that can communicate with each other would, theoretically, be much safer than a road filled with human drivers, who are prone to error and road rage. Additionally, ride-sharing companies like Uber and Lyft, as well as companies that ship goods by semi truck, are heavily pushing for self-driving cars, as the technology would allow them to cut costs by no longer paying drivers’ wages. However, there are several arguments against self-driving cars. Opponents are skeptical about the safety, arguing that self-driving cars may not actually be safer than humans and may have trouble interacting with human drivers. Additionally, driverless cars could make people overly reliant on technology and complacent. Another argument against self-driving cars concerns insurance, liability, and accountability in the event of an accident; more specifically, it deals with the question of who is responsible when a self-driving car collides with something or someone. Personally, I believe that self-driving cars can be much safer than humans, but that may not be the case right now because the vast majority of cars today are human-driven. Self-driving cars would perform better in a more predictable environment, which means one where autonomous vehicles make up the majority of traffic.

The readings discuss the social dilemma of autonomous vehicles, which can be summarized by saying that people who are surveyed about self-driving cars would, in an unavoidable accident where human life will be lost, choose the utilitarian option, even if that means the driver and/or passengers are killed. However, these same people often answered that they would not want to own a self-driving car that, in certain situations, would not prioritize the lives of those inside it. Programmers, when addressing this dilemma, should emphasize that in situations where human life must be lost, everyone’s goal should be minimizing the loss, and so looking at these incidents through a utilitarian lens makes the most sense. An artificial intelligence should approach life-and-death situations in the same manner: evaluate all possible outcomes of the situation and choose the path that leads to the fewest deaths. This may result in the deaths of those in the vehicle, but they knowingly accepted that risk, along with every other person riding in a self-driving car. When looking at liability after an accident, the important thing to evaluate is whether or not the autonomous vehicle did its job and minimized the risk. If it did, then you cannot blame the company that designed the car, and instead must look into the circumstances of the accident to see which party was responsible for setting the events in motion. If, however, the self-driving car made a clearly wrong decision, then there should be an audit of why it malfunctioned, and the company that designed the car should be liable.
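
To make that utilitarian rule concrete, here is a minimal Python sketch of the selection logic described above. Everything in it is an assumption for illustration: the names (`Outcome`, `choose_maneuver`) and the fatality estimates are made up, and a real autonomous-driving system would of course involve far more than picking a minimum over a list.

```python
# Hypothetical sketch of the utilitarian rule discussed above: given a set
# of possible maneuvers, each with a predicted number of fatalities,
# choose the one that minimizes expected loss of life.
# All names and numbers here are illustrative, not a real API.

from dataclasses import dataclass


@dataclass
class Outcome:
    maneuver: str               # e.g. "brake", "swerve_left"
    expected_fatalities: float  # predicted deaths, occupants + bystanders


def choose_maneuver(outcomes: list[Outcome]) -> Outcome:
    """Pick the outcome with the fewest expected deaths (utilitarian rule)."""
    return min(outcomes, key=lambda o: o.expected_fatalities)


if __name__ == "__main__":
    options = [
        Outcome("brake", 1.0),         # hits obstacle; risks one occupant
        Outcome("swerve_left", 2.0),   # endangers two pedestrians
        Outcome("swerve_right", 0.2),  # low-risk escape path
    ]
    best = choose_maneuver(options)
    print(f"Chosen maneuver: {best.maneuver}")
```

Note that under this rule the occupants get no special weight, which is exactly the trade-off the surveyed buyers said they would not accept.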

Socially, I think people will mistrust self-driving cars for a while and the transition will be slow, but once trust starts to grow, self-driving cars will quickly become the majority of vehicles on the road. Because of this, I think it’s possible that new generations of drivers may never have to learn how to drive a car, and people will become much more reliant on technology. Economically, this will hugely benefit ride-sharing companies like Uber and Lyft, as well as companies that ship goods using ground transportation, as they will no longer need to pay drivers. However, it will also cost people their jobs, as positions such as semi truck driver will become obsolete with the new technology. Politically, self-driving cars will have a massive impact, as legislation will need to be written and evolved in order to properly regulate the design and use of self-driving cars. I think the federal government must establish rules and guidelines that auto companies need to follow when designing their self-driving cars, and the 15 benchmarks put in place by the Obama administration are a good start.

Personally, if given the opportunity, I would absolutely want a self-driving car in the future, so long as there has been a lengthy period of testing beforehand and a majority of the other cars on the road are self-driven. I wouldn’t use the self-driving technologies available today because I don’t trust the current state of the technology, but the convenience of being able to work during a commute, rather than having to constantly keep pace with traffic, is very attractive.
