Reading 12: Self-Driving Cars

Alejandro Rafael Ayala
Ayala Ethics Blog
Nov 20, 2018

The biggest motivations for building self-driving cars are that they would make the roads safer and would probably be incredibly convenient. In defense of autonomous vehicles, “one common rationale for autonomous vehicles is the massive increase in safety they could provide. More than 37,000 people were killed in car crashes in America in 2016. Since more than 94 percent of crashes are caused by driver error, replacing fallible humans with reliable machines seems like an obvious net benefit for society” (Bogost). An AV would be designed with specific tasks in mind, so its functionality shouldn’t be inhibited by human temptations and needs, like drunkenness and sleep deprivation, which fall outside its design. Granted, AVs could be susceptible to hacking, but that would probably be less likely than a drunk person getting behind the wheel.

The case against automating cars is that the cost of improving overall safety is people dying. “A hundred or 500 or a thousand people could lose their lives in accidents like we’ve seen in Arizona” (Silverman), and “deaths like Herzberg’s are the price of progress” (Silverman), according to innovators like those at Toyota. Sure, in the future, self-driving cars may be good enough to save many lives that would otherwise have been lost to human error, but the unfortunate cost is other human lives. Utilitarianism is controversial at best, yet it is the moral approach self-driving cars require to justify these decisions, especially as they pertain to innocent and law-abiding citizens’ lives. Overall, they probably would make our roads safer, but the ethics are where it gets complicated.

I think that Bogost’s articles made very good and interesting points against simply using the trolley problem as the standard for asking moral questions about self-driving cars. As he says, “it does a remarkably bad job addressing the moral conditions of robot cars, ships, or workers, the domains to which it is most popularly applied today” (Bogost). Programmers should approach the “social dilemma of autonomous vehicles” by considering what their own personal ethics are in real-life situations where potential victims aren’t all the same.

Honestly, I’m glad that I’m not going to be working on self-driving cars. I have no idea what my stance is on artificial intelligence in life-and-death situations because I can’t truthfully say that I could bring myself to program a car to tell it whom to hit should the need arise. The only thing that might help that decision is insight into each person’s virtue and moral compass within seconds, but I don’t think that would realistically be possible. Even if it were, I still don’t think I could bring myself to tell the car it should kill the “worse” person. A large part of this uncertainty is that I believe the designers are at least partly at fault whenever an accident happens. That said, I do think there are situations where the companies deploying the technology and government officials share the fault, like in the Arizona case, where better safety measures and community awareness might have helped prevent the accident.

If self-driving cars become common, they will take a huge number of jobs away from car-based professions like taxi drivers and truck drivers. Who knows whether those people will be able to find employment in another industry, especially if they are already relatively old and have less time to change careers?
Self-driving cars would probably save companies money over time too, since they wouldn’t have to pay as many drivers or the benefits those drivers are entitled to. Because self-driving cars are a relatively new domain, new laws will need to be created to regulate their development and use. As such, I personally think there should be a lot of government regulation of this emerging technology, especially since it has ventured into the domain of life and death.

Do I want a self-driving car now? No, I don’t think so. Don’t get me wrong: I really hate driving, and it’s nice to just be a passenger and sleep on a long trip. However, I just don’t trust autonomous vehicles with my own life yet. For now, I like to have a sense of control. Perhaps much later in the future, when the kinks are ironed out, I would trust them enough with my life and want one.