Your Car Should Kill You

Manny Tee
6 min readOct 29, 2017


Today, automakers and engineers aim to bring cars to the road that can routinely drive more safely than a human driver, without any human input. Through artificial intelligence and machine learning, along with an array of high-tech sensors and radar, these cars will map the world around them and drive on their own, providing a safe ride and delivering you to your destination as if somebody had been driving the entire time. Yet while these technologies are far safer than the average everyday driver, they will still have their imperfections. In split-second, unpredictable scenarios, such as a pedestrian suddenly stepping into the road, the vehicle will need to make a grave decision: does the car protect the public (for instance, steering away from a crowd crossing the street, in a way that may harm the car's occupants), or does it protect the occupants at the expense of those otherwise in harm's way? Put simply, autonomous vehicles, or rather the automakers responsible for programming them, should protect the public at the expense of the owner or occupants, even though consumers will likely be less willing to knowingly buy, own, and use a vehicle programmed this way.

Mercedes-Benz Self-Driving Concept Car

Christoph von Hugo, Mercedes-Benz's manager of driver assistance systems and active safety, has stated that the goal will be to save the passengers every time: "If you know you can save at least one person, at least save that one. Save the one in the car." To most ears, that comes off as shocking. To somebody hoping to purchase an autonomous vehicle it may sound ideal, but to everyone else, for whom these cars remain a distant prospect, it is astonishing, and a statement like this would make most people want to stay as far away from the roads of the future as possible. The obvious objection is simple: "I did not buy an autonomous car; I am simply walking down the street. Who has the right to program that I am to be hit by somebody else's car?" For the industry, though, it is trickier than that. If automakers program these cars to knowingly sacrifice occupants to protect the public, studies suggest that hardly anybody would buy them, and we would be left just as unsafe as we are on today's roads, where roughly 1.3 million people die in automobile accidents every year. Because the technology is so new, few automakers have even released details of their first autonomous cars. So far, Hugo has been the only automaker representative to actually speak out on the matter, and Mercedes has since dismissed his comments as his own, not representative of the company's position.

In Danielle Muoio's March 2017 Business Insider article on the ethical dilemma of self-driving cars, MIT professor Iyad Rahwan argues that the problem will never be solved by automakers declaring that they will favor one group over another; instead, they will focus on maximizing safety and minimizing the life-threatening situations these debates present in the first place. That stance seems to match automakers' behavior: most remain silent on the topic, saying only that they will release autonomous vehicles and maximize safety along the way.

Toyota Self-Driving Concept Car

While Hugo argues this debate is easily answered by protecting at least the individuals inside the car first and the rest of the public second, researchers Leon Sütfeld, Gordon Pipa, Peter König, and Richard Gast of the University of Osnabrück are studying how humans actually react in specific situations. They expect to be able to determine the common human reaction to a given set of circumstances, and they believe automakers should program cars to do what a human driver would do in the same situation. Marlene Cimons has covered these scientists' findings in an article for Popular Science, noting that even though such a study could help determine what sort of programming consumers would or would not be comfortable with, regulation is likely to run counter to consumer comfort more often than not. Germany's Federal Ministry of Transport has released a report laying out guidelines for what is and is not acceptable in the programming of self-driving cars, and it complicates things well beyond the researchers' findings. Where the Osnabrück results showed that the public favors sparing children, the ministry states that age is not to be taken into consideration: a child who chases a ball into the street bears the blame and should be struck rather than an uninvolved adult standing at the side of the road. Programming cars to do what humans would typically do might lend more justification to a self-driving car's actions, and more comfort to the consumers wishing to buy them, but the whole point of autonomous vehicles is to remove the human element from driving and let a near-perfect machine determine the actions.

While this argument could go on forever, what would truly be best for society, and make for a safer automotive world, is for engineers to work first on preventing these unforeseen accidents and casualties altogether, and beyond that, to program cars to spare the greater number of lives at risk, with no other factors in play. As Edd Gent writes in his article on SingularityHub, robotic cars are not human beings and should not be forced to make human decisions; they should analyze data and use the number-crunching that computers are built for. Using its sensors and computers, the vehicle would estimate the likely casualties of one maneuver versus another and choose the option with the highest probability of avoiding the most injuries and deaths. This may hurt sales, since the goal would be to spare the masses, and the masses will usually not be the people inside the vehicle, but it would be a great benefit to society as a whole: it introduces life-saving technology while serving the greater good, even at the risk of injuring those inside the vehicle itself.
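To make the idea concrete, the decision rule described above can be sketched as a simple minimization over candidate maneuvers. This is only an illustration, not anything an automaker has published: the class names, maneuvers, and probability figures are all invented for the example, and a real system would estimate these risks from sensor data rather than hard-code them.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    """A candidate evasive action with estimated outcomes."""
    name: str
    # Estimated probability that each person involved is seriously
    # harmed if this maneuver is chosen: one entry per person,
    # occupants and bystanders alike, with no weighting by age or role.
    harm_probabilities: list

    def expected_casualties(self) -> float:
        # Expected number of people harmed is the sum of per-person risks.
        return sum(self.harm_probabilities)

def choose_maneuver(options):
    """Pick the maneuver that minimizes expected casualties overall."""
    return min(options, key=lambda m: m.expected_casualties())

# Illustrative numbers only: braking straight endangers three
# pedestrians, while swerving endangers the car's two occupants.
options = [
    Maneuver("brake_straight", [0.9, 0.9, 0.9]),  # 2.7 expected casualties
    Maneuver("swerve_left", [0.4, 0.4]),          # 0.8 expected casualties
]

best = choose_maneuver(options)
print(best.name)  # swerve_left
```

Because the rule counts every person equally, this sketch sacrifices the occupants here, which is exactly the trade-off the paragraph above argues automakers should accept.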

Of course it makes sense that automakers are wary of how this programming and design will affect vehicle sales. But protecting as much of the public as possible is the right thing to do, and for automakers, doing the right thing should take priority over selling cars. If every automaker followed suit, the technology would very likely see wider adoption, and the sales question could fade away. More than anything, automakers should make sure that buyers know exactly what they are getting into ahead of time, so that each owner willingly accepts the risk of putting their own life on the line for the greater good.


Currently majoring in Business Management, planning to study and pursue a career in Automotive Engineering, and writing about a changing world for cars and drivers.