What do we mean by safe? A case in Connected and Autonomous Vehicles

Marvin Mananghaya
Civic Analytics 2018
2 min read · Oct 26, 2018

One of the greatest questions surrounding Connected and Autonomous Vehicles (CAVs) is: are these vehicles safe? It is no surprise that incidents such as the tragic fatality in March 2018 (Wakabayashi, 2018) have curbed, to some extent, the exuberance surrounding these vehicles and intensified demands for safer CAVs. In a recent symposium, Shladover (2018) discussed the challenges of deploying fully automated vehicles, pointing in particular to the enormous time and effort that quality assurance demands, and to the external threats that malicious entities can pose, such as interfering with a vehicle’s sensors.

But our understanding of acceptable safety may also be shaped by our understanding of what counts as an acceptable choice under societal norms. MIT Media Lab researchers conducted an online survey about the outcomes people consider acceptable should certain vehicular incidents take place, and found that different societies judge acceptable risk differently (Hao, 2018). This begs the question: as far as safety is concerned, what do we mean by it if the definitions surrounding the issue are so broad? As a society we have adopted the automobile, and we encounter incidents on a daily basis. Yet we did not stop using cars; we accepted an invisible social threshold that says we want to keep using them while continuing to improve their safety. Still, where CAVs are concerned, unlike humans, who are able to define the situational acceptability of events, how do we define the parameters of what is acceptable for an AI to do?

It’s no secret that as a society we have developed guidelines and rules concerning what we regard as the gold standard. Technology is no different; we have defined such standards countless times as we resolved matters in order to keep making progress. But when ethics and morality come into play, things naturally become hazy, yet they are something we must resolve if we pursue our current path. At the very least, like Isaac Asimov’s Three Laws of Robotics, shouldn’t we also establish laws that govern CAVs? And should we have geographically based standards governing the calculus of a CAV, or at least a universal one to establish a bare minimum?

References

Hao, K. (2018, October 25). Should a self-driving car kill the baby or the grandma? Depends on where you’re from. Retrieved October 26, 2018, from https://www.technologyreview.com/s/612341/a-global-ethics-study-aims-to-help-ai-solve-the-self-driving-trolley-problem/

Shladover, S. (2018, October). Practical Challenges to Deploying Highly Automated Vehicles. 6th NYC Symposium on Connected and Autonomous Vehicles, New York, NY.

Wakabayashi, D. (2018, March 19). Self-Driving Uber Car Kills Pedestrian in Arizona, Where Robots Roam. Retrieved October 26, 2018, from https://www.nytimes.com/2018/03/19/technology/uber-driverless-fatality.html

Three Laws of Robotics. (2018, October 20). Retrieved October 26, 2018, from https://en.wikipedia.org/wiki/Three_Laws_of_Robotics
