Jailbreaking Morality: The Philosophy of Self-Driving Cars

Nitesh Dhanjani
16 min read · Jul 3, 2018

Today, the phrase “self-driving cars” evokes a future in which cars drive themselves. In that future, the same phrase may come to describe the cars of yesteryear that we had to drive ourselves.

Autonomous vehicles are computers we will put ourselves inside of, and we will depend on them to make our lives safer. These vehicles are crafted by engineers, physicists, and mathematicians; indeed, it is to the accuracy of their work that we will entrust our safety. Once that quest is achieved, non-autonomous vehicles are likely to be outlawed on public roadways, given how many fatal car accidents result from human error. Designated private areas will let manual car drivers carry on their hobby, likely to be perceived much like designated smoking rooms at airports: “those weird people huddled together engaged in risky endeavors.” We will look back and regard human car drivers with the same puzzlement we reserve for the elevator operators of the past.

Figure 1: Tesla will not allow its autonomous driving functionality on competing ride-share networks

Ride-share apps like Uber and Lyft will swiftly embrace self-driving cars. This will in turn lower the cost of rides to the point where the efficiency of hailing an autonomous car will lead fewer people to purchase their own vehicles. Tesla, however, has a competing business model in which the car switches into taxi mode to make money for the owner while she is busy at work (Figure 1). Either way, a plot twist in the concept of sole car ownership is upon us.

I have written about software and architectural vulnerabilities in car systems and networks in Chapter 6: Connected Car Security Analysis — From Gas to Fully Electric of my book Abusing the Internet of Things: Blackouts, Freakouts, and Stakeouts. These types of security vulnerabilities pose a serious risk, and we must strive for further improvement in this area. The scope of this article, however, is to focus on risks that come to light in the realm of cross-disciplinary study: upcoming threat vectors rooted in an understanding of the design of these vehicles, rather than the application of well-known threat vectors to autonomous car design.

Indeed, the secure design of autonomous vehicle software calls for polymathic thinking: a cross-disciplinary approach that not only invokes the romance of seeking out new knowledge, but also applies a holistic security framework that anticipates new attack vectors going well beyond traditional security vectors as they may apply to autonomous software.

Polymathic thinking calls upon designers to bring together philosophy, economics, law, and socio-economic concerns, so that we can align these areas with the concerns of security and safety. As designers and citizens, we need cross-disciplinary conversations to spark real efficiency and safety gains from autonomous vehicles. This article series is an attempt to ignite that spark, beginning with the issue of morality and how it will relate to self-driving cars.

The Trolley Problem

Airline pilots can be faced with emergency situations that require landing at the nearest airport. Should returning to the nearest airport prove infeasible, alternative landing sites such as fields or rivers may be an option. Highways, albeit hazardous given power lines, oncoming traffic, and pedestrians, may still be an option for smaller planes. The 2-D nature of car driving, on the other hand, mostly lends itself to a split-second brake-or-swerve decision on the part of the driver when it comes to avoiding accidents. In many car accidents, drivers simply don't have enough time to survey the situation and make the most rational decision.

When it comes to conversations on avoiding accidents and saving lives, the classic Trolley Problem is oft cited.

Figure 2: The Trolley Problem

Wikipedia describes the problem succinctly:

There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person tied up on the side track. You have two options:

Do nothing, and the trolley kills the five people on the main track.

Pull the lever, diverting the trolley onto the side track where it will kill one person.

Which is the most ethical choice?

The utilitarian viewpoint deems it just to pull the lever because doing so minimizes the number of lives lost. A competing viewpoint holds that pulling the lever constitutes an intentional action leading to the death of one individual, while doing nothing does not actively contribute to the five deaths that would have happened anyway. The act of pulling a lever to save more lives makes some of us uncomfortable because it makes us active participants in a killing.

Many other variants of the Trolley Problem have been put forth as thought experiments, and they are useful for reasoning about the moral decisions that must be made by the developers who write self-driving software. Other issues beyond the trolley problem are also at play, such as a vehicle veering off a cliff and killing its passengers because of a bug in the software. Our quest for self-driving cars will get us to a world where fewer people die in car accidents, yet some people will still perish for reasons such as software bugs. Who then must be held responsible for accidents and deaths? The individual developer who wrote that specific piece of faulty code? The car company? Legal precedent is unlikely to allow commercial companies to offload legal repercussions onto the car owner, given that the owner has ceded autonomy to the self-driving capabilities.

Rodney Brooks of MIT dismisses the conversation on the Trolley Problem as it pertains to self-driving vehicles as “pure mental masturbation dressed up as moral philosophy.” In his essay Unexpected Consequences of Self Driving Cars, Brooks writes:

Here’s a question to ask yourself. How many times when you have been driving have you had to make a forced decision on which group of people to drive into and kill? You know, the five nuns or the single child? Or the ten robbers or the single little old lady? For every time that you have faced such decision, do you feel you made the right decision in the heat of the moment? Oh, you have never had to make that decision yourself? What about all your friends and relatives? Surely they have faced this issue?

And that is my point. This is a made up question that will have no practical impact on any automobile or person for the foreseeable future. Just as these questions never come up for human drivers they won’t come up for self driving cars. It is pure mental masturbation dressed up as moral philosophy. You can set up web sites and argue about it all you want. None of that will have any practical impact, nor lead to any practical regulations about what can or can not go into automobiles. The problem is both non existent and irrelevant.

The fallacy in Brooks’ argument is that he does not take into account the split-second decision-making that humans are incapable of in car accidents. The time our brains take to decide which direction to swerve and when to hit the brakes is simply too long. Autonomous vehicles, on the other hand, can categorize sensor data and make those decisions within milliseconds.

On March 18, 2018, an Uber autonomous test vehicle struck a pedestrian, who died from her injuries. The vehicle had one operator in the car and no passengers. The preliminary report from the National Transportation Safety Board (NTSB) states:

According to data obtained from the self-driving system, the system first registered radar and LIDAR observations of the pedestrian about 6 seconds before impact, when the vehicle was traveling at 43 mph. As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path. At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision. According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.

It is clear from the NTSB report that Uber's autonomous software classified the pedestrian more accurately as it approached: an “unknown object,” then “a vehicle,” and then a “bicycle,” which is accurate because the victim was crossing the road with her bicycle. The emergency braking system was disabled in this case, ultimately leading to the accident. The car did not even alert the operator (by design). It is not yet clear when the vehicle would have started braking (6 seconds prior versus 1.3 seconds) had automatic emergency braking been enabled. Nonetheless, had the system been enabled, the software would have had to make the call on when to apply the brakes, perhaps through a combination of manual tuning and machine learning.
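
To get a feel for what those timings imply, here is a back-of-the-envelope calculation of my own (not from the NTSB report): the 43 mph speed comes from the report, while the deceleration is an assumed figure for hard braking on dry pavement.

```python
# Back-of-the-envelope look at the Uber timeline.
MPH_TO_MS = 0.44704
v = 43 * MPH_TO_MS                        # 43 mph ~= 19.2 m/s (NTSB report)
decel = 7.0                               # assumed emergency deceleration, m/s^2

stopping_distance = v ** 2 / (2 * decel)  # ~26 m needed for a full stop
for t in (6.0, 1.3):                      # first detection vs. braking decision
    print(f"{t} s before impact: {v * t:.0f} m available, "
          f"{stopping_distance:.0f} m needed to stop")
```

Under these assumptions, braking at first detection (roughly 115 meters out) would have stopped the car with room to spare, while at 1.3 seconds (about 25 meters out) the car could at best have shed speed. That window is precisely where the software's judgment lives.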

Machine learning systems are able to classify objects in images with impressive accuracy: the average human error rate is 5.1%, while machine learning algorithms are able to classify images with an error rate of 2.251%. The self-driving Uber was probably using some combination of region-based convolutional neural networks (R-CNNs) to detect objects in near real time. It is unknown what classification or segmentation algorithms were employed in the case of the accident, and many more algorithms are in scope for a self-driving car than object classifiers. Yet it is evident that the hardware and software technology in self-driving cars surpasses the physics of human senses.
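
For a flavor of what region-based detection looks like in practice, here is a minimal sketch using a pretrained Faster R-CNN from the torchvision library. This is purely illustrative: it is not Uber's stack, and production perception systems fuse camera, radar, and LIDAR with far more specialized models.

```python
# Minimal object detection with a pretrained Faster R-CNN (torchvision).
# Illustrative only; "street_scene.jpg" is a hypothetical camera frame.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()  # inference mode

frame = to_tensor(Image.open("street_scene.jpg"))
with torch.no_grad():
    detections = model([frame])[0]  # dict of boxes, labels, scores

# Report confident detections; labels index the COCO category list,
# which includes classes such as "person" and "bicycle".
for box, label, score in zip(detections["boxes"], detections["labels"],
                             detections["scores"]):
    if score > 0.8:
        print(f"class {label.item()} at {[round(x) for x in box.tolist()]} "
              f"(score {score:.2f})")
```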

We need to bring the issue of machine decisioning to the forefront if we are going to make any headway toward making our autonomous vehicle future safe. Brooks' argument dismisses the need for such decisioning outright, yet we already have evidence that this is one of the more important issues we ought to solve in a meaningful manner. Brooks is right that humans in control of a car almost never have the ability to decide whom to drive into and kill, but his argument doesn't account for the technical abilities of autonomous car computers that will make it possible for software to make exactly these decisions.

Back to the topic of the Trolley Problem: engineers must account for decisions when a collision is unavoidable. These decisions will have to select from predictable outcomes, such as steering the vehicle to the left to minimize impact. They will also include situations that could save the lives of the car's passengers while endangering people outside the vehicle, such as pedestrians or the passengers of another vehicle. Should the car minimize the total loss of life, or should it prioritize the lives of its own passengers?

Figure 3: MIT’s Moral Machine

The Moral Machine project at MIT is an effort to illustrate the moral dilemmas we are likely to face and have to “program in.” Their website includes a list of interactive dilemmas relating to machine intelligence (Figure 3).

Imagine a case where the car computes that a collision is imminent and it has to swerve to the right or to the left. The car's sensors quickly recognize a cyclist on the right and another on the left, the difference being that the cyclist on the left is not wearing a helmet. Should the car be programmed to swerve left because the cyclist on the right is deemed “more responsible” for wearing a helmet (and who must conjure up this moral calculus)? Or should it pick a side at random? Autonomous cars will continuously observe the objects around them. What of the case where the car can scan the license plate of a nearby vehicle and classify drivers as good or bad based on collision history? Perhaps this information could be useful for navigating around rogue drivers with a demonstrated history of bad driving, but should the same information be leveraged to decide whom to collide with and kill should an unavoidable collision occur?
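
To make the discomfort concrete, here is a purely hypothetical sketch of what encoding such a rule might look like. Every name in it is invented for illustration; no vendor has published code like this.

```python
# Hypothetical collision-choice rule; all names invented for illustration.
import random
from dataclasses import dataclass

@dataclass
class Cyclist:
    side: str             # "left" or "right"
    wearing_helmet: bool  # inferred by the vision system

def choose_swerve_side(left: Cyclist, right: Cyclist) -> str:
    """Pick a side to swerve toward when a collision is unavoidable."""
    if left.wearing_helmet != right.wearing_helmet:
        # Option A: spare the "more responsible" helmeted rider by swerving
        # toward the unhelmeted one. The mere existence of this branch is
        # the moral calculus someone would have to author.
        return right.side if left.wearing_helmet else left.side
    # Option B: no distinguishing factor, so pick a side at random.
    return random.choice([left.side, right.side])
```

Written out this way, the question of who authors that branch, and on what authority, stops being abstract.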

Make It Utilitarian (But Not My Car)

On the topic of collision decisioning, does the general population of today prefer a utilitarian self-driving vehicle? Jean-François Bonnefon et al., in their paper The social dilemma of autonomous vehicles, offer the following analysis:

Autonomous Vehicles (AVs) should reduce traffic accidents, but they will sometimes have to choose between two evils-for example, running over pedestrians or sacrificing itself and its passenger to save them. Defining the algorithms that will help AVs make these moral decisions is a formidable challenge. We found that participants to six MTurk studies approved of utilitarian AVs (that sacrifice their passengers for the greater good), and would like others to buy them, but they would themselves prefer to ride in AVs that protect their passengers at all costs. They would disapprove of enforcing utilitarian AVs, and would be less willing to buy such a regulated AV. Accordingly, regulating for utilitarian algorithms may paradoxically increase casualties by postponing the adoption of a safer technology.

The findings are not surprising. It is straightforward to grasp the utilitarian viewpoint from an intellectual perspective, yet all bets are off when the situation includes ourselves and our loved ones. Still, the design of autonomous vehicles has spawned a moral challenge humankind has not faced before: we must pick a solution to this moral dilemma, design for it, and operationalize on unconscious hardware a system that will decide who is worthy of living.

In the absence of federal regulations, car companies may let the owner select among various decision policies, or manufacturers may offer the prioritization of passenger lives as part of a luxury upgrade package, skewing favor toward the population able to afford it. Figure 4 depicts a mockup of the Tesla iPhone app allowing the owner to toggle such a setting on or off.

Figure 4: Mockup of Tesla’s iPhone app depicting Utilitarian Mode as a setting

It is plausible to imagine federal regulations compelling a utilitarian mode to be permanently in effect. In such a world, car owners would have a high incentive to ‘jailbreak’ their vehicles, i.e. subvert the factory default software, so as to prioritize the protection of their own lives. This sort of jailbreaking can extend to protocols designed for cooperation, for example, two cars halting at a stop sign simultaneously. An industry-accepted protocol could propose a simple solution (in the case of 2 cars) where the cars engage in a digital coin toss and the winner goes first. If people were to jailbreak their car software to subvert this functionality and always go first, the situation could lead to confusion and perhaps collisions as other car owners circumvent the protocol in the same way.
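
For the curious, a fair digital coin toss between two mutually distrusting cars could be built as a commit-reveal exchange. The sketch below is a minimal illustration of the idea, not a real vehicle-to-vehicle protocol, and all names are my own.

```python
# Commit-reveal coin toss between two cars at a stop sign (illustrative).
import hashlib
import secrets

def commit(bit: int) -> tuple[bytes, bytes]:
    """Return (commitment, nonce) binding a car to its random bit."""
    nonce = secrets.token_bytes(16)
    return hashlib.sha256(nonce + bytes([bit])).digest(), nonce

def verify(commitment: bytes, nonce: bytes, bit: int) -> bool:
    return hashlib.sha256(nonce + bytes([bit])).digest() == commitment

# Phase 1: each car picks a random bit and exchanges only the commitment,
# so neither can change its bit after seeing the other's.
bit_a, bit_b = secrets.randbelow(2), secrets.randbelow(2)
commit_a, nonce_a = commit(bit_a)
commit_b, nonce_b = commit(bit_b)

# Phase 2: both reveal; each side checks the other's reveal.
assert verify(commit_a, nonce_a, bit_a) and verify(commit_b, nonce_b, bit_b)

# The XOR of the bits is fair as long as at least one party is honest.
print("car A goes first" if bit_a ^ bit_b == 0 else "car B goes first")
```

A jailbroken car that simply refuses to honor the outcome breaks the guarantee for everyone at the intersection, which is exactly the failure mode described above.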

Lessons of Pegasus

The term ‘jailbreak’ was coined by communities that work to modify the iOS operating system that powers iPhones and iPads. Apple asserts tight control over its devices, which people in the jailbreak community wish to circumvent so that they can further customize their devices and add features not offered by Apple.

Figure 5: Apple’s warranty does not cover issues caused by jailbreaking

From Apple's vantage point, modification of core operating system code can lead to adverse effects, and it is unsustainable for Apple to keep track of, or be responsible for, changes made by third parties. Additionally, even though many jailbreak tweaks offer security features, they overtly trespass on fundamental security controls put in place by Apple, thereby putting the jailbroken device at additional risk.

Developing and executing a jailbreak requires known security vulnerabilities in iOS; these vulnerabilities allow unauthorized code to be executed by the iPhone or iPad. At Apple's World Wide Developer Conference in 2016, Ivan Krstic, head of Security Engineering and Architecture at Apple, estimated that jailbreakers and hackers generally have to find and exploit between 5 and 10 distinct vulnerabilities to fully defeat the platform's inherent security mechanisms. Furthermore, he pointed out that the going black-market rate for a remotely exploitable vulnerability chain (one that can lead to a jailbreak) was estimated to be around $1 million (Figure 6). Compared with the cost of similar vulnerabilities for other popular operating systems, this suggests that Apple's anti-jailbreak mechanisms and platform security features are harder to exploit.

Figure 6: Apple estimates its remotely exploitable vulnerabilities are worth $1 million

In 2016, the security community was alerted to a sophisticated piece of iOS spyware named Pegasus, found by Bill Marczak and engineers at the security company Lookout. An activist friend of Marczak located in the United Arab Emirates forwarded him a suspicious SMS message containing an Internet link that, when clicked, led to the immediate installation of spyware. Upon analysis, it became evident that the spyware leveraged three vulnerabilities in iOS to remotely exploit an iPhone and gain full control. Numerous attribution theories circulate around this incident, the most notable pointing to the NSO Group, an Israeli spyware company. Researchers found references to NSO in the Pegasus source code, along with evidence that, in addition to targeting Ahmed Mansoor in the UAE, the exploit was also aimed at Mexican journalist Rafael Cabrera, and quite possibly additional targets in Israel, Turkey, Thailand, Qatar, Kenya, Uzbekistan, Mozambique, Morocco, Yemen, Hungary, Saudi Arabia, Nigeria, and Bahrain.

Remotely exploitable vulnerabilities in iOS are sought after not only because iPhones and iPads enjoy a healthy market share, but also because such vulnerabilities are harder to find in Apple's products. Apple's iOS Security Guide emphasizes system security: utmost care is taken to ensure that only authorized code is executed by the devices and that various security mechanisms work in tandem to make remotely exploitable conditions difficult to achieve.

In my book Abusing the Internet of Things, I outline the nature of the Controller Area Network (CAN) architecture in cars, which in essence is a computer network where every physically connected computer is fully trusted. Electronic Control Units (ECUs) are the various computers in the car that relay sensor information and command other ECUs to take specific actions. Traditionally, attack vectors targeting such an architecture have required physical access to the car. With the prevalence of telematics employing cellular communications, which essentially puts modern cars on the Internet, the CAN architecture is no longer sufficient to provide reasonable security assurance: should an external hacker be able to break into the car by exploiting a flaw in the telematics software, she could then remotely control the rest of the car. Such a scenario can pose an exponential impact should the attacker choose to infect and command cars en masse. Elon Musk has publicly stated that such a fleet-wide hack is one of his concerns.
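
The trust problem is easy to see in code. Using the python-can library, any process with access to the bus can inject frames that other ECUs will act on; the interface name, arbitration ID, and payload below are placeholders, not any real vehicle's values.

```python
# Any node on a CAN bus can transmit frames that other ECUs implicitly trust.
# Placeholder values throughout; requires the python-can package and a
# SocketCAN interface (Linux).
import can

bus = can.interface.Bus(interface="socketcan", channel="can0")

# A CAN frame is just an arbitration ID plus up to 8 data bytes. There is
# no sender authentication, so a compromised telematics unit can spoof
# messages that appear to come from any ECU.
frame = can.Message(arbitration_id=0x123,
                    data=[0x01, 0x02, 0x03, 0x04],
                    is_extended_id=False)
bus.send(frame)
```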

As with iOS devices, remotely exploitable vulnerabilities can allow hackers not only to access and command the infected device, but also to jailbreak it and subvert its functionality. Circling back to our discussion of “programming in” moral rulesets per federal regulations: security vulnerabilities could likewise allow individuals to jailbreak their autonomous vehicles and bypass these controls.

Rumors of Apple building an autonomous vehicle have circulated in the media for a few years now. A case could be made, albeit speculatively, that Apple may have an advantage in successfully operationalizing an architecture that makes it difficult to bypass the security controls built into the product. In more tangible news, companies such as General Motors have appointed executives to oversee the secure design and architecture of their vehicles.

An argument can be made in favor of vehicle jailbreaking in humanitarian situations, for instance where journalists may be assigned vehicles that prohibit access to certain areas. These situations will have to be carefully weighed against the double-edged nature of implementing security mechanisms that are hard to circumvent.

Relentless Optimism

The prevalence of autonomous vehicles is going to bring moral dilemmas into our lives that have traditionally been confined to the province of academic contemplation. The transformative and disruptive nature of these technologies is bound to ignite legal discussions and precedents that may advance, or even temporarily slow down, the adoption of self-driving cars.

The problem of risk bias looms over our everyday misconceptions: we have a 1 in 11 million chance of being killed in an airplane crash, compared with a 1 in 5,000 chance of being killed in a car accident. The World Health Organization reports that more than 1.25 million people die each year as a result of road traffic crashes.

The compute power of self-driving cars will put us in a position to lower the death rate from vehicle collisions, yet we are bound to be faced with deaths from unavoidable collisions. In other words, fewer people will die, but they will die for reasons that are unfamiliar to our emotional faculties: software bugs, non-compliance due to circumvention of programmed moral controls, unfair moral controls and the lack of regulation, and many unforeseen reasons we have yet to uncover.

The status quo of 1.25 million global deaths due to road traffic crashes is not acceptable. Add to this number the suffering of the countless people injured in crashes, not to mention the hours spent commuting that people could instead spend doing constructive things and having meaningful conversations. It is clear that advancements in technology are the way to achieve improvements that will benefit us greatly, and while we may have misgivings on our way to success, the notion that we are moving toward betterment ought to fill us with unbounded and relentless optimism for the years ahead.
