Can Cheating Ethics Make Autonomous Vehicles Viable?

Jeff Heinzelman
Published in The Startup
7 min read · Aug 7, 2020

Effortless, automated, and personalized travel that is on demand and safe for the world has been the dream of many drivers since the inception of mass-produced vehicles. The current automotive ecosystem, made up of incumbents and disruptive entrants, is advancing technology to realize that dream but has been slow to coalesce offerings with broad artificial intelligence capabilities. For the last decade, pundits have cast the obstacles to autonomous driving as a perfunctory matter, proclaiming we will all soon be taking naps on the way to our destinations as our role in transit becomes that of a “permanent backseat driver.” As these predictions suffer inevitable delays, forecasts are revised and the AV remains a futuristic target. Before we can achieve a ubiquitous, driverless, accident-free wonderland, we must overcome the persistent issues holding back adoption. The primary obstacles to an autonomous nirvana: (1) enabling vehicles with acceptable decision-making capabilities and driving technologies, and (2) rationalizing the perennial issues surrounding the agency of artificial intelligence.

The theoretical nature of the trolley problem is one place to start when exploring AI operating in a human world, but is it enough?

A popular academic approach to exploring these challenges is the trolley problem, a thought experiment from ethics used to present a no-win scenario that AVs might face in practice. The experiment often arises in creator circles, and more recently in consumer circles, as a way to frame the ethics of autonomous vehicle design. As the promise of AV technology has become more realistic, the trolley problem and its variants have been called into service as a way to research the moral conundrums humans perceive in enabling AI as their driver. These methods are important because they draw on well-known sources of ethical theory and can be helpful in establishing new moral practices.

Practical experimentation with the trolley problem, while interesting, still isn’t enough to flesh out the most tangible issues facing autonomous vehicles.

Unfortunately, this approach can also limit productive discussion: no-win scenarios are just that, unwinnable and lacking a practical resolution. While the approach is important to constructive dialog, it can also incite uninitiated activists to boil an ocean of ethical issues instead of isolating the issues pertinent to mobility, limiting experimentation and iteration within AV.

Academic exercises like the trolley problem motivate us to question whether we should continue to limit ourselves exclusively to formal ethical models as ways to contemplate AV and solve the issues discussed above. In his book A Theory of Justice, John Rawls posits that moral questions like the one explored in the trolley problem place us behind a “veil of ignorance” that limits the consideration set for solving a moral problem. In the trolley problem, the decision maker has limited information about the potential victims affected by their choice. The handful of situational factors provided in a classic trolley problem may limit the stakeholders to a single person, the driver, leaving out the other actors in the scenario as well as any outsiders who might provide input if they were able to weigh in.

The Problem With The Trolley Problem

The trolley problem is a tool for evaluating the ethics of AVs. The hypothetical dilemma is a thought exercise that presents a set of mutually conflicting yet dependent conditions around an autonomous vehicle, creating a no-win scenario. There are no right answers per se, but the scenario can provoke rational and irrational responses from participants. In this exercise, respondents are encouraged to find a solution and will often posit out-of-the-box conditions that “break” the simulation.

A similar imaginary no-win exercise can be found in science fiction: the Kobayashi Maru. This problem is detailed in Star Trek lore as a training exercise and test of character for Starfleet officers. The simulation involves the rescue of a disabled Federation ship, the Kobayashi Maru, from a demilitarized area of space adjacent to a notorious enemy: the Klingons. The captain of this digital rescue ship has two choices: enter the neutral zone to attempt a rescue, triggering a treaty violation and guaranteeing deadly retaliation and interstellar war; or leave the shipwrecked crew to face certain death while avoiding war and guaranteeing the safety of the Starfleet crew. Captain James T. Kirk famously took the test three times and, in his last attempt, secretly reprogrammed the simulation to create a narrow window in which to save the disabled ship and its crew.

Kirk cheated: he changed the variables so that a winning scenario could be achieved. As part of the story, Kirk is even awarded a commendation for altering the conditions of the test, lauded for “original thinking.” When later criticized for never having faced a no-win situation, Kirk offers his philosophy: he doesn’t believe no-win scenarios are realistic, and a solution is always achievable. A counterargument is proffered by his friend Spock: the intent of the test is not to win, but to face the fear of failure and the possibility of a tragic outcome. Like the Kobayashi Maru, the trolley problem asserts the prospect of a tragic loss of life at the hands of an impassable dilemma. There are lessons to learn in contemplating the trolley problem, but is it a realistic method for determining societal readiness for autonomous vehicles?

Ethical Sandboxes

My father once told me the reason for the sandbox in our backyard was to give me a place to play that isolated me from the vegetable garden, apparently a favorite place for me to dig holes as a young child. The isolation concept behind a child’s sandbox is also used in software development, where a virtual environment isolates the execution of software or programs and allows for independent evaluation, monitoring, or testing. Sandboxing has also been used to refine business practices, typically leveraged to create a builder’s space for analyzing new processes and concepts. A conceptual sandbox can easily include all of the tools needed to conduct any conceivable analysis, raising the question: can we use moral sandboxes to test, fail, and learn our way to successful AV products?
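To make the software analogy concrete, here is a minimal sketch of the isolation idea in Python. It runs an arbitrary function in a separate process with a time budget, so a crash or a hang stays inside the sandbox instead of taking down the host. The function names and the timeout are illustrative assumptions of mine, not part of any particular AV toolchain.

```python
import multiprocessing as mp
from queue import Empty

def _worker(fn, args, result_queue):
    """Runs inside the child process; failures are reported, not raised."""
    try:
        result_queue.put(("ok", fn(*args)))
    except Exception as exc:
        result_queue.put(("error", repr(exc)))

def run_sandboxed(fn, args=(), timeout=2.0):
    """Execute fn in an isolated child process with a time budget."""
    result_queue = mp.Queue()
    proc = mp.Process(target=_worker, args=(fn, args, result_queue))
    proc.start()
    proc.join(timeout)
    if proc.is_alive():           # runaway code: kill the child, host unharmed
        proc.terminate()
        proc.join()
        return ("timeout", None)
    try:
        return result_queue.get(timeout=1.0)  # grace period for the pipe flush
    except Empty:
        return ("crashed", None)  # child died without reporting a result

def risky_maneuver(speed):
    # Stand-in for untrusted decision logic under test (hypothetical).
    if speed > 100:
        raise ValueError("model diverged")
    return speed * 0.5

if __name__ == "__main__":
    print(run_sandboxed(risky_maneuver, (60,)))    # ('ok', 30.0)
    print(run_sandboxed(risky_maneuver, (120,)))   # ('error', ...)
```

The same pattern scales up: a simulated driving scenario can be handed to a candidate decision policy, and any failure, however catastrophic inside the simulation, never escapes the child process.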

In a study conducted at Osnabrück University, Dr. Lasse Bergmann isolated several popular ethical dilemmas to explore public perception and provide a starting point for further discussion and experimentation with AVs and ethics. Dr. Bergmann posits, “Applied ethics is not solely a priori inquiry. Well-reasoned positions need to be developed and intuitions need to adapt to new circumstances.” When tested against the trolley problem, Dr. Bergmann’s results were germane to utilitarian and deontological theories and to established political norms. Testing with alternative datasets, however, which allowed more choices such as self-sacrifice, demographic data on potential victims, and even alternatives to killing anyone, elicited choices that were more conducive to the codification of moral mapping in AV programming.
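What might that codification look like? Below is a minimal, hypothetical sketch; the outcome fields, cost functions, and the “brake to shoulder” option are illustrative assumptions of mine, not data or code from Dr. Bergmann’s study. It shows how widening a dilemma beyond two fatal outcomes, exactly the kind of variable-changing Kirk would endorse, lets different ethical policies be scored side by side in a sandbox:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    label: str
    pedestrians_harmed: int
    occupants_harmed: int
    lawful: bool   # does the maneuver respect traffic law?

# A widened dilemma: not just "harm group A or group B" but also
# self-sacrifice and a harm-free alternative (all values illustrative).
OUTCOMES = [
    Outcome("stay_course", pedestrians_harmed=3, occupants_harmed=0, lawful=True),
    Outcome("swerve_into_barrier", pedestrians_harmed=0, occupants_harmed=1, lawful=False),
    Outcome("brake_to_shoulder", pedestrians_harmed=0, occupants_harmed=0, lawful=True),
]

def utilitarian_cost(o: Outcome) -> float:
    # Minimize total harm, regardless of who bears it or of legality.
    return float(o.pedestrians_harmed + o.occupants_harmed)

def deontological_cost(o: Outcome) -> float:
    # Rule-based: an unlawful maneuver is ruled out entirely.
    return float("inf") if not o.lawful else utilitarian_cost(o)

for policy in (utilitarian_cost, deontological_cost):
    choice = min(OUTCOMES, key=policy)
    print(f"{policy.__name__}: {choice.label}")
```

With the harm-free option present, both policies agree; remove it and they diverge, with the utilitarian policy choosing self-sacrifice and the rule-based one staying the course. That divergence is precisely the kind of thing a sandbox makes cheap to surface and iterate on.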

So, can we use a sandbox to isolate ethical dilemmas in a testing environment and cheat our way to a practical solution? A common theme emerges when considering the use of an ethical sandbox to advance AV: the approach can provide an efficient safe zone for experimenting with concepts that present as impassable due to the perceived scope of ethics related to AV. A Hackernoon article explores this concept and poses, “The biggest challenge the engineering world will face — or rather, is facing — is to incorporate morality and ethical values while both designing an engineered product as well as while engineering a product from scratch.” Clearly there are models of, and an appetite for, alternatives to strict ethical frameworks. Advocating for the creation of sandboxes to allow safe testing of muddy moral and ethical issues within AV is a start, but how would it work?

What’s in the Sandbox?

Ethical issues must be tested and solved to improve AV performance to a viable level. Technology can already execute driving features, and existing capabilities achieve partially driverless vehicles; what remains unsolved is our ability to coexist with AI and the enigmas it presents to our own agency. We are still an ocean away from AI being able to make serious autonomous decisions, much less drive a car. Ethical testing must change if we are to find a tangible AI solution that we can live with (pun intended).

Two effective approaches for entrepreneurs facing business challenges are zooming in and zooming out of the scope of our focus. Zooming out can help us increase our scope to observe the needs of a larger-than-anticipated market, while zooming in reduces focus to explore the unique needs of a niche market. As we think about AV, a byproduct of AI, we begin to zoom in somewhat to conceptualize a smaller landscape of mobility. Ultimately, we zoom out again to wrestle with the heady issues involved with AI, perhaps becoming lost in its scope. The conclusion offered here is that zooming in, playing with scenarios, and even breaking them has value in advancing tangible solutions alongside sharpening thought on macro issues. Sandboxing is a viable strategy for AI, AV, and many other solutions that seem out of reach. As long as we don’t allow the scope of a sandbox to keep us from zooming in and out, we can find the same “original thinking” that Captain Kirk used to beat his no-win scenario.

Jeff Heinzelman is the founder of MostlyWest, with 25+ years of experience in leadership, business process, customer experience, and product innovation. He has led teams in many sectors, relying on a personal philosophy of people, process, and technology to deliver innovative products. He is an advocate of customer-focused product management connected to data-driven results. He is also a husband and father of two boys, and lives in Austin, Texas, where he enjoys Tex-Mex, BBQ, and football. Not necessarily in that order.
