One of my Calgary law students is writing about the ethics of autonomous vehicles, which, these days, center on the Trolley Problem. As Lauren Davis wrote in an Atlantic article, the Trolley Problem is an iconic philosophical thought experiment about morality and ethics. It poses a hypothetical dilemma: a runaway streetcar is hurtling toward five unsuspecting workers. Do you pull a switch to divert the trolley onto another track, where only one man works alone? Or do you do nothing?
Philosophers have mixed opinions about the value of thought experiments like the Trolley Problem. Some find them useful hypotheticals to think through abstract questions because the particularities expose weaknesses and traps of generalities. Others find them silly games that fall short of explaining how we act or would act in real life (people often laugh when they first encounter the Trolley Problem).
What interests me, and prompts me to write this post, is whether AI might change the value of thought experiments like the Trolley Problem. Developing self-driving cars enables, and forces, us to actually make and encode the moral decisions we would choose to make in a hypothetical, rational realm. It forces us to contemplate possible scenarios, decide in advance what we think we would and should do, and then code systems that go on to execute those actions in the real world. This deferral between decision and execution raises the thought experiment from simulation to reality.
Reality is rife with consequences and liability. Tragically, to gauge the rectitude of our moral judgments, we need feedback on how people respond to an accident generated by an algorithm executing those judgments.