Giving Algorithms a Sense of Uncertainty Could Make Them More Ethical

Algorithms are best at pursuing a single mathematical objective — but humans often want multiple incompatible things

MIT Technology Review

Illustration (detail): Ms. Tech

By Karen Hao

Algorithms are increasingly being used to make ethical decisions. Perhaps the best example of this is a high-tech take on the ethical dilemma known as the trolley problem: if a self-driving car cannot stop itself from killing one of two pedestrians, how should the car’s control software choose who lives and who dies?

In reality, this conundrum isn’t a very realistic depiction of how self-driving cars behave. But many other systems that are already here or not far off will have to make all sorts of real ethical trade-offs. Assessment tools currently used in the criminal justice system must weigh risks to society against harms to individual defendants; autonomous weapons will need to weigh the lives of soldiers against those of civilians.

The problem is, algorithms were never designed to handle such tough choices. They are built to pursue a single mathematical goal, such as maximizing the number of soldiers’ lives saved or minimizing the number of civilian deaths. When you start dealing with multiple, often competing, objectives or try to account…
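To make the distinction concrete, here is a short Python sketch (not from the article; the candidate actions, numbers, and weights are invented for illustration). An optimizer that commits to one fixed trade-off between two competing objectives returns a single "best" answer; one that treats the trade-off weight as uncertain can instead surface every option that is defensible under some plausible weighting.

    # Illustrative sketch only: the actions, scores, and weights below are
    # invented for this example, not taken from the article.

    # Each candidate action scores on two competing objectives
    # (soldier lives saved, civilian deaths caused).
    actions = {
        "A": (10, 6),
        "B": (7, 1),
        "C": (2, 0),
    }

    def score(saved, civilian_deaths, w):
        # Collapse both objectives into one number: weight w rewards lives
        # saved, weight (1 - w) penalizes civilian deaths.
        return w * saved - (1 - w) * civilian_deaths

    # 1) The usual approach: commit to one weight, get one "optimal" answer.
    fixed_w = 0.5
    best_fixed = max(actions, key=lambda a: score(*actions[a], fixed_w))
    print(f"Fixed trade-off (w={fixed_w}): choose {best_fixed}")

    # 2) Treat the trade-off as uncertain: sweep plausible weights and keep
    # every action that is best under at least one of them, reporting a set
    # of defensible options rather than a single forced answer.
    plausible_weights = [i / 10 for i in range(1, 10)]
    defensible = {max(actions, key=lambda a: score(*actions[a], w))
                  for w in plausible_weights}
    print(f"Uncertain trade-off: defensible options are {sorted(defensible)}")

With these made-up numbers, the fixed weight silently discards two of the three options, while the uncertainty-aware sweep keeps all three on the table for a human to adjudicate.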
