Killer Robots and the Moral Dilemma of Automation (2 of 2)

How will we teach robots to understand our values? Maybe by reading them stories.

Jacob Ward
8 min read · Aug 31, 2017
A robot waiter in Chengdu, China (Photo: Reuters/Stringer)

The series “Guidance Systems” discusses technologies that seem to improve our lives by offering us new choices, while in fact shaping or removing our ability to decide things for ourselves.

The automation of fundamental human tasks — cooking, driving, killing — is well underway. The US military is aggressively pursuing automation in almost every part of its operations, from the Air Force’s Loyal Wingman program, which seeks to integrate human pilots with autonomous aircraft, to the Office of Naval Research’s Science of Autonomy program, which pursues, among other goals, reduced “manning and other communications requirements.”

The difficulty is that automating these tasks isn’t just a question of programming a set of instructions. It’s also a question of programming a set of values. It’s clear that with enough time and money we can do the former. It’s not clear that we know how to do the latter. And at the moment the United States is not a signatory to any sort of treaty governing the use of automated weapons.

That doesn’t mean the ethics of these systems aren’t being discussed. As early as 2008, an ONR-funded paper from Cal Poly about the…
