When It Comes to AI and Weapons, the Tech World Needs Philosophers

If a company says its technology doesn’t cause injury, how is that defined?

The Washington Post

Illustration: kargoman/Getty Images

By Ryan Jenkins

Silicon Valley continues to wrestle with the moral implications of its inventions, and it is often blindsided by the public reaction to them. Google was recently criticized for its work on “Project Maven,” a Pentagon effort to develop artificial intelligence for military drones that can distinguish between different objects captured in drone surveillance footage. The company could have foreseen that a potential end use of this technology would be fully autonomous weapons, the so-called “killer robots” that various scholars, AI pioneers and many of its own employees vocally oppose. Under pressure, including an admonition that the project runs afoul of its former corporate motto, “Don’t Be Evil,” Google said it wouldn’t renew the Project Maven contract when it expires next year.

To quell the controversy, Google last week announced a set of ethical guidelines meant to steer its development of AI. Among its principles: the company won’t “design or deploy AI” for “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.” That’s a…
