TLDR: How to regulate Artificial Intelligence

By Oren Etzioni (these highlights provided for you by Annotote)

Anthony Bardaro
Annotote TLDR
2 min read · Sep 20, 2017

--

It’s natural to ask whether we should develop A.I. at all. I believe the answer is yes … The problem is that if we [heavily regulate AI], then nations like China will overtake us. The A.I. horse has left the barn, and our best bet is to attempt to steer it. A.I. should not be weaponized, and any A.I. must have an impregnable “off switch.” Beyond that, we should regulate the tangible impact of A.I. systems.

I propose three rules for artificial intelligence systems that are inspired by, yet develop further, the “three laws of robotics” that the writer Isaac Asimov introduced in 1942:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm;
  2. a robot must obey the orders given it by human beings, except when such orders would conflict with the previous law; and
  3. a robot must protect its own existence as long as such protection does not conflict with the previous two laws.

an A.I. system must be subject to the full gamut of laws that apply to its human operator.

Simply put, ‘My A.I. did it’ should not excuse illegal behavior.

an A.I. system must clearly disclose that it is not human.

an A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information.

This summary is provided courtesy of Annotote, a network that’s the most frictionless transmission mechanism for your daily dose of knowledge. Have a minute? Get informed. All signal/no noise is only a click away: Try Annotote today!
