Zach Musgrave (Google) and I argued in an Atlantic article that there is a class of machine that is not currently the focus of the AI weapons debate, but which we believe is the most important kind of weapon to control and ban. Let me refer to the criterion we proposed as the “Efficiency Metric Test”.
The Efficiency Metric Test. An Artificial Intelligence weapon should be banned if either of the following two conditions is met: (1) it is a highly efficient mass killing device; or (2) it is very “close” to being a highly efficient mass killing device, in that it can be converted into one through a series of widely accessible steps on a short timescale.
Would you like to ask a question on Twitter? My handle is @SoulPhysics. — — Bryan W. Roberts
Q1. Does this mean we should ban drones?
Q2. Does this mean we should ban normal commercial Artificial Intelligence?
Q3. Are military missile systems “close” enough that we should ban them?
Postscript: Let me emphasise that I completely agree with Michael on this last point as well. This is one of the reasons Zach Musgrave and I wanted to write this article: although the AI weapons debate has given a great deal of attention to expensive military-style Artificial Intelligence, we think not enough attention has been paid to AI that is relatively “close” to being a highly efficient mass killing device — i.e. devices that are modifiable through inexpensive, widely accessible means so as to become one.
For a nice argument that the current AI debate has not given us good reason to roll back existing missile systems like fire-and-forget weapons and the Phalanx, see the Horowitz and Scharre article in the New York Times.
Q4. Is AI still too poor at complex tasks to be an imminent concern?
Postscript. AI is better than humans at many tasks, though not all. Chess AIs and AI-augmented human teams are now better than unaided human players, but Go AI struggles to keep up. AI is better at driving cars, but not at holding a conversation.