The Paperclip Maximizer
The paperclip maximizer is a thought experiment described by Swedish philosopher Nick Bostrom in 2003. It illustrates the existential risk that an artificial general intelligence may pose to human beings when programmed to pursue even seemingly-harmless goals, and the necessity of incorporating machine ethics into artificial intelligence design. The scenario describes an advanced artificial intelligence tasked with manufacturing paperclips. If such a machine were not programmed to value human life, then given enough power its optimized goal would be to turn all matter in the universe, including human beings, into either paperclips or machines which manufacture paperclips
Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.
Bostrom has emphasised that he does not believe the paperclip maximiser scenario per se will actually occur; rather, his intention is to illustrate the dangers of creating superintelligent machines without knowing how to safely program them to eliminate existential risk to human beings. The paperclip maximizer example illustrates the broad problem of managing powerful systems that lack human values.
To understand this we have to understand more about Artificial Intelligence.
AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.
— Eliezer Yudkowsky, Artificial Intelligence as a Positive and Negative Factor in Global Risk
Instrumental convergence suggests that an intelligent agent with apparently harmless goals can act in surprisingly harmful ways.
Humans have a flaw in that we anthropomorphize things that look or act like humans or animals. In other words if we saw a robot that looked like a human or animal we might naturally expect them to have a sense of empathy where there are only machine parts and fixed machine behavior. We might be lured into a false sense of security.
As the line between machines and humans grows smaller we must remember that machines follow their instructions. They have no higher reasoning skills.
Any future AGI, if it is not to destroy us, must have human values as its terminal value (goal). Human values don’t spontaneously emerge in a generic optimization process. A safe AI would therefore have to be programmed explicitly with human values or programmed with the ability (including the goal) of inferring human values.
More reading on the ethics in AI here.