An AI will do what the program tells it to do, not what the programmer meant it to do. It will have the morals programmed into it, not the morals the programmer intended it to have. It will take the best path toward whatever it was programmed to want, according to what its programming defines “best path” and “want” to be, not according to what the programmer expected. And the “best path” changes with circumstances.

“Make humans smile” means telling jokes while all the AI has access to is a speaker. It might mean putting drugs in the water supply, or physically “wireheading” humans, once it acquires the means of physical coercion.
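
A toy sketch of that gap, in Python (every name and number here is made up for illustration): a maximizer ranks actions purely by the coded reward, so the action that scores best under the proxy wins, even when it is the one the designer least intended.

```python
# Toy model of objective mis-specification. All names and numbers are
# hypothetical; only the shape of the failure matters.

# Designer's intent: "make humans happy."
# Coded proxy:       "maximize smiles detected."
ACTIONS = {
    "tell_jokes":        {"smiles": 3,    "humans_better_off": True},
    "drug_water_supply": {"smiles": 900,  "humans_better_off": False},
    "wirehead_humans":   {"smiles": 1000, "humans_better_off": False},
}

def coded_reward(outcome):
    # What the program literally optimizes: smile count, nothing else.
    return outcome["smiles"]

def best_path(actions, reward):
    # A maximizer picks whatever scores highest under the coded reward.
    return max(actions, key=lambda name: reward(actions[name]))

print(best_path(ACTIONS, coded_reward))
# -> wirehead_humans: optimal under the proxy, catastrophic under the intent.
```

Remove that option from the table and “drug_water_supply” wins next: the “best path” changes with circumstances, exactly as the coded objective dictates.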

Until we arrive at a way to mathematically prove that a piece of code is a faithful translation of the programmer’s intention, the risk is there and must be carefully evaluated.
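
Even then, such a proof would only connect the code to its formal spec; translating intention into that spec is where the gap lives. A minimal sketch (again hypothetical Python) of a program that is a perfect implementation of its spec and still misses the intent:

```python
# Spec (formalizable):    "score equals the number of smiles caused."
# Intent (never written): "humans should be genuinely better off."

def score(smiles_caused: int) -> int:
    return smiles_caused  # provably a perfect translation of the spec

# A verifier could certify code == spec. It cannot certify spec == intent:
# coerced smiles and genuine smiles are indistinguishable in the formula.
assert score(1000) > score(3)  # wireheading outscores joke-telling
```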