The Myth of a Superhuman AI
Kevin Kelly

One could ask which kinds of tasks matter for solving problems. Simplified, the general task seems to come down to: searching for existing solutions, testing solutions, designing experiments, creating an environment in which experiments can be performed, performing experiments, searching for data, digesting data, classifying data, developing a prediction model (a theory), harvesting materials, harvesting energy (which is perhaps just a form of harvesting materials, but worth listing separately), producing tools (such as machines) from materials, and analyzing data. I hope this list is close to complete.

One can imagine all of these tasks having something to do with intelligence (however vaguely defined), and one can also imagine all of them being performed by machines. A machine, or system of machines, that can perform each of these tasks more efficiently than humans can be considered superhuman at the general task of solving problems, which the tasks above make up. If we take the efficiency with which an entity solves problems in general as a measure of that entity's general intelligence, then an artificial entity that solves problems in general far more efficiently than any human has an artificial general intelligence exceeding human general intelligence. By the same token, one can compare a team of artificial entities against a team of equally many humans (taken to be the best human problem solvers).
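The comparison described above can be sketched as a toy model. Everything here is hypothetical (the task names are taken from the list above; the efficiency numbers are invented for illustration): each entity gets a per-task efficiency score, human efficiency is normalized to 1.0, and an entity counts as superhuman at general problem solving only if it beats the human baseline on every sub-task, which is why the aggregate uses the minimum.

```python
# Toy sketch of the "superhuman on all sub-tasks" criterion.
# All numbers are hypothetical; tasks follow the list in the text.

TASKS = [
    "search_solutions", "test_solutions", "design_experiments",
    "build_experiment_environment", "run_experiments", "search_data",
    "digest_data", "classify_data", "build_prediction_model",
    "harvest_materials", "harvest_energy", "produce_tools", "analyze_data",
]

def general_efficiency(per_task_efficiency):
    """Aggregate per-task efficiencies into one score.

    Using the minimum encodes the requirement that the entity must
    outperform the baseline on *each and every* sub-task.
    """
    return min(per_task_efficiency[t] for t in TASKS)

human = {t: 1.0 for t in TASKS}    # normalize human efficiency to 1.0
machine = {t: 1.5 for t in TASKS}  # hypothetical: 1.5x human on every task
machine["harvest_energy"] = 0.8    # ...except one sub-human weak spot

# One weak sub-task blocks the "superhuman on all tasks" claim.
print(general_efficiency(machine) > general_efficiency(human))  # False
```

Taking the minimum is of course only one possible aggregate; a weighted average would instead let strengths on some tasks compensate for weaknesses on others, which is a different (and weaker) reading of "generally more efficient."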

Defined in such a way (though I grant the definition may be too simple), it is not unthinkable that superhuman artificial intelligence will one day occur.
