Why can’t some people accept that intelligence is not exclusive to humankind?
We’ve spent decades teaching people that computers are automatons, machines that can only execute the commands programmers give them, and that the consequence of A is always B. When it came to accounting or preparing payrolls, we didn’t want the computer to be creative at all: we just wanted it to perform a particular routine task without making any errors, sticking to what the program established. For most people, a computer is still just that, a machine that does exactly what the user asks in a perfectly predictable sequence … and if it doesn’t do that, if it does something else, then there must be a bug, an error. The idea that a computer can make decisions based on experience, weigh different possibilities, or optimize a process using patterns it has discovered on its own is not only intimidating, but downright unacceptable and dangerous.
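The contrast can be made concrete with a contrived sketch (the names and data below are invented for illustration, not taken from any real system): a payroll-style function whose rule was fixed in advance by a programmer, next to a one-parameter "learner" whose rule depends on the examples it was shown.

```python
# Contrived illustration of the two kinds of program described above.

# Fixed rule: the programmer decided the behaviour in advance.
# The same input always produces the same, fully predictable output.
def overtime_pay_fixed(hours, rate):
    """Pay 1.5x the rate for every hour beyond 40."""
    return max(0, hours - 40) * rate * 1.5

# Learned rule: the cutoff is estimated from past, labelled data,
# so the behaviour depends on what the program has "seen".
def learn_threshold(samples):
    """samples: list of (value, is_positive) pairs.
    Pick the cutoff that classifies the examples best --
    a one-parameter 'model' fitted to experience."""
    candidates = sorted(v for v, _ in samples)
    def accuracy(t):
        return sum((v >= t) == label for v, label in samples)
    return max(candidates, key=accuracy)

data = [(2, False), (3, False), (5, True), (7, True), (8, True)]
print(learn_threshold(data))  # -> 5: the rule came from the data, not the programmer
```

Feed the second function different examples and it derives a different rule; that data-dependence, not any mystery, is what separates "learning" software from a payroll routine.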
There are still many people for whom it is unthinkable that a GPS application, powered by the data of thousands of other drivers, can work out a route better than they can; they prefer to go their own way, saying: “what does this gadget know?” But let’s go one step further: the idea that a computer in the cloud can consolidate the traffic of all the users of a particular GPS application and decide to suggest that half of them take one route and the other half another, in order to reach an optimal overall solution, is completely unacceptable to them. Who is this app to tell me where I should drive, or to deprive me of the freedom to do what I want?
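Why a routing service would deliberately send some drivers down the "slower" road is easy to show with a toy model (my own simplification, not any real GPS app's algorithm): each road slows down as more cars use it, so routing everyone onto the road that is fastest when empty is collectively worse than balancing the load.

```python
# Toy traffic-splitting sketch: two roads, each with a travel time
# that grows linearly with the number of cars on it.

def travel_time(free_flow, slope, cars):
    """Minutes per car on a road whose time grows with its load."""
    return free_flow + slope * cars

def best_split(total_cars, road_a, road_b):
    """Brute-force the split of drivers that minimizes the total
    minutes driven across everyone. Returns (total, on_a, on_b)."""
    best = None
    for a in range(total_cars + 1):
        b = total_cars - a
        total = (a * travel_time(*road_a, a)
                 + b * travel_time(*road_b, b))
        if best is None or total < best[0]:
            best = (total, a, b)
    return best

# Road A is faster when empty (10 min) but congests quickly;
# road B starts slower (15 min) but tolerates traffic better.
total, on_a, on_b = best_split(100, (10, 0.2), (15, 0.05))
print(on_a, on_b)  # -> 30 70: most drivers are routed onto the "slower" road
```

No individual driver is being second-guessed here; the optimum simply isn’t visible to anyone who can see only their own car.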
I despair at the inability of many people to accept that a machine can develop a level of intelligence superior to humans’, when all they need to do is read the account of the second game of AlphaGo against Lee Sedol, in which a machine made moves that were not only impressive or “beautiful” from a human point of view, but, more importantly, moves no human could ever have imagined and that had never been played before.
For many people, the idea of intelligence is, in some quasi-religious way, inextricably linked to human nature. Accepting that a machine can manage existing knowledge and experience in an infinitely more efficient and precise way than humans, and is also able to build on that knowledge by iterating and exploring alternatives in an unsupervised way, is anathema. They are simply unable to accept that a machine can do more than mechanically repeat commands programmed by a person, when the reality is that machines are already much more advanced and can do more and more things that could once only be done by human intelligence. This is simply a failure to understand the concepts of machine learning and artificial intelligence, an almost metaphysical refusal to accept that we have been able to unravel the algorithms humans use to learn and to reconstruct them in a machine.
In reality, the problem comes from putting mankind at the center of creation. In practice, and reduced to a biological level, we are simple biochemical algorithms, and our cognitive abilities can be replicated: how we remember, how we learn, how we make inferences, how we deduce, how we solve problems. In many of these processes, in fact, machines already clearly exceed human capabilities. What will happen to the labor market when artificial intelligence achieves better results than humans in most cognitive tasks? What will happen when algorithms are better than us at remembering, analyzing and recognizing patterns? The idea that humans will always have a unique capacity beyond the reach of non-conscious algorithms is a vain illusion, not based on any kind of serious consideration.
To prepare for the future, it is essential to understand it. And basing that preparation on religious dogmas or ignorance of the processes that allow machines to learn and develop intelligence is not the best way to go about it.
(In Spanish, here)