The dangers of artificial intelligence
Elon Musk, undoubtedly one of the most influential and widely followed people on the planet, has called for the proactive regulation of artificial intelligence because, “by the time we are reactive in AI regulation, it’s too late”.
For such a call to come from somebody dealing with technological challenges as complex as electric vehicles, sustainable energy generation and space exploration is unsettling. Elon Musk is not just anybody. What’s more, he joins theoretical physicist Stephen Hawking and Microsoft founder Bill Gates. But the fact that none of them has specific experience in the research or development of this type of technology suggests a classic case of the fallacy of authority: being undoubtedly outstanding in other spheres of science or industry does not mean their concerns cannot be discussed or questioned.
I have long been writing about the enormous possibilities of machine learning, which I consider the most promising part of what has been called artificial intelligence, but which remains a loose set of technologies that some think will lead to machines thinking like people. For the moment, machines are capable of many things: the fact that they are able to learn from a set of data when given clear and immutable rules is prompting hundreds of companies around the world to buy tools allowing them to optimize processes and convert the results into savings or efficiency gains. Machines are able to recognize images, engage in conversations and, of course, beat humans at chess, Jeopardy, Go or poker. However, in all these cases we are still talking about the same thing: programming a machine to carry out a specific task according to a set of fixed rules, within a context in which, in addition, it is possible to accumulate and analyze a large amount of data. Extrapolating from this to a “complete” intelligence, a robot capable of intelligently dealing with reality, is tempting, but makes no sense. Making the leap from algorithms to Skynet, the computer network of the Terminator films, would require an unlimited number of conceptual jumps that are still very far into the future and in all likelihood will never happen.
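The narrowness described above is easy to see in code. Below is a minimal, purely illustrative sketch of task-specific learning: a one-nearest-neighbour classifier that “learns” only by memorising labelled examples and comparing distances. The labels and data points are made up for the example; the point is that the program has no understanding of its task, only the coordinates it was given.

```python
# A toy 1-nearest-neighbour classifier: "learning" here is nothing more
# than storing labelled examples and returning the label of the closest one.
# All names and data are illustrative, not from any real system.

def predict(examples, point):
    """Return the label of the training example closest to `point`.

    `examples` is a list of ((x, y), label) pairs; distance is squared
    Euclidean, which preserves the ordering of distances.
    """
    def sq_dist(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    _, label = min(examples, key=lambda ex: sq_dist(ex[0], point))
    return label

# Two clusters of labelled points: the machine has no concept of what
# "spam" or "ham" mean; it only compares coordinates.
training = [((0, 0), "ham"), ((1, 0), "ham"),
            ((9, 9), "spam"), ((8, 9), "spam")]

print(predict(training, (1, 1)))   # near the first cluster: prints "ham"
print(predict(training, (8, 8)))   # near the second cluster: prints "spam"
```

Change the task, the coordinates, or the rules even slightly and the program knows nothing; that is the gap between today’s systems and a general intelligence.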
Demanding regulations for a technology or set of technologies before they are developed is problematic. Regulation rarely develops properly, and tends to be based on restricting possibilities. It is usually impossible to carry out regulation on a global level — attempts are few and compliance tends to vary, at best, meaning that some countries would press ahead anyway, and gain a competitive advantage. Regulating — or rather restricting — the use of GMOs in Europe, for example, has allowed other countries to gain tangible advantages in terms of productivity and scientific advancement.
Quite simply, it is dangerous to push regulatory systems already proven to be inefficient onto a set of technologies with so much potential, while at the same time spreading alarm about hyper-intelligent robots taking over the world.
Furthermore, from the growing number of people I know working directly in the fields of machine learning, AI or robotics, it is clear that we are not heading toward a Skynet future. Regardless of how many sci-fi movies we binge-watch over the weekend, Skynet is not even close. Let’s hope that these calls for regulation or restriction are ignored by our politicians. In the meantime, let’s keep working.
(In Spanish, here)