A “pause” on the development of AI? It’s not going to happen
More than 1,000 technology experts, researchers and investors have signed an open letter expressly asking for a six-month halt to the creation of “giant AI systems”, citing “profound risks to society.”
But anybody who works in technology will already know that this request cannot be met. It’s a fantasy to imagine that a government or some kind of authority can impose a ban on AI experiments. Technology cannot be stopped: whoever discovered fire didn’t ask anyone’s permission, and once its potential was clear, it was never going to be brought under control. When electricity was discovered, there would have been no point in banning it: there would always have been someone, somewhere, who would have harnessed its power in order to make money.
Machine learning is just the latest in a long line of technologies that have many people worried: starting with the use of an inappropriate term, artificial intelligence, some believe that we are dealing with a technology we cannot control. This position is absurd, as is believing that machine learning has the potential for “general intelligence”. This kind of daydreaming even affects people working directly in the field, as in the recent case of Blake Lemoine and his unhealthy obsession with the supposed self-awareness of the algorithm he was working on.
Let’s be clear: LaMDA, GPT and all the other algorithms out there do not possess self-awareness. Although we tend toward anthropomorphism and love to attribute human qualities to animals and technology, in the same way we’re prone to pareidolia, this is just something that happens in our brains, and is not real.

At the end of November last year, a company that had taken the training of large language models to a new level (the first version of ChatGPT was trained on a supercomputer built with some 10,000 Nvidia graphics cards) decided to open its conversational model to any user in order to increase its input exponentially. Because this is a conversational model that anybody can use, it wasn’t long before some people began attributing human qualities to it; when it “hallucinates” and gets things badly wrong, they interpret this as the machine rebelling or as proof of some hidden consciousness.