Elon Musk, Wozniak, Others Push Labs to ‘Pause’ Training of AI Systems

PCMag · Mar 30, 2023
(Photo illustration by Jakub Porzycki/NurPhoto via Getty Images)

‘Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?’ says an open letter signed by numerous scientists and tech entrepreneurs.

By Michael Kan

Does the race to develop more powerful AI such as ChatGPT pose a risk to human civilization? A group of tech entrepreneurs and scientists, including Elon Musk and Steve Wozniak, think so.

Musk and Wozniak have publicly signed an open letter urging the tech industry to hit the brakes on AI development, citing the disruptive consequences the technology could have on society.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” says the letter, which was also signed by Pinterest co-founder Evan Sharp, former U.S. presidential candidate Andrew Yang, and acclaimed writer Yuval Noah Harari.

The letter calls for AI labs “to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” OpenAI’s latest large language model, which can generate human-like responses and complete tasks, such as article writing and computer coding, in seconds.

GPT-4 already powers OpenAI’s ChatGPT, a program that’s taken the world by storm for its ability to streamline and even automate some forms of white-collar work. It has also raised concerns that existing and future AI programs will shake up society by taking jobs away from humans and making it easy for bad actors to pump out disinformation on the internet.

The open letter says it’s time for society to weigh and rein in the potential consequences before plunging further into the AI race. “Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones?” the letter asks. “Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders.”

The letter goes on to urge AI labs to not only temporarily stop their research but to make the pause “public and verifiable.”

“If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” the document adds.

During the pause, AI labs and independent experts should come together to form safety protocols around AI development. “These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities,” the letter adds.

The open letter is hosted by the Future of Life Institute, which is devoted to ensuring technologies, such as AI, can be peacefully developed without causing societal havoc. The letter, which anyone can sign, has so far attracted 1,123 signatures. Several engineers at Microsoft and Google have signed it.

OpenAI, Microsoft, and Google, which is developing its Bard AI, didn’t immediately respond to a request for comment. But all three companies say they plan on developing AI responsibly.

In the meantime, a few critics are arguing the open letter exaggerates some of the risks posed by AI technologies while failing to address more realistic scenarios. “This open letter—ironically but unsurprisingly—further fuels AI hype and makes it harder to tackle real, already occurring AI harms. I suspect that it will benefit the companies that it is supposed to regulate, and not society,” tweeted Arvind Narayanan, a Princeton computer science professor.

“Of course there will be effects on labor and we should plan for that, but the idea that LLMs (large language models) will soon replace professionals is nonsense,” he added.

Originally published at https://www.pcmag.com.
