Elon Musk, Steve Wozniak, Yuval Noah Harari, Andrew Yang, and More Tech Leaders Join The Future of Life Institute To Call On All AI Labs to Pause Giant AI Experiments To Jointly Develop and Implement A Set of Shared Safety Protocols for Advanced AI Design and Development

Tina Hui
Published in The Gage
Mar 29, 2023
Read/Sign/Share the Open Letter: Pause Giant AI Experiments, Future of Life Institute

Tech leaders Yoshua Bengio, Stuart Russell, Gary Marcus, Elon Musk, Steve Wozniak, Yuval Noah Harari, Andrew Yang, and more than 1,000 other innovators and technologists have joined The Future of Life Institute in signing an open letter calling on all artificial intelligence labs to pause giant AI experiments for six months, so that review, oversight, and best practices can be collaboratively outlined and explored before potentially dangerous, unchecked innovation continues at this critical point in AI advancement, and so that “Humanity can enjoy a flourishing future with AI.”

The Future of Life Institute is calling on all artificial intelligence labs to immediately and temporarily pause training powerful models, defined in its terms as “the training of AI systems more powerful than GPT-4,” in a timely act of collaborative, responsible leadership. The call comes in the weeks following the overnight global enthusiasm for OpenAI’s ChatGPT, which set the record for the fastest-growing user base ever, reaching “100 million monthly active users in January, just two months after launch, making it the fastest growing consumer application in history, according to a UBS study” as of February 2, 2023, with “an average of about 13 million unique visitors” per day in January. That enthusiasm, along with OpenAI’s launches and releases surrounding GPT-4, has kicked off what The New York Times is calling a global AI arms race, and accompanying concerns worldwide.

Unfortunately, as The Carnegie Endowment for International Peace summarizes the situation, “AI is really challenging to define.” Policymakers around the world have for years attempted to create guidance and regulation for AI’s use in settings ranging from school admissions and home loan approvals to military weapon targeting systems, without reaching consensus. The biggest problem in regulating AI is agreeing on a definition, and that will not be an easy feat, since “subtle differences in wording can have major impacts on some of the most important problems facing policymakers,” and even a New York City Council task force’s 2017 attempt to address the city’s growing use of artificial intelligence ran aground on the scope of “automated decision systems.”

As NPR noted today, “a number of governments are already working to regulate high-risk AI tools. The United Kingdom released a paper Wednesday outlining its approach, which it said ‘will avoid heavy-handed legislation which could stifle innovation.’” Lawmakers in the 27-nation European Union have been negotiating passage of sweeping AI rules for quite some time. What is the EU AI Act? “The AI Act is a proposed European law on artificial intelligence (AI) — the first law on AI by a major regulator anywhere. The law assigns applications of AI to three risk categories.

  • First, applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned.
  • Second, high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements.
  • Lastly, applications not explicitly banned or listed as high-risk are largely left unregulated.”

OpenAI CEO Sam Altman also shared very recently that his team is proceeding with as much care as it can. In the interview below with ABC News’ Rebecca Jarvis, Altman says his teams do audits, meet with policymakers, hold frequent discussions around AI ethics and how to proceed, and are being very careful in defining ChatGPT’s constraints. Altman also shared similar concerns about the power of AI and the potential for things to go awry if left without oversight, audits, policy, and constraints, and described a compelling vision in which society helps keep AI safe, with “representatives from major world governments, trusted international institutions coming together and writing a governing document [that defines things like] here’s what the system should do and here’s what the system shouldn’t do, here are very dangerous systems the system shouldn’t even touch even in a mode where it’s creatively exploring, and then developers of language models…can use that as a governing document.”

It is unclear whether AI labs will observe a respectful pause, or how things will pan out with regard to the conflicts of interest between Sam Altman and Elon Musk. But with increasing global concern around AI ethics and safety, perhaps these efforts behind the open letter will at the very least validate that thousands of trustworthy innovators care and want to come together to jointly develop and implement a set of shared safety protocols for advanced AI design and development for the greater good, and will help these efforts progress in everyone’s best interests.

In an interview with ABC News’ Rebecca Jarvis, OpenAI CEO Sam Altman discusses the risks of AI, acknowledges how it will reshape society, and says: “I think people should be happy that we are a little bit scared of this.” March 17, 2023

KEY POINTS FROM THE FUTURE OF LIFE INSTITUTE’S OPEN LETTER:
“AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects. OpenAI’s recent statement regarding artificial general intelligence, states that ‘At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.’ We agree. That point is now.

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Humanity can enjoy a flourishing future with AI…”

The Future of Life Institute’s LinkedIn Post and Invitation For Support Accompanying Their Open Letter

Read the letter, learn about The Future of Life Institute (FLI), add your name, share it with your friends, family, and colleagues if you agree with it, and more:

Future of Life Institute Open Letter: Pause Giant AI Experiments


Tina Hui, The Gage
CEO & Founder of The Gage, ED AAMA SV and Impact Collaborator. Successes: Follow The Coin, Warner Brothers, Snapfish by HP, One Medical Group. Lover of life.