Why the 6-Month AI Pause Is a Bad Idea

Koza Kurumlu
3 min read · Apr 8, 2023


The rapid development of artificial intelligence (AI), particularly systems like OpenAI’s GPT-4, has become a source of concern among experts and industry leaders. GPT-4, which can hold human-like conversations, compose songs, and summarize documents, has prompted an open letter calling for a six-month pause on AI development because of the risks such systems are perceived to pose to humanity.

Elon Musk, Gary Marcus, Steve Wozniak, and more than 1,800 other signatories, including engineers from Amazon, DeepMind, Google, Meta, and Microsoft, have lent their support to the idea.

Despite the concerns raised, halting AI development for six months might not be the best solution. Let’s look more closely at why a pause may be less effective than it seems.

Yann LeCun and Andrew Ng recently discussed this topic on the DeepLearningAI YouTube channel. This post summarizes some of their arguments and adds a few more.

  1. AI can be part of the solution: There’s no denying that AI has its share of challenges. But AI also has the potential to address problems such as content moderation on social media and combating online hate speech. Integrated responsibly and ethically, AI systems can help create a safer and more efficient digital environment. Pausing development may hinder progress in these areas and slow the deployment of beneficial AI-driven solutions.
  2. Misrepresentation of expert opinions: The letter calling for the pause cited 12 pieces of research from experts in the field, including university academics and employees of OpenAI, Google, and DeepMind. Yet four of those experts said their work had been misrepresented to support the letter’s claims. Decisions should rest on accurate information; if the call for a pause misrepresents the research it cites, its argument loses credibility.
  3. Questionable verification process: When the letter was first launched, it lacked proper verification protocols for signatures. As a result, people who never signed it, such as Xi Jinping and Meta’s chief AI scientist Yann LeCun, were listed as signatories. This oversight raises questions about whether the letter genuinely represents a consensus among AI experts and professionals.
  4. Focus on long-term risks over immediate concerns: Critics argue that the Future of Life Institute (FLI), which coordinated the letter and has received funding from the Musk Foundation, is prioritizing apocalyptic scenarios over more pressing problems such as bias in AI systems. Addressing present-day issues like racist or sexist outputs is essential for responsible AI development; a pause would not resolve them and could even divert attention from them.
  5. Lack of clarity on the scope of the pause: The letter doesn’t define what constitutes a system “more powerful than GPT-4.” This vagueness leaves room for confusion and misinterpretation of the pause’s intended goals and scope. Addressing the potential risks of AI development effectively requires a well-defined, targeted approach.
  6. Risks that don’t require superintelligence: Some experts argue that AI need not reach human-level intelligence to exacerbate threats such as climate change and nuclear war. These issues should be addressed directly rather than by halting AI development; pausing research could even delay progress in areas where AI can help mitigate those very risks and create a better future for humanity.

The concerns motivating the letter are real, but a six-month pause might not be the most effective way to address them.

Instead of halting development, we should focus on fostering responsible and ethical AI research, addressing immediate concerns like biases, and promoting transparency and collaboration within the AI community.

That way, we can work together to ensure AI benefits society while mitigating its potential risks.

By engaging in open discussions and sharing knowledge, researchers and developers can better understand the implications of AI systems and find ways to address concerns collectively. Ultimately, a more thoughtful and targeted approach, rather than an outright pause, will lead to the responsible growth of AI technology and its positive impact on our world.

