Building a Safer AI Future

Lessons from AI Governance Day

StartingUpGood
StartingUpGood Magazine
5 min read · May 29, 2024


The AI for Good Global Summit 2024 kicked off on May 29th with its first-ever AI Governance Day. In-person attendees were able to participate in morning conversations and workshops, which were followed by live-streamed sessions.

ITU Secretary-General Doreen Bogdan-Martin welcomed the afternoon audience with the three key elements she believes are necessary for effective AI governance efforts:

  1. Technical standards development for effective guardrails and interoperability
  2. Putting human rights at the core of AI development
  3. Inclusive development through capacity building

Here are other key takeaways from a few of our favorite panels today:

Our team at StartingUpGood and SDGCounting is ready to bring you live and post-event coverage. Follow along for more insights.

2024 AI for Good Global Summit YouTube Livestream

The critical conversation on AI safety and risk

This panel discussion brought together experts to critically examine the paramount need for prioritizing AI safety.

Key Takeaways

The accelerating capabilities of artificial intelligence have brought the need to prioritize safety measures into sharper focus. The overarching consensus among the panelists was that responsible AI development and deployment demands a multi-pronged approach: systematic risk evaluation frameworks, alignment with human values, and lessons drawn from other high-risk industries.

Stuart Russell, UC Berkeley professor and AI safety pioneer, advocated learning from industries like aviation, pharmaceuticals, and nuclear power, which have stringent safety certification processes acting as “prerequisites for the ability to deploy systems in the real world.” Lane Dilg from OpenAI highlighted the company’s efforts to balance innovation with dedicated safety research across present and future AI models.

However, systematic risk identification remains an inexact science, according to Rumman Chowdhury of Humane Intelligence. She stressed following evidence-based risk management frameworks while empowering civil society through “bias bounty programs” that crowdsource the identification of AI risks and negative use cases.

Aligning AI with human values also emerged as a critical imperative. Hakim Hacid from the UAE’s Technology Innovation Institute argued that transparency into a system’s inner workings is essential to enable continuous human control and verification of advanced AI capabilities: “We have to have mechanisms where we are able to continuously control and verify what is happening inside.”

While the path forward remains challenging, the panelists embodied the ethos summed up by Russell — “I’m cautiously optimistic. But it does feel as if we’re in a race that we shouldn’t have to be in between when we figure out how to control AI systems, and when we figure out how to produce AGI.”

To share or not to share: the dilemma of open source vs. proprietary Large Language Models

Panelists explored the complex landscape of open sourcing AI models, weighing the benefits, challenges, and future implications for the field.

Key Takeaways

The panelists, representing key players in the AI ecosystem, agreed that open source has been a fundamental driver of innovation and progress in AI and machine learning. As Jim Zemlin from the Linux Foundation pointed out, “Open source has been a free fundamental building block for all modern technology systems.”

Chris Albon from the Wikimedia Foundation emphasized that the openly accessible, high-quality data on Wikipedia, created by a dedicated community of contributors, has been instrumental in training large language models. “Wikipedia is one of the best things the internet ever created — a huge pool of information created by humans, millions of hours of human time that is freely available to anybody. And the reason that it is so popular as a training data source for large language models is because it is high quality data created by real people, that is Wikipedia.”

However, the decision to open source LLMs is not a binary choice, but rather a spectrum that requires careful consideration of risks and benefits. Melinda Claybaugh from Meta emphasized, “I think we do ourselves a disservice if we think of things as open versus closed, there’s actually a real spectrum. And I think we can get distracted by the debate of what is open or closed. But what’s important is the nuance.”

The panelists advocated for a responsible approach to open sourcing, which may involve releasing model weights but not necessarily training data, along with clear usage guidelines and safeguards. As Melike Yetken Krilla from Google noted, “Open source does not mean no safeguards. And so for us when we’re looking at openness, we’re thinking about is it gradually open, fully open API access, partially open, there’s a lot of different gradients that you can have.”

Looking to the future, the panelists acknowledged that while open source remains a crucial tool for transparency, competitiveness, and safety, it may not be the sole solution for high-capability models deemed too risky for full public release. Isabella Hampton from the Future of Life Institute suggested that “open source is a means to an end and not the end itself. Open source is a tool that we can leverage to accomplish our goals.”

Collaboration between industry, academia, and government will be essential in developing standards and governance frameworks for responsible LLM development and sharing. Initiatives like the National AI Research Resource can provide access to compute resources and data for safety research, fostering a more inclusive and sustainable AI ecosystem.

Pathways forward: day zero wrap-up

Tomas Lamanauskas of the International Telecommunication Union (ITU) concluded AI Governance Day with a summary of the clear themes that had emerged:

  1. Responsible frameworks matter. We need to tie AI very closely to ethics and human rights.
  2. People want interoperability among technology platforms so that smaller providers and players can also participate. It’s not always possible to have a single regulatory approach, but the approaches need to be interoperable.
  3. We need technical standards to keep AI working for the good of humanity, and we should leverage AI to bridge digital divides rather than create new ones.
  4. Global solidarity and the sharing of resources, such as high-performance computing, can help us achieve our AI goals of managing risks while leaving no one behind.

Our StartingUpGood team believes that events and conferences are great places to learn, share ideas, and innovate. We are committed to using our innovative tech tools to share key insights and learnings from top conferences. This article uses Otter.ai to create transcripts and various LLMs to help identify key takeaways. All content for the article was hand-curated and checked for quality.

StartingUpGood supports fresh entrepreneurial approaches to social impact. FOLLOW US on social media:

Check out SDGCounting for the latest news on tracking the progress of the Sustainable Development Goals. #SDGs #GlobalGoals
