Responsible AI: Executive Roundtable Recap

Key takeaways from Slalom’s recent executive roundtable discussion on the top risks and strategies for the responsible use of AI

Anirudh Gupta
Slalom Data & AI
4 min read · Aug 24, 2023


Photo by Mapbox on Unsplash

How do we develop and deploy artificial intelligence (AI), generative AI (GenAI), and large language models (LLMs) responsibly? It’s a big question—one we discussed recently with 15 Northern California industry, academia, and government executives at a roundtable on responsible AI.

From the importance of data quality to working with government organizations to shape policy and regulations, balancing innovation with risk was an overarching theme across the various challenges and opportunities discussed.

Read on for details of the top concerns and risks identified, as well as potential strategies and solutions we can all consider for moving forward responsibly in an AI-enabled world.

Top concerns and risks

1. Lack of transparency and explainability

The inner workings of AI models can be so complex that they are effectively opaque, making it difficult to understand how they reach decisions. This lack of transparency and explainability can hinder accountability and erode trust.

2. Equity risk

AI systems can be trained on data that is biased, which can lead them to make biased decisions. GenAI could be used to perpetuate bias and discrimination, and to amplify misinformation and disinformation. This can significantly affect people’s lives, especially those who are already marginalized.

3. Liability risk

There is a risk that GenAI could be used to create harmful and misleading content, and that the companies developing these models could be held liable for resulting damages.

4. Regulatory risk

Lack of clear regulations governing the development and use of GenAI could lead to companies being caught off guard by new laws or regulations.

5. Security risk

GenAI could be used to create malicious content, such as deepfakes or spam, which could damage companies’ reputations or compromise their security. AI systems can be used to launch cyberattacks, which can have a significant impact on businesses and individuals.

6. Governance and accountability

Engaging third-party GenAI vendors and integrating enterprise data and technology solutions without clear lines of accountability and oversight carries substantial risk. Developers and organizations should take responsibility for the AI systems they create and deploy, and address any issues or harm those systems cause.

7. Unemployment and economic disruption

AI systems can automate tasks currently performed by humans, which could lead to job losses. This is a particular concern for workers in low-wage jobs and could cause economic disruption and social upheaval.

8. Unintended consequences

GenAI is still in its early stages of development, and it is not yet clear what the full range of its potential consequences will be.

Top recommendations and strategies

In addition to discussing the real and potential risks of generative AI, the group also came up with best practices for responsible AI.

1. Adopt a risk management framework

A risk management framework can help you identify, assess, and mitigate the risks associated with AI, such as bias, misinformation, and security breaches.
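One building block of such a framework is a risk register that scores each identified risk so it can be prioritized. The sketch below is a hypothetical illustration (the risk names, scales, and scoring heuristic are assumptions, not part of any specific framework discussed at the roundtable):

```python
from dataclasses import dataclass

# Hypothetical illustration: entries in a simple AI risk register.
@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (minor) to 5 (severe)

    def score(self) -> int:
        # A common triage heuristic: risk score = likelihood x impact
        return self.likelihood * self.impact

register = [
    Risk("Biased model outputs", likelihood=4, impact=5),
    Risk("Prompt-driven data leak", likelihood=3, impact=4),
    Risk("Regulatory non-compliance", likelihood=2, impact=5),
]

# Review the highest-scoring risks first, then assign owners and mitigations.
for risk in sorted(register, key=lambda r: r.score(), reverse=True):
    print(f"{risk.name}: {risk.score()}")
```

In practice, each entry would also carry an owner, a mitigation plan, and a review date; the scoring step is what turns a list of concerns into an actionable priority order.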

2. Educate your stakeholders

It is important to educate your stakeholders so that they understand the benefits and risks of using AI. This education should include how to use GenAI responsibly and how to spot and report potential risks.

3. Develop responsible use guidelines, policies, and procedures

Establish policies for using GenAI and LLMs that address issues such as data governance, privacy, and security.

4. Use explainability techniques

Explainability techniques help users understand how AI models make decisions, which can build trust in these models.

5. Focus on data quality

The quality of the data used to train AI models is essential to the accuracy and reliability of those models. Make sure that your data is clean, accurate, and complete. Prioritize building a simple model using better data over a sophisticated model using incorrect or irrelevant data.
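Basic data-quality checks such as completeness, uniqueness, and validity can be automated before any training run. The sketch below is a hypothetical illustration (the records, field names, and validity rule are assumptions for demonstration):

```python
# Hypothetical records with three common data-quality problems seeded in.
records = [
    {"age": 34, "income": 72000, "label": 1},
    {"age": None, "income": 55000, "label": 0},  # missing value
    {"age": 34, "income": 72000, "label": 1},    # exact duplicate
    {"age": 29, "income": -100, "label": 0},     # implausible value
]

# Completeness: fraction of records missing each field.
fields = records[0].keys()
missing = {f: sum(r[f] is None for r in records) / len(records) for f in fields}

# Uniqueness: count exact duplicate records.
seen, duplicates = set(), 0
for r in records:
    key = tuple(sorted(r.items(), key=lambda kv: kv[0]))
    duplicates += key in seen
    seen.add(key)

# Validity: flag values outside a plausible range (income must be >= 0).
invalid_income = [r for r in records if r["income"] is not None and r["income"] < 0]

print(missing["age"], duplicates, len(invalid_income))  # 0.25 1 1
```

Even checks this simple catch the kinds of defects that quietly degrade model accuracy, and they are far cheaper than diagnosing a misbehaving model after deployment.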

6. Develop technical solutions

Explore and develop technical solutions that can be used to mitigate the risks of GenAI and LLMs, such as digital watermarking and verification techniques.
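As one concrete example of a verification technique, generated content can be signed so downstream consumers can check its provenance and detect tampering. The sketch below uses an HMAC from the Python standard library; the key and message are hypothetical placeholders, and this complements (rather than replaces) watermarking approaches:

```python
import hashlib
import hmac

# Hypothetical key; in practice this would come from a managed secret store.
SECRET_KEY = b"replace-with-a-managed-secret"

def sign(content: bytes) -> str:
    # Produce a keyed tag binding the content to the signing key.
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    # compare_digest avoids timing side channels during comparison.
    return hmac.compare_digest(sign(content), tag)

content = b"Generated summary of quarterly results."
tag = sign(content)

print(verify(content, tag))               # untampered content verifies: True
print(verify(b"Tampered summary.", tag))  # any modification is detected: False
```

Unlike an invisible watermark, an HMAC tag must travel alongside the content, but it gives a cryptographically strong answer to "did this come from our system, unmodified?"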

7. Invest in research and development in AI

The field of AI is constantly evolving, so it is important to invest in research and development to stay ahead of the curve.

8. Evolve organization culture and skills

Building and deploying AI systems requires the right incentives, specialized skills, and expertise. The cost of inaction is much higher when leaders lack the incentives to find new opportunities and fix existing problems. Additionally, education is imperative to upskill or reskill existing employees so they can effectively leverage AI technologies within the organization.

9. Engage with policymakers

Engagement with policymakers should focus on shaping the future of GenAI and LLMs, ensuring they are used in a responsible and ethical manner.

Conclusion

The responsible AI executive roundtable proved highly productive and insightful, fostering open dialogue among participants. The discussion helped executives explore the complex challenges and opportunities associated with the responsible use of AI in their organizations.

While these themes are not exhaustive, they demonstrate the range of considerations for AI impacts. Responsible AI is a dynamic field. As technology and AI applications evolve, these themes will likely expand as we adapt to new challenges and considerations.

Slalom is a global consulting firm that helps people and organizations dream bigger, move faster, and build better tomorrows for all. Learn more about Slalom’s human-centered AI approach and reach out today.


Anirudh Gupta
Slalom Data & AI

Slalom's Northern California Managing Director for Data and AI. Responsibly driving impact on organizations and the people they serve using technology and data.