Empowering the Enterprise with Generative AI

Building a robust governance framework for ethical and responsible AI adoption

Bryan J Kissel
Slalom Data & AI
7 min read · Aug 18, 2023


By Bryan Kissel and Matt Pollard

Welcome to the Thunderdome

In the fast-evolving landscape of AI technology, generative AI (GenAI) holds immense promise in transforming organizational operations. GenAI is creating new potential for improving insights, driving efficiency, and delivering unparalleled value to data-science-driven operations across industries. Beyond mere optimization, its innovative capabilities hint at a future brimming with untapped opportunities that beckon serious exploration.

The power of GenAI raises some obvious questions, and with emerging legislation poised to tighten AI operating models across the globe, aligning AI initiatives with ethical principles, regulatory requirements, and data privacy safeguards is critical to sustainable investment in this disruptive technology.

Dial-up democracy: Governance frameworking

At the heart of an effective governance model lies a foundational emphasis on data privacy and protection. The Slalom Data Responsibility and Privacy (DRP) practice has designed an accelerated approach to building governance models that prioritizes the responsible handling of data. By addressing the core principles of Privacy by Design and Security by Design, we can build a foundation that fosters trust between us, our clients, and our clients' customers.

Through this governance model, organizations not only ready their AI investments to meet a highly volatile regulatory landscape but also embrace the ethical considerations that must guide the AI development process. By placing accountability, transparency, and fairness at the forefront of AI initiatives, organizations can distinguish themselves as leaders in responsible AI adoption and innovation.

Pillars, not pillagers: Setting the governance stage

Adherence to existing data privacy regulations and information security frameworks is the first non-negotiable requirement of an AI governance framework. All data processing activities — including AI processing — must align with relevant requirements to maintain compliance with existing standards. Privacy, Security, and Ethical by Design methodologies are required in some localities and are best practice everywhere else. These methodologies dictate that thoroughly documented privacy impact assessment (PIA) and regulatory impact assessment (RIA) processes be executed at each stage of the solution lifecycle.
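
To make that documentation discipline concrete, one lightweight option is to capture each assessment as a structured, versionable record. The sketch below is a minimal, hypothetical Python schema; the field names are illustrative and are not drawn from any particular regulation or template.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical assessment record; field names are illustrative, not a standard schema.
@dataclass
class ImpactAssessment:
    system_name: str
    lifecycle_stage: str        # e.g., "design", "training", "deployment"
    assessment_type: str        # "PIA" or "RIA"
    data_categories: list[str]  # personal data elements in scope
    legal_bases: list[str]      # e.g., "consent", "legitimate interest"
    identified_risks: list[str]
    mitigations: list[str]
    reviewed_by: str
    review_date: date
    approved: bool = False

# One record per lifecycle stage keeps the audit trail reviewable.
pia = ImpactAssessment(
    system_name="genai-summarizer",  # placeholder system name
    lifecycle_stage="design",
    assessment_type="PIA",
    data_categories=["customer support transcripts"],
    legal_bases=["legitimate interest"],
    identified_risks=["re-identification from free text"],
    mitigations=["PII redaction before ingestion"],
    reviewed_by="privacy.office@example.com",
    review_date=date(2023, 8, 1),
)
```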

Concepts of purpose limitation, data minimization, data obfuscation, risk mitigation, and data protection principles have been modernized in recent years, reducing the appeal and viability of a minimum viable compliance product. As the threat landscape continues to evolve, it is critical to establish a holistic and flexible governance framework that can meet the immediate requirements of existing legislation while satisfying the needs of the business.

Parliament of protocols: The regulatory representatives

An AI governance framework should incorporate all relevant regulations and standards that apply to generative AI. This includes compliance with GDPR, CCPA, HIPAA, and other industry-specific regulations. A robust compliance framework should ensure AI initiatives remain within legal boundaries while fostering a culture of ethical and responsible AI practices.

Risk management is an ongoing process and must be part of a system of continuous monitoring and evaluation to track the effectiveness of risk mitigation strategies. Regular risk assessments can allow the organization to stay ahead of potential threats and remain compliant with evolving regulations.

Yet, while robust security measures are essential, they alone aren't enough. Even the most prepared entities can experience breaches. Response plans and clear communication protocols are crucial. In many jurisdictions, they're not just best practice but mandatory: most modern privacy legislation stipulates requirements for breach response plans.

Having a well-defined incident response plan is a common requirement across regions, and it is essential to outline the actions to be taken, responsibilities, communication protocols, and measures in place to mitigate perceived and realized impacts. A prompt and well-executed incident response plan will minimize damages and potential liabilities.
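
Some teams encode the skeleton of such a plan as data so it can be versioned, reviewed, and tested. The sketch below is a hypothetical example; the severity levels, roles, and actions are illustrative assumptions, though the 72-hour supervisory-authority notification window is a real GDPR obligation.

```python
# A minimal, hypothetical incident response runbook encoded as data.
# Contents are illustrative; actual obligations vary by jurisdiction.
INCIDENT_RESPONSE_PLAN = {
    "severity_levels": ["low", "medium", "high", "critical"],
    "roles": {
        "incident_commander": "Directs the response and owns final decisions",
        "privacy_officer": "Assesses data-subject impact and notification duties",
        "communications_lead": "Handles regulator, customer, and press contact",
    },
    "actions": [
        "Contain the affected AI system or data pipeline",
        "Preserve logs and model inputs/outputs for forensics",
        "Assess scope: which data subjects and data elements are affected",
        "Notify regulators within the legally mandated window",
        "Notify affected data subjects where required",
        "Run a post-incident review and update mitigations",
    ],
    # GDPR Art. 33 requires supervisory-authority notification within 72 hours.
    "notification_deadline_hours": {"GDPR_supervisory_authority": 72},
}
```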

Civic construction: Building the regulatory republic

Regulatory compliance and risk mitigation are critical aspects of a functional GenAI governance framework. The National Institute of Standards and Technology’s (NIST) Artificial Intelligence Risk Management Framework (AIRMF) currently serves as a baseline for managing risk and establishing a trustworthy AI process.

In addition to the AIRMF, globally recognized information security and data protection standards, like the NIST SP 800 series and the International Organization for Standardization's (ISO) 27000 family of standards, allow entities to standardize the identification, classification, and management of risk in familiar ways.

Compliance with regulatory requirements and effective risk mitigation plans are the cornerstones of a compliant GenAI implementation. By aligning the AI governance framework with the AIRMF, conducting data protection impact assessments (DPIAs), and prioritizing risk identification and mitigation, organizations can improve data monetization opportunities while helping reduce risk exposure. Continuously monitoring risks, staying up-to-date with regulations, and implementing a robust incident response plan will help the organization navigate the evolving AI landscape confidently while safeguarding the enterprise.
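
One way to operationalize that alignment is a risk register whose entries map to the AIRMF's four functions (Govern, Map, Measure, Manage). The Python sketch below is illustrative: the field names, the likelihood-times-impact scoring convention, and the sample entries are assumptions, not part of the framework itself.

```python
from dataclasses import dataclass

# Minimal risk-register entry loosely aligned to the NIST AI RMF functions.
@dataclass
class AIRiskEntry:
    risk_id: str
    description: str
    airmf_function: str  # "Govern" | "Map" | "Measure" | "Manage"
    likelihood: int      # 1 (rare) .. 5 (almost certain)
    impact: int          # 1 (negligible) .. 5 (severe)
    mitigation: str
    owner: str

    @property
    def score(self) -> int:
        # Coarse but common convention: likelihood x impact.
        return self.likelihood * self.impact

register = [
    AIRiskEntry("R-001", "Training data contains unconsented PII", "Map",
                likelihood=3, impact=5,
                mitigation="DPIA plus PII-scrubbing pipeline", owner="data-gov"),
    AIRiskEntry("R-002", "Model outputs drift after retraining", "Measure",
                likelihood=4, impact=3,
                mitigation="Scheduled evaluation against a holdout set", owner="ml-ops"),
]

# Review the register highest-scoring risks first.
for entry in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{entry.risk_id} [{entry.airmf_function}] score={entry.score}: {entry.description}")
```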

Declaration of digital discernment: Explainable AI

Transparency and explainability are also key pillars of stakeholder trust and regulatory compliance. By providing clear and concise explanations of AI systems’ processing, outcomes, and data lifecycle(s), practitioners can instill confidence in AI systems while simultaneously hardening compliance posture. Contextualizing the mechanisms influencing AI outputs allows system stewards to effectively manage bias, discrimination, and profiling activities. Choosing and developing AI models that are inherently explainable supports compliant operations and enables informed consent management.

Transparency and explainability can also benefit from improved human-AI interaction. Involving human experts in the AI decision-making process grants valuable insight into how AI models interpret data and make decisions. Human experts can validate AI outputs, challenge assumptions, and provide context.

Understanding the features and representations that influence AI model decisions is essential for explainability. Solution owners should focus on using interpretable features and representations in AI models, making it easier to understand the factors driving AI-generated results and building confidence in the reliability of AI systems. Providing users with explanatory interfaces that allow them to interact with AI models and access detailed explanations for AI-generated outputs further improves this confidence.
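
As one concrete example of an explanation signal, permutation importance measures how much shuffling a feature degrades model performance, and it is model-agnostic. The sketch below uses scikit-learn with a built-in dataset purely as placeholders; the article does not prescribe a specific tool.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Placeholder data and model; any fitted estimator works the same way.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure the drop in score: a bigger drop means more influence.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: t[1], reverse=True)
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.4f}")
```

Surfacing a ranking like this through an explanatory interface is one way to give users the detailed explanations described above.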

Transparency includes addressing bias and fairness concerns in AI models, whether developed in-house or sourced externally. Organizations are responsible for implementing measures to identify and mitigate biases in data and AI algorithms, for explaining the steps taken to ensure fairness in AI decision-making, and for maintaining open communication channels with data subjects, data customers, and stakeholders.

The Federalist files: Crafting the documentation doctrine

Maintaining comprehensive documentation of AI model development, training data, and validation processes is essential. Regular auditing of AI systems verifies that they continue to adhere to transparency and explainability standards. Documentation and auditing support compliance efforts and demonstrate a commitment to responsible AI practices. Finally, rigorous external validation and review by independent experts or third-party partners can improve the credibility of, and confidence in, existing AI solutions.
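
One widely adapted pattern for this documentation is the model card (Mitchell et al., 2019). The sketch below is a minimal, hypothetical record; every key and value is a placeholder rather than a real model's results.

```python
import json

# Hypothetical model-card-style record; all values are placeholders.
model_card = {
    "model_name": "genai-support-assistant",
    "version": "1.2.0",
    "intended_use": "Draft responses for human customer-support agents",
    "out_of_scope_uses": ["Automated decisions without human review"],
    "training_data": {
        "sources": ["internal support tickets (2019-2022)"],
        "preprocessing": ["PII redaction", "deduplication"],
    },
    "validation": {
        "datasets": ["held-out tickets, Q1 2023"],
        "metrics": {"rouge_l": 0.41, "human_preference_rate": 0.73},  # placeholders
    },
    "known_limitations": ["May hallucinate policy details"],
    "last_audit": {"date": "2023-07-15", "auditor": "independent third party"},
}
print(json.dumps(model_card, indent=2))
```

Publishing a sanitized version of a record like this is one straightforward way to meet the disclosure expectations described next.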

Publication of these efforts, related findings, and mitigation strategies for public consumption is a requirement in many regions and a best practice everywhere else. Seriously, this kind of conscious and intentional transparency is the new normal and will eventually be expected by data customers, stakeholders, and regulators.

Equal bytes for all: The digital declaration

Promoting equity-enhanced processing is a requirement of the AIRMF and fosters a culture of fairness and inclusivity by embedding harmful-bias management processes into AI systems. Proactively managing systemic, computational, statistical, and cognitive biases enhances equity in GenAI outputs, ensuring the inclusivity of diverse datasets. Equity is a foundational principle of responsible AI adoption; a commitment to promoting fairness and mitigating bias will not only uphold ethical standards but also strengthen relationships with customers and stakeholders. Identifying and mitigating bias in AI models is the first step toward promoting equity-enhanced processing.

Promoting equity is an ongoing effort, and continuous bias monitoring is vital to ensuring that AI models remain fair over time. Implementing automated tools and techniques to monitor AI-generated outcomes for potential biases is one option (a minimal sketch of such a check appears below); building inclusive AI development teams that promote equity-enhanced processing is another. We should strive for diverse perspectives and experiences within our AI teams — full stop.

Inclusive teams can produce more thoughtful and equitable AI solutions, conduct focused evaluations of the potential impact of AI-generated decisions on vulnerable populations, and deepen our understanding of how AI outcomes affect marginalized communities. Transparency around efforts to promote equity is a social and regulatory duty; publishing the approach, progress, and challenges related to equity-enhanced processing demonstrates a commitment to responsible AI adoption and conforms to the AIRMF.
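
Returning to the automated monitoring mentioned above, one minimal form such a check can take is a demographic parity gap: the spread in positive-outcome rates across groups. The snippet below is a sketch; the column names, sample data, and alert threshold are illustrative assumptions, and the appropriate fairness metric and bound are always context-specific.

```python
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame,
                           group_col: str = "group",
                           outcome_col: str = "approved") -> float:
    """Difference between the highest and lowest positive-outcome rates."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Placeholder outcomes from a hypothetical AI-assisted decision process.
outcomes = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})

gap = demographic_parity_gap(outcomes)
ALERT_THRESHOLD = 0.2  # illustrative; the right bound depends on context
if gap > ALERT_THRESHOLD:
    print(f"Bias alert: demographic parity gap = {gap:.2f}")
```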

The accountability amendment: Model mandates

Accountability for model development, data sourcing, processing, and performance presents unique considerations for organizations developing their own AI models; these should not be discounted. The governance organization should implement rigorous testing, validation, and quality assurance procedures to ensure that AI models are accurate, reliable, and free from potential biases.

Comprehensive documentation of data sources and preprocessing steps is vital for accountability. Establishing and documenting model performance metrics is essential for evaluating AI model effectiveness. These commonsense considerations are critical components of trustworthy AI system development.
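
One common way to enforce those metrics in practice is an automated quality gate that blocks promotion of any model whose documented validation numbers fall below agreed thresholds. The sketch below is illustrative; the metric names and thresholds are assumptions.

```python
# Hypothetical promotion gate; metric names and minimums are illustrative.
REQUIRED_THRESHOLDS = {"accuracy": 0.90, "auc": 0.85}

def passes_quality_gate(measured: dict[str, float]) -> bool:
    """Return True only if every required metric meets its minimum."""
    ok = True
    for name, minimum in REQUIRED_THRESHOLDS.items():
        value = measured.get(name, 0.0)
        if value < minimum:
            print(f"FAIL {name}: {value:.3f} < required {minimum:.3f}")
            ok = False
    return ok

# Metrics recorded during validation (placeholder values).
assert passes_quality_gate({"accuracy": 0.93, "auc": 0.88})
```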

The governor’s goodbye: Concluding the code convention

One of the most important and misunderstood considerations for an AI framework is inherited from the GDPR and CCPA — the concept of data subject consent and legitimate processing. Respecting the rights and preferences of data subjects is not only an ethical imperative in 2023 but also a requirement under several data privacy regulations. Informed consent generally requires that a data subject be informed about the lifecycle, use, processing, and protection of critical data elements for the entire period of custodial control. This reinforces the importance of transparent and explainable modeling as a critical component of informed consent.
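
To illustrate what granular, purpose-bound consent might look like in code, the sketch below models a hypothetical consent record with expiry and revocation checks. The field names are illustrative, not a schema mandated by the GDPR or CCPA.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Hypothetical consent record supporting granular, purpose-bound preferences.
@dataclass
class ConsentRecord:
    subject_id: str
    purpose: str                            # e.g., "model_training"
    granted: bool
    granted_at: datetime
    expires_at: Optional[datetime] = None
    revoked_at: Optional[datetime] = None

    def is_valid(self, now: Optional[datetime] = None) -> bool:
        """Consent is valid only if granted, unrevoked, and unexpired."""
        now = now or datetime.now(timezone.utc)
        if not self.granted or self.revoked_at is not None:
            return False
        return self.expires_at is None or now < self.expires_at

consent = ConsentRecord(
    subject_id="user-4821",                 # placeholder identifier
    purpose="model_training",
    granted=True,
    granted_at=datetime.now(timezone.utc),
)
assert consent.is_valid()  # processing for this purpose is currently permitted
```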

Creating a functional AI governance model is not a small or simple task. Ethical, compliant, and responsible AI practices can help the enterprise safeguard data privacy, mitigate risk, and promote innovation while maintaining a flexible and defensible compliance posture. Emphasizing data privacy and protection, regulatory compliance, and risk mitigation is already part of the regular modus operandi for most enterprises. Enhancing these practices through intentionally transparent and explainable AI processing, grounded in informed consent and granular preference management, simply offers new ways to address considerations of consent, fairness, and legitimacy.

Future-proofing AI investments and improving the sustainability of AI programs require establishing a comprehensive governance framework, partnering with external experts, and driving with an Ethical by Design methodology to demonstrate a dedication to responsible AI practices and data-driven innovation.

Slalom is a global consulting firm that helps people and organizations dream bigger, move faster, and build better tomorrows for all. Learn more about Slalom’s human-centered AI approach and reach out today.
