Responsible AI — Risks, Regulation and Reality

Manav Gupta
8 min read · May 2, 2024

--

Many clients have recently reached out to me asking about risks related to AI: the risks an organization takes on if it does NOT invest in responsible AI, how to quantify those risks (if possible), examples of AI-related mistakes (and any losses incurred), the cost of investing in responsible AI solutions, approaches to calculating ROI, and so on.

Risks Related to AI

Figure 1: Risks related to AI

As the picture above shows, many of the risks related to AI are the same as in “traditional” data science — poor prediction accuracy, uncertainty within stochastic models, lack of explainability, and vulnerability to a range of attacks (including poisoning, extraction and evasion attacks).

Generative AI introduces new risks, which are now well documented: hallucination is the best known, alongside lack of factuality or faithfulness, and emerging threats such as prompt injection attacks used to extract data and jailbreaking of LLMs.

Finally, there is emerging regulation and the demands it puts on enterprises to ensure appropriate oversight before any AI solutions are moved into production.

Challenges scaling AI

  • Lack of confidence in operationalizing AI — A wide variety of tools exists for AI governance, but too often models are built without proper clarity, monitoring or cataloging. Without end-to-end, automated tracking of the AI lifecycle, scalability and transparency suffer, and explainable results remain elusive. Most AI models behave as “black boxes” — they may be built by a third party or vendor, and it isn’t always easy to trace how and why decisions were made, even for the data scientists who created them. These challenges lead to inefficiencies: scope drift, models that are delayed or never placed into production, inconsistent levels of quality, and unperceived risks.
  • Difficulty in managing risk and reputation — Explainable processes and results help auditors and customers understand how specific analytic results were reached, and help ensure those results don’t reflect bias around race, gender, age or other key factors. Such processes are critical for patient diagnoses and treatment plans, transactions flagged as suspicious, and loan applications that are denied.
  • Changing (or emerging) AI regulation — AI regulation is emerging at a rapid pace. In some cases regulators are introducing new legislation without adequate clarity on key terms, yet the onus is on enterprises to interpret the rules and implement adequate controls.
Figure 2 — Emerging AI regulation

What is Trustworthy AI?

Trustworthy AI is a way to develop, assess, and deploy AI systems in a safe, trustworthy, and ethical manner. It considers the societal impact of AI technologies, including potential harms and benefits. Implementing AI responsibly can help guide decisions toward more beneficial and equitable outcomes. NIST offers a more complete definition: characteristics of trustworthy AI systems include “valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed.”

Some core dimensions of responsible AI include:

  • Fairness: How a system impacts different subpopulations of users (e.g. by gender, ethnicity); a minimal metric sketch follows Figure 3 below
  • Explainability: Mechanisms to understand and evaluate the outputs of an AI system
  • Privacy and security: Data is protected from theft and exposure
  • Transparency: Establishing a unified vision for AI throughout its lifecycle.
Figure 3 — Characteristics of Trustworthy AI (NIST)
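
To make the fairness dimension listed above concrete, here is a minimal sketch of a group-fairness check. It is illustrative only: the data, the subgroup labels and the 0.8 (“four-fifths rule”) threshold are assumptions, not taken from NIST or any regulation. It computes per-group selection rates for a binary classifier and flags a large gap between groups.

```
# Minimal, illustrative group-fairness check (hypothetical data and threshold).
from collections import defaultdict

def selection_rates(predictions, groups):
    """Return the positive-prediction rate per subgroup."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def fairness_report(predictions, groups, disparate_impact_threshold=0.8):
    rates = selection_rates(predictions, groups)
    lo, hi = min(rates.values()), max(rates.values())
    ratio = lo / hi if hi else 0.0
    return {
        "selection_rates": rates,
        "demographic_parity_difference": hi - lo,  # 0.0 means perfectly balanced
        "disparate_impact_ratio": ratio,           # "four-fifths rule" heuristic
        "flagged": ratio < disparate_impact_threshold,
    }

# Hypothetical loan-approval predictions for two subgroups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(fairness_report(preds, groups))
```

In this toy example group A is selected 60% of the time and group B 40%, giving a disparate impact ratio of 0.67, which the sketch would flag for review.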

Risks of not investing in responsible AI

These are essentially a rephrasing of the AI risks discussed above.

  1. Safety risks: AI systems that are not designed with appropriate safeguards and robustness could potentially cause unintended harm. This could include things like autonomous vehicles making dangerous decisions, or AI systems being used for malicious purposes like cyberattacks.
  2. Privacy and security risks: AI models require large datasets for training, and since the early days of data science there have been concerns about privacy, bias, copyright and other issues surrounding the source data.
  3. Ethical risks: AI systems can perpetuate or amplify human biases around race, gender, age etc. if they are trained on biased data. There are also ethical risks around things like AI being used for surveillance, social scoring, or automating high-stakes decisions in areas like healthcare and criminal justice.
  4. Societal risks: The automation of jobs by AI exacerbates inequality and economic disruption if the transition is not managed properly. The concentration of AI capabilities in a few companies or nations could also exacerbate geopolitical tensions.
  5. Lack of accountability and control: As AI systems become more autonomous and complex, there are risks around lack of understanding of how they work, inability to exercise meaningful human control, and difficulty in attributing accountability.

Quantifying AI risks

NIST has developed the Risk Management Framework (RMF) Core to help organizations identify, assess, and mitigate risks associated with AI systems.

The RMF Core provides a structured approach to risk management that can be adapted to various AI systems and deployments. It emphasizes the importance of governance, which involves establishing policies, procedures, and oversight for managing AI risks. The framework also highlights the need for continuous monitoring and improvement of risk management practices.

The RMF Core consists of four core functions: Govern, Map, Measure, and Manage.

  1. Govern: The Govern function establishes the foundation for effective risk management. It involves setting the direction and providing oversight for AI risk management activities. This includes developing a risk management strategy, assigning roles and responsibilities, and establishing communication channels.
  2. Map: The Map function helps organizations identify the AI systems they use, understand how they work, and determine the potential risks they pose. This involves creating an inventory of AI systems, documenting their functionalities, and identifying potential threats and vulnerabilities.
  3. Measure: The Measure function involves assessing the likelihood and impact of potential AI risks. This may involve using qualitative or quantitative methods to evaluate risks and determine their severity. The goal of this function is to prioritize risks based on their potential impact on the organization.
  4. Manage: The Manage function focuses on developing and implementing controls to mitigate AI risks. This may involve implementing technical controls, such as security measures, as well as non-technical controls, such as policies and procedures. The Manage function also includes monitoring and evaluating the effectiveness of risk controls (a minimal scoring sketch follows Figure 4 below).
Figure 4- NIST RMF Core Functions and Tasks
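
To make the Measure and Manage functions concrete, here is a minimal sketch of a risk register. The 1-to-5 scales, the severity threshold and the example risks are illustrative assumptions, not part of the NIST AI RMF itself: each mapped risk receives a qualitative likelihood and impact score, risks are ranked by their product, and those above a threshold get mitigations attached.

```
# Illustrative AI risk register: score, prioritize and route risks.
# The 1-5 scales, threshold and example risks are assumptions for this sketch.
from dataclasses import dataclass, field

@dataclass
class Risk:
    name: str
    likelihood: int                                # 1 (rare) .. 5 (almost certain)
    impact: int                                    # 1 (negligible) .. 5 (severe)
    controls: list = field(default_factory=list)

    @property
    def severity(self) -> int:
        return self.likelihood * self.impact

def prioritize(register):
    """Measure: rank risks by severity, highest first."""
    return sorted(register, key=lambda r: r.severity, reverse=True)

def manage(register, threshold=12):
    """Manage: attach mitigations to risks at or above the severity threshold."""
    for risk in register:
        if risk.severity >= threshold:
            risk.controls.append("assign owner, add monitoring, schedule review")

# Map: a hypothetical inventory of risks for one AI system.
register = [
    Risk("Training-data bias against a subgroup", likelihood=4, impact=4),
    Risk("Prompt injection exfiltrates customer data", likelihood=3, impact=5),
    Risk("Model drift degrades accuracy", likelihood=4, impact=2),
]

manage(register)
for risk in prioritize(register):
    print(risk.severity, risk.name, risk.controls)
```

The point of the sketch is the workflow, not the numbers: even a simple likelihood-times-impact score forces the Map output into an ordered queue that the Manage function can act on.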

Investment costs of Trustworthy AI

Addressing bias in AI outputs (predictive and generative) is a necessary step that requires substantial resources. This includes curating inclusive datasets, continually monitoring outputs and employing teams to oversee the AI’s operation — all of which add to the overall cost of implementation. Compliance is another cost lurking in the shadows of AI adoption. As new AI legislation takes hold, AI applications must be transparent, interpretable and accountable, with regular reviews and updates to maintain compliance.

The potential for liability arising from AI mistakes or misuse is an often overlooked but significant hidden cost. As AI systems increasingly influence decision making, the risk of financial loss, reputational damage and even legal repercussions can escalate. Ensuring that sufficient safeguards are in place can be a costly but vital part of deploying AI.

There are several estimates online for the cost of the EU AI Act — most famously, a report from the Center for Data Innovation estimated that the EU’s AIA will cost the EU economy €31 billion over the next five years. However, there are plenty of rebuttals to that cost estimate as well.

In reality, the cost of responsible AI that is safe, trusted and transparent will depend on the use case and its associated risk level. For example, the CDI report itself notes: “Based on the EU’s own impact assessment, a small business […] can expect total compliance costs of up to €400,000 for one high-risk AI product requiring a quality management system.”

The key is how “high-risk AI product” is defined. Not all systems will fall under this classification, so actual costs will often be lower.

Conversely, many leaders acknowledge that if they don’t develop, design or use AI responsibly, the cost to their company will be at least $1 million — or could jeopardize the business itself.

Figure 5 — EU AI Act “Pyramid of Risk”

Two other interesting aspects of the EU AI Act are the definition of “high-risk” and the mechanism for self-conformity assessment.

The EU AI Act classifies AI systems as high-risk if they are used in biometrics; critical infrastructure (such as transport, where failures could put the health and life of citizens at risk); education and vocational training (such as exam scoring, which may determine someone’s access to education and professional course of life); employment (such as CV-sorting software for recruitment, managing workers, and access to self-employment); essential private and public services (credit scoring, for example); law enforcement (such as assessing an individual’s risk of becoming a crime victim); and migration and border management, among others.

Looking at the list above, some enterprise systems may fall under “high-risk” (for example, employment- or training-related applications). In those cases, enterprises are required to undergo a “self-assessment”, which can be quite onerous. Several tools are available to perform such a self-assessment, including capAI from the University of Oxford.

Measuring ROI of Trustworthy AI

Measuring the ROI of AI programs typically means measuring the direct relationship between an investment in AI and its economic return from relevant stakeholders. This return may come in the form of revenue generation, cost savings, a reduced cost of capital, or a combination of the three.

  • Investments: Investments in Trustworthy AI involve employee education and training, building compliant software tools, defining risk assessment and governance frameworks, or creating a center of excellence (COE), all of which establish and enforce ethical AI practices. Investments in AI ethics are clearly important, but can be quite costly.
  • Returns: Returns from trustworthy AI fall into tangible and intangible categories. Tangible returns come from the additional revenue that can be generated from each stakeholder through ethical, debiased models, and from cost avoidance (of regulatory fines, for example). Intangible returns are reputational: the boost to an organization’s socially responsible reputation that its stakeholders attribute to its investment in AI ethics. A back-of-the-envelope ROI sketch follows below.
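
As a back-of-the-envelope illustration of the tangible side of this calculation (all figures below are hypothetical, not drawn from any of the reports cited above), the arithmetic reduces to the familiar ROI formula: returns net of investment, divided by investment.

```
# Hypothetical, illustrative ROI calculation for a trustworthy-AI program.
investment = {
    "training_and_education": 150_000,
    "governance_tooling": 250_000,
    "risk_and_compliance_team": 400_000,
}
tangible_returns = {
    "incremental_revenue_from_trusted_models": 500_000,
    "avoided_regulatory_fines": 300_000,
    "avoided_incident_and_rework_costs": 200_000,
}

total_investment = sum(investment.values())        # 800,000
total_returns = sum(tangible_returns.values())     # 1,000,000
roi = (total_returns - total_investment) / total_investment

print(f"Investment: {total_investment:,}")
print(f"Tangible returns: {total_returns:,}")
print(f"ROI: {roi:.0%}")  # 25% here; intangible (reputational) returns not captured
```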

How to get started?

  1. Define a strategy & vision
  2. Establish a governance approach for ethical AI implementation
  3. Integrate ethics into the AI lifecycle (a minimal illustration of lifecycle checkpoints follows)
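
As a minimal illustration of step 3 (the stage names and required checks below are assumptions for the sketch, not a prescribed standard), integrating ethics into the AI lifecycle can be as simple as defining gates: a model only advances to the next stage once the required checks for its current stage are complete.

```
# Illustrative lifecycle gates: a model advances only when its stage checks pass.
LIFECYCLE_CHECKS = {
    "data_collection": ["provenance documented", "consent/licensing verified"],
    "training":        ["bias metrics recorded", "model card drafted"],
    "validation":      ["fairness thresholds met", "explainability report attached"],
    "deployment":      ["human oversight defined", "monitoring and rollback in place"],
}

def gate(stage, completed_checks):
    """Return True only if every required check for the stage is complete."""
    missing = [c for c in LIFECYCLE_CHECKS[stage] if c not in completed_checks]
    if missing:
        print(f"{stage}: blocked, missing {missing}")
        return False
    print(f"{stage}: approved")
    return True

gate("training", {"bias metrics recorded"})                        # blocked
gate("training", {"bias metrics recorded", "model card drafted"})  # approved
```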
