What is AI risk and how to manage it

Understanding AI Risk and how IBM’s AI Governance can help manage and mitigate the risk

Siddhi Gowaikar
4 min read · Feb 28, 2023

Artificial Intelligence (AI) applications have become mainstream, from general-purpose tools like ChatGPT to niche applications in domains such as healthcare and finance. Organizations deploying AI have a responsibility to ensure that their models are explainable, responsible, trustworthy, and free from harmful bias. As AI adoption has grown, so has its potential for significant implications for society, both positive and negative. It is therefore imperative to have processes, tools, and practices in place to manage AI risks and avoid adverse effects arising from its use.

Organizations need to identify and manage the risks of their AI applications effectively.

The National Institute of Standards and Technology (NIST) in the U.S. recently published a framework for AI risk management: the Artificial Intelligence Risk Management Framework (AI RMF). Risk arises throughout the AI lifecycle, and the framework helps the various stakeholders in that lifecycle, such as data scientists, model developers, model validators, and even C-suite executives, influence how this risk is managed.

What is AI Risk and AI Trustworthiness?

The AI RMF outlines some characteristics of a trustworthy AI system:

  • valid and reliable — how accurate and robust is the AI system
  • safe — does it endanger any human life or the environment
  • secure and resilient — can it withstand adverse events
  • accountable and transparent — does it provide information about the AI system and its outputs, such as training data, model structure, and decisions taken during and after deployment
  • explainable and interpretable — can it provide information on how and why a decision was made by the AI model or system
  • privacy enhanced — does it provide norms and practices to help safeguard human autonomy, identity, and dignity
  • fair with harmful bias managed — can it identify systematic, computational or statistical, and human-cognitive bias and provide ways to address concerns for equality and equity

AI risk arises when these characteristics fall short of certain standards. The characteristics are interdependent, and trade-offs among them need to be identified by all stakeholders based on the specific use case. This helps an organization customize the framework for its needs.

How to manage AI Risk?

The AI RMF Core describes four functions that guide organizations in managing AI risks and developing trustworthy, responsible AI systems. The four functions of the AI RMF Core are:

  • Govern: provides guidelines on the structures, processes, and other activities needed to anticipate, identify, and manage risk
  • Map: enhances the organization’s ability to identify risks so it can make informed decisions about whether or not to proceed with the design, development, or deployment of an AI system
  • Measure: AI systems should be tested before deployment as well as while in production; this includes monitoring metrics for trustworthiness and other applicable measures
  • Manage: organizations can manage AI risks with risk treatment plans to respond to, mitigate, and communicate about incidents
AI RMF Core — Image credits: NIST — AI RMF Playbook
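
To make the Measure function more concrete, one widely used production-monitoring metric is the Population Stability Index (PSI), which quantifies how far the distribution of model scores in production has drifted from the distribution seen at validation time. The sketch below is illustrative only; PSI and its rule-of-thumb thresholds come from general industry practice, not from the AI RMF itself, and the data here is synthetic.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Population Stability Index (PSI): compares the distribution of
    model scores seen at validation time against scores in production.
    Bin edges are quantiles of the reference (expected) distribution."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    e_pct = np.histogram(expected, edges)[0] / len(expected)
    # Clip production scores into the reference range so none fall outside the bins
    a_pct = np.histogram(np.clip(actual, edges[0], edges[-1]), edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0) for empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.5, 0.1, 5000)  # reference scores at validation
prod_scores = rng.normal(0.6, 0.1, 5000)   # production scores have drifted upward
psi = population_stability_index(train_scores, prod_scores)
# Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 major shift
print(f"PSI = {psi:.3f}")
```

In a governance workflow, a PSI crossing the alert threshold would typically trigger an incident for the Manage function to act on, such as retraining or rolling back the model.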

The AI RMF also provides use-case profiles for specific applications, outlining how the AI RMF Core functions apply in each context.

AI risk management and governance practices depend largely on the organization and its stakeholders. IBM AI Governance helps set up the processes to address and implement the various risk management activities. Using IBM AI Governance, organizations can:

  • ensure model risk governance by setting up workflows for the entire AI lifecycle
  • perform risk assessment to map and identify risks to the AI system
  • track and monitor various metrics and metadata of the model to measure its trustworthiness characteristics, such as fairness, explainability, model quality, and model drift; automatically capture metadata and generate fact-sheet documents for added accountability and transparency throughout the AI lifecycle
  • manage risks and incidents that arise by receiving alerts and taking appropriate actions to remediate issues in the AI system
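
As an illustration of the kind of fairness metric such tooling monitors, the sketch below computes the disparate impact ratio on toy loan-approval outputs. This is a generic example, not IBM AI Governance’s API; the function, groups, and data are all hypothetical.

```python
def disparate_impact(predictions, groups, privileged, favorable=1):
    """Disparate impact ratio: the rate of favorable outcomes for the
    unprivileged group divided by the rate for the privileged group.
    Values near 1.0 suggest parity; below ~0.8 is a common warning threshold."""
    priv = [p for p, g in zip(predictions, groups) if g == privileged]
    unpriv = [p for p, g in zip(predictions, groups) if g != privileged]
    priv_rate = sum(p == favorable for p in priv) / len(priv)
    unpriv_rate = sum(p == favorable for p in unpriv) / len(unpriv)
    return unpriv_rate / priv_rate

# Toy loan-approval predictions: 1 = approved, 0 = denied
preds  = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
ratio = disparate_impact(preds, groups, privileged="A")
print(f"disparate impact = {ratio:.2f}")  # 0.2 / 0.8 = 0.25, well below 0.8
```

A ratio this far below the common 0.8 threshold would be flagged as potential harmful bias, prompting investigation before (or after) deployment.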

If you want to learn more about IBM AI Governance, want to try a demo, or discuss further with an IBM rep, join us at:

Content on NIST’s AI RMF sourced from the AI RMF Playbook by National Institute of Standards and Technology

#AIGovernance #responsibleAI #explainableAI #trustworthyAI #AIRisk #IBM #IBMWatson
