AI Governance and Risks

Rachana JG
3 min read · Oct 6, 2022


DEFINITION

AI governance is the idea that a legal framework should ensure machine learning (ML) technologies are well researched and developed, with the goal of helping humanity adopt AI systems fairly while reducing or eliminating their risks.

AI GOVERNANCE — MANAGING RISKS

THREE LINES OF DEFENCE
  1. CONTROL: Give users some control over the decisions algorithms make for or about them. Keep a human in the loop, and give that human control over the model (a minimal sketch follows the example below).

Example: In 2015, Facebook released mixed-style newsfeed controls.

  • Allowed users to decide whether they wanted more or less of particular content (relationship statuses, profile changes, specific friends, etc.)
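
To make this concrete, here is a minimal Python sketch of a human-in-the-loop flow, where the model only proposes and a person makes the final call. The names here (Recommendation, ReviewQueue, the post IDs) are hypothetical illustrations, not Facebook’s actual design.

```python
# Minimal human-in-the-loop sketch: the model proposes, a person decides.
# All names are illustrative; this is not any real platform's API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    item_id: str
    score: float                     # model's relevance score for this item
    approved: Optional[bool] = None  # None = still awaiting a human decision

class ReviewQueue:
    """Holds model recommendations until a human accepts or rejects them."""
    def __init__(self) -> None:
        self.pending: list[Recommendation] = []

    def propose(self, rec: Recommendation) -> None:
        self.pending.append(rec)

    def human_decide(self, item_id: str, approve: bool) -> None:
        for rec in self.pending:
            if rec.item_id == item_id:
                rec.approved = approve  # the human, not the model, decides

queue = ReviewQueue()
queue.propose(Recommendation("post-123", score=0.92))
queue.human_decide("post-123", approve=False)  # the user opts out of this content
```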
User Control Experiment

In an experiment that gave different groups of users different levels of control over a model, the results were:

  • Users with no control had low trust
  • Users with even a little control had much higher trust
  • Trust stayed high whether users had a little control or a lot

2. TRANSPARENCY: Users can also be uncomfortable with decisions made by algorithms when they feel they don’t understand those decisions. Giving them just enough insight into how the model works helps.

Transparency Study

In one study, users given a limited amount of transparency showed the greatest trust, more than users given either no explanation or a very extensive one.

Calibrated Transparency (Limited Transparency) includes:

  • Was an algorithm used to make a decision?
  • What kinds of data are used?
  • What variables are considered?
  • Global and local interpretability
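
As a rough illustration of that last point, the sketch below shows global and local interpretability for a simple linear model, assuming scikit-learn and hypothetical loan features; real audits would use richer tools such as SHAP or LIME, but the idea is the same.

```python
# Sketch of calibrated transparency for a linear model (assumed for simplicity).
# Global view: which features matter overall. Local view: why THIS decision.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income_10k", "debt_ratio", "age"]  # hypothetical variables
X = np.array([[5.0, 0.3, 40],
              [2.0, 0.8, 25],
              [9.0, 0.1, 55],
              [3.0, 0.6, 30]])
y = np.array([1, 0, 1, 0])  # 1 = approved, 0 = denied

model = LogisticRegression(max_iter=1000).fit(X, y)

# Global interpretability: coefficient size ~ overall influence of each feature.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"global weight of {name}: {coef:+.4f}")

# Local interpretability: per-feature contribution to one applicant's score.
applicant = X[1]
for name, contrib in zip(feature_names, model.coef_[0] * applicant):
    print(f"{name} contributed {contrib:+.2f} to this applicant's decision")
```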

3. AUDITS: Because of the risks automated decisions bring, ML should be designated as a distinct model type with its own governance frameworks.

An audit process would begin with creating an inventory of all machine learning models in use at a company, recording for each:
- The specific uses of the model
- The names of the model’s developers and business owners
- Risk ratings of the social and financial harm if the model fails
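
As a minimal sketch, assuming Python dataclasses, one inventory record might look like the following; the field names are illustrative, not a standard schema.

```python
# Hypothetical model-inventory record an audit could start from.
from dataclasses import dataclass
from enum import Enum

class Risk(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"  # triggers the deeper audit described below

@dataclass
class ModelRecord:
    name: str
    use_case: str         # the specific use of the model
    developer: str        # who built it
    business_owner: str   # who is accountable for it
    social_risk: Risk     # harm to people if the model fails
    financial_risk: Risk  # monetary exposure if the model fails

inventory = [
    ModelRecord("loan-approval-v3", "consumer credit decisions",
                "jane.doe", "lending-ops", Risk.HIGH, Risk.HIGH),
]

# An audit would prioritize the high-risk entries:
high_risk = [m for m in inventory
             if Risk.HIGH in (m.social_risk, m.financial_risk)]
```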

For high-risk models, the audit should look at three areas in particular: bias, interpretability, and continuous retraining, since these models keep updating as they receive more data.
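
As one concrete example of such a check, the sketch below computes a demographic parity gap, the difference in approval rates between two groups; the 0.1 tolerance is an assumed policy choice, not something the sources prescribe.

```python
# Sketch of one bias-audit check: demographic parity difference.
import numpy as np

predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # model decisions (1 = approve)
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])  # protected attribute

rate_a = predictions[group == "A"].mean()
rate_b = predictions[group == "B"].mean()
gap = abs(rate_a - rate_b)

print(f"approval rate A={rate_a:.2f}, B={rate_b:.2f}, gap={gap:.2f}")
if gap > 0.1:  # assumed tolerance; a real threshold is a policy decision
    print("flag this model for review")
```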

SUMMARY

  1. The three lines of defence for managing risk are Control, Transparency, and Audit, all applied before model deployment.
  2. People involved in the process include:
    - Control: developers, who keep control over the model
    - Transparency: a Data Science Quality Analyst, who ensures calibrated transparency
    - Audit: a Data Science Auditor, for high-stakes models

REFERENCES

All the content in this article is drawn from the references below:

  1. “A Human’s Guide to Machine Intelligence” by Kartik Hosanagar
  2. https://knowledge.wharton.upenn.edu
  3. https://www.techtarget.com

WHAT NEXT

Did you like this article?
If so, follow me for future updates, and leave your likes and comments.
