Guide to detect and prevent AI discrimination launched

Michael Puntschuh
Ethical AI Resources
4 min read · Nov 20, 2023


by Helga M. Brøgger and Michael Puntschuh

Kathinka Theodore Aakenes Vik from LDO presents the new guidelines

Discrimination is among the biggest risks when artificial intelligence is used to make decisions about individuals — for example about who should receive social security benefits, health care or credit. Yet while the use of AI is increasing, surveys show that awareness and competence regarding discrimination remain low in both the public and private sectors.

“It’s about framing the use of this technology according to the values we hold dear”

Karianne Tung, Minister for Digitalisation and Governance, Arbeiderpartiet

The Norwegian Equality and Anti-Discrimination Ombud (LDO) envisions a society where power and influence are equally distributed, freedom is available to all, and dignity is inherent to each individual. To help organizations that use or build AI systems achieve this goal, the Ombud recently launched a guide on how development teams can assess the risk of discrimination in AI systems.

“Law and technology are far too important to be left solely to lawyers and technologists”

Bjørn Erik Thon, Equality and Anti-Discrimination Ombud

The guide is currently available only in Norwegian, but LDO has announced that an English translation is coming soon.

Kathinka Theodore Aakenes Vik and her brilliant team at Likestillings- og diskrimineringsombudet are more than able to navigate the landscape between technology and law in an exemplary manner, giving sound advice on the development and use of AI and thus framing the use of this technology according to the values we hold dear.

During the launch event, Kathinka Theodore Aakenes Vik gave a quick introduction to the regulatory situation on discrimination as it relates to AI:

All people should have the same rights. Individuals should be assessed on an individual basis, not based on their group characteristics. When AI systems are used to make decisions about people, that’s exactly what happens — decisions are made based on group characteristics. This can be a problem under anti-discrimination laws. Discrimination is illegal differential treatment related to one or more grounds for discrimination.

Three central questions determine whether discrimination has taken place:

  1. Does the AI system have a function that results in differential treatment?
  2. Is the differential treatment related to a ground for discrimination (protected group)?
  3. Is the differential treatment unjustified?

Protected groups under Norwegian anti-discrimination laws. Source: LDO (2013), Innebygd diskrimineringsvern, own translation

The guideline is a great tool that enables organisations to work actively to prevent discrimination when developing or using AI tools. It describes typical challenges in each phase of the development and use of artificial intelligence systems. To address these challenges, it provides relevant questions that those responsible for development need to answer explicitly. Real-life examples further illustrate the issues. Overall, the guideline helps businesses identify risks that are not easy to see at first glance.

The guideline is structured into the following phases of AI development and deployment:

Planning

  • Define purpose and intended use
  • Map consequences at the individual and societal levels
  • Involve affected groups

Training data

  • Is the data representative in relation to the model’s purpose? (a simple check is sketched below)
  • What are the consequences of inadequate representation?
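
What a representativeness check looks like in practice depends entirely on the system and the data at hand. Purely as an illustration, the minimal sketch below assumes a pandas DataFrame of training data with a hypothetical gender column and reference population shares supplied by the team; it flags groups whose share in the data falls clearly below their share in the population.

```python
# Minimal sketch, not the guide's method: compare group shares in the training
# data with reference population shares and flag under-represented groups.
# The column name "gender" and the reference shares are illustrative only.
import pandas as pd

def representation_gap(train_df: pd.DataFrame,
                       column: str,
                       reference_shares: dict[str, float],
                       tolerance: float = 0.05) -> pd.DataFrame:
    """Return each group's share in the data next to its share in the population."""
    observed = train_df[column].value_counts(normalize=True)
    rows = []
    for group, expected in reference_shares.items():
        share = float(observed.get(group, 0.0))
        rows.append({
            "group": group,
            "share_in_data": share,
            "share_in_population": expected,
            "under_represented": share < expected - tolerance,
        })
    return pd.DataFrame(rows)

# Example with made-up numbers:
train = pd.DataFrame({"gender": ["female"] * 30 + ["male"] * 70})
print(representation_gap(train, "gender", {"female": 0.5, "male": 0.5}))
```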

Model development

  • To what extent does the model’s calculation correlate with the overall purpose?
  • On what variables does the model’s calculation rely, and why are these relevant? (one way to probe this is sketched below)
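
Which variables a model actually relies on is not always obvious from how it was built. Again only as an illustration, and assuming a fitted scikit-learn model with a held-out validation set, the sketch below uses permutation importance to make that reliance visible; the toy data merely stands in for a real case-handling dataset.

```python
# Minimal sketch, assuming a scikit-learn model: permutation importance measures
# how much performance drops when each variable is shuffled, i.e. how strongly
# the model's calculation relies on it. The data below is synthetic.
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Toy data standing in for a real dataset with named variables.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_val, y_val, n_repeats=10, random_state=0)

# Rank the variables by how much shuffling them hurts the model.
print(pd.Series(result.importances_mean, index=feature_names).sort_values(ascending=False))
```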

Testing

  • How does the model perform against the success criteria? (a per-group check is sketched below)
  • How are representatives of the affected groups involved in the testing phase?
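
Again purely as a sketch, the example below assumes that the test set contains the true label, the model’s prediction and a group label for each row; it reports accuracy and false positive rate per group so that large gaps between groups surface during testing rather than in production. All column names are made up.

```python
# Minimal sketch, not the guide's method: report simple performance metrics per
# group so that gaps between groups become visible during testing.
import pandas as pd

def per_group_metrics(df: pd.DataFrame,
                      group_col: str = "group",
                      y_true: str = "y_true",
                      y_pred: str = "y_pred") -> pd.DataFrame:
    rows = []
    for group, part in df.groupby(group_col):
        negatives = part[part[y_true] == 0]
        rows.append({
            "group": group,
            "n": len(part),
            "accuracy": (part[y_true] == part[y_pred]).mean(),
            # False positive rate: share of actual negatives wrongly flagged.
            "false_positive_rate": (negatives[y_pred] == 1).mean() if len(negatives) else float("nan"),
        })
    return pd.DataFrame(rows)

# Example with made-up labels and predictions:
test = pd.DataFrame({
    "group":  ["a", "a", "a", "b", "b", "b"],
    "y_true": [0, 1, 0, 0, 1, 0],
    "y_pred": [0, 1, 1, 0, 0, 0],
})
print(per_group_metrics(test))
```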

Implementation and oversight

  • Can discriminatory calculations performed by the system be compensated for?
  • How is real human review ensured?
  • How are the interests of those affected by the system taken into account?

Thank you to Inga Strümke (Norwegian University of Science and Technology, NTNU), Hanne Pernille Gulbrandsen (Deloitte), Robindra Prabhu (NAV), Kathy Lie (Sosialistisk Venstreparti) and Alfred Bjørlo (Venstre, Norges Liberale Parti) for good reflections and discussions during the launch of the guide!

I, Helga, was also delighted to have had the opportunity to participate in this important and rewarding work and to advise on the development of this guide, together with Inga Strümke, Iris Bore, Rita Gyland, Jacob Sjødin (NAV), Dag Elgesem (University of Bergen), Vera Sofie Borgen Skjetne (Bufdir, the Norwegian Directorate for Children, Youth and Family Affairs) and DNV.
