Implementing IAMA, a step towards Responsible AI

Roxanne Boehlé
Sopra Steria NL Data & AI
6 min read · Aug 11, 2023

Read this article in Dutch

In the emerging digital world of algorithms and artificial intelligence, the boundaries of our human rights are constantly being tested. The European Union is adopting legislation, the EU AI Act, to address the risks of AI. The act includes a number of measures to improve the safety and reliability of AI systems, including the obligation to carry out impact assessments that take into account the impact of AI systems on human rights. The EU AI Act is a major step forward in regulating AI, and the need for Responsible AI expertise is rising with it. But how do you approach building responsible AI systems that comply with the upcoming legislation? IAMA helps you understand the impact of AI systems on our fundamental rights.

Photo by Jordan McDonald on Unsplash

In this blog, we shine a spotlight on the Dutch ‘Impact Assessment Mensenrechten en Algoritmes’ (IAMA) methodology (‘Human Rights and Algorithm Impact Assessment’). The IAMA methodology can be used to carry out impact assessments in line with the EU AI Act: it helps to identify and mitigate the risks of AI and to ensure that AI systems are used in a safe, ethical, and just manner. Beyond identifying risks, it lets you weigh the added value of an AI system against those risks to gain perspective on proportionality. That makes it a crucial tool for evaluating the impact of algorithms on privacy, non-discrimination, freedom of expression, and other rights. However, the IAMA process can be challenging to implement in practice. Below, we explore the IAMA and the potential challenges to overcome.

Tell me more about IAMA!

IAMA is a tool for discussion and decision-making, originally made for Dutch government agencies. At its core, it is a predefined list of questions that should be answered before, during, and after the development of an algorithm in an organization. It enables an interdisciplinary dialogue between those responsible for the development and deployment of an algorithmic system. The methodology is not limited to governments, though: it can be used by any organization that develops or uses algorithms, including businesses, nonprofits, and academic institutions. The best way to cover all bases is to have a diverse team with different areas of expertise discuss the aspects of the algorithm implementation.

The IAMA framework consists of four phases:

  1. Why? This phase investigates the reasons why an algorithm is developed or deployed. The team must think about the goals of the algorithm, the needs of the users, and the potential impact on human rights.
  2. What? This phase investigates the features of the algorithm. The team must think about the data that the algorithm uses, the way the algorithm works, and the potential outcomes.
  3. How? This phase investigates the way in which the algorithm is deployed. The team must think about the procedures and processes that are used to develop, test, and implement the algorithm.
  4. Human rights. This phase investigates the impact of the algorithm on human rights. The team must think about the potential risks to human rights, the measures to mitigate these risks, and the way in which human rights are protected.
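
To make the structure concrete, here is a minimal sketch in Python of the four phases as a checklist a team could use to track which questions have been discussed and by whom. The phase names follow IAMA; the example questions, class names, and the sign-off idea are our own illustrative assumptions, not the official question list.

```python
from dataclasses import dataclass, field

# Minimal sketch: IAMA as a structured checklist. The four phase names follow
# the IAMA framework; the sample questions and the "discussed_by" sign-off
# mechanism are illustrative assumptions, not the official question list.

@dataclass
class Question:
    text: str
    answer: str = ""
    discussed_by: list[str] = field(default_factory=list)  # roles present in the discussion

    @property
    def answered(self) -> bool:
        return bool(self.answer) and bool(self.discussed_by)

@dataclass
class Phase:
    name: str
    questions: list[Question]

    def open_questions(self) -> list[Question]:
        return [q for q in self.questions if not q.answered]

iama = [
    Phase("Why?", [Question("What goal does the algorithm serve?"),
                   Question("What are the needs of the users?")]),
    Phase("What?", [Question("Which data does the algorithm use?"),
                    Question("How does the algorithm reach its outcomes?")]),
    Phase("How?", [Question("How is the algorithm developed, tested, and implemented?")]),
    Phase("Human rights", [Question("Which human rights could be affected, and how are risks mitigated?")]),
]

# A simple progress check before sign-off:
for phase in iama:
    for q in phase.open_questions():
        print(f"[{phase.name}] still open: {q.text}")
```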

During each phase, an expert and the other parties discuss the proposed questions, and there needs to be a balance between stakeholders. For example, in phases 2 and 3, a developer or data scientist can tell a lot about the technical implementation and will act as the main source of information during the discussion, while in phase 4, a lawyer has the right knowledge to share with the other stakeholders and start the discussion. Through the IAMA, each role is asked to contribute its knowledge, and because all of this knowledge comes together in the discussions, the team can properly test whether an algorithm is meaningful and ethically responsible.

Great! What challenges can I encounter?

Implementing a new framework always has startup costs and often faces the usual challenges, such as:

  • Time: The IAMA process can be time-consuming, especially if you are not familiar with the framework. It is important to factor in the time required to carry out the IAMA when planning your project.
  • Knowledge: The IAMA process requires knowledge of AI, human rights, and the IAMA process itself. It is important to make sure that your team has the required knowledge to carry out the IAMA process.
  • Resources: The IAMA process can be resource-intensive, especially if you need to hire external consultants or experts. It is important to make sure that you have the resources available to carry out the IAMA process.

More specific problems that you may encounter when implementing the IAMA framework include:

  • Determining the completeness of the answers: All questions discussed in the IAMA must be answered, but the framework does not specify what a good answer looks like. An IAMA can be considered “good enough” if it includes a thorough and comprehensive analysis of the potential impact of the algorithm on human rights: it must identify and assess the relevant aspects and consequences, including potential risks and negative impacts. Additionally, all relevant actors must be aware of the answers and have thought them through.
  • Defining algorithms in context: You need to define which algorithms must go through the IAMA process at your organization. With self-learning algorithms, this is often clear: you input data and get predictions based on that input. Non-self-learning algorithms, however, can be more difficult to scope. Strictly speaking, an Excel formula is already an algorithm, and you don’t want to fill out an entire document for every simple query that processes non-personal data. It is therefore important that your organization agrees on what counts as an algorithm, and when one poses enough risk to human rights to warrant an IAMA (see the first sketch after this list).
  • Finding the right process to implement the IAMA framework: When you start filling out the IAMA for an algorithm, you will find that some questions get the same answer for every algorithm in your organization. For example, if you always develop algorithms under the same security conditions, you can reuse that answer. Such standard questions can be dealt with once, so you do not need to have the same discussion every time. It is also important to decide, within your current workflow for developing an algorithm, at which moments to have the right discussions with the right actors. In the beginning this will take some extra time, but in the long term it will go more smoothly and faster. The diverse stakeholders involved will learn along the way how to fill their role, so they can contribute to a more balanced and holistic evaluation.
  • Documentation and version control: Once the IAMA is filled out, you can store and process the results in different ways. Is filling out the IAMA only intended for implementation and auditing, or does the organization also find it important that the results can be consulted regularly? It is easy to imagine that users want to be able to look up how the algorithm works and what its risks are, so it is recommended to publish the results in a wiki or algorithm register. Algorithms are also commonly updated over time, for example through MLOps pipelines. Does the IAMA need to be updated along with them? How do you reflect that in the documentation, and can you show the impact of the algorithm over time? A versioned record format, as in the second sketch below, can help answer these questions.
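
To support the scoping agreement mentioned above, a lightweight triage step can encode when a system must go through the full IAMA. The sketch below is a hypothetical example: the criteria (self-learning behaviour, personal data, impact on individuals) are assumptions chosen to illustrate the idea, not official IAMA scoping rules.

```python
from dataclasses import dataclass

# Hypothetical triage: decides whether a system needs a full IAMA.
# The criteria below are illustrative assumptions, not official IAMA rules;
# replace them with the agreement your own organization has made.

@dataclass
class SystemProfile:
    name: str
    self_learning: bool          # e.g. a trained ML model
    uses_personal_data: bool
    affects_individuals: bool    # outcomes influence decisions about people

def needs_full_iama(system: SystemProfile) -> bool:
    # Self-learning systems are in scope by default in this sketch.
    if system.self_learning:
        return True
    # Rule-based systems (even an Excel formula) only when they touch
    # personal data and can affect decisions about individuals.
    return system.uses_personal_data and system.affects_individuals

print(needs_full_iama(SystemProfile("churn model", True, True, True)))      # True
print(needs_full_iama(SystemProfile("report query", False, False, False)))  # False
```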
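
For the documentation and version-control challenge, one option is to keep the IAMA results as versioned records in an algorithm register, with references to organization-wide standard answers so they are not rewritten for every algorithm. The record layout below is a sketch of one possible format; the field names and example values are ours, not a prescribed schema.

```python
# Sketch of a versioned IAMA record for an algorithm register.
# Field names, example values, and the idea of referencing shared standard
# answers are assumptions about one possible layout, not a prescribed schema.

record = {
    "algorithm": "churn model",
    "iama_versions": [
        {
            "version": "1.0",
            "date": "2023-05-01",
            "summary": "Initial assessment before deployment.",
            "standard_answers": ["security-baseline"],  # reused org-wide answers
            "risks": ["indirect discrimination via proxy features"],
        },
        {
            "version": "1.1",
            "date": "2023-08-01",
            "summary": "Reassessment after retraining on new data.",
            "standard_answers": ["security-baseline"],
            "risks": ["distribution shift in training data"],
        },
    ],
}

# The version history makes the impact of the algorithm traceable over time.
latest = record["iama_versions"][-1]
print(f"{record['algorithm']}: IAMA v{latest['version']} ({latest['date']})")
```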

Within Ordina, we focus on helping customers solve business problems using data science. Implementing IAMA in your current work processes can be challenging and time-consuming, but it will be worth the invested effort: your organization will become more ethically aware of the impact of its work.

Interested in the topic? Let’s discuss!
Leave us a comment and/or follow us on Medium.
