Navigating Muddy Waters: A Framework for Ethical AI Governance Inspired by the EU AI Act

Raul Vizcarra Chirinos
LatinXinAI
Mar 7, 2023


Have you ever felt like you’re swimming in murky waters when it comes to understanding AI? There is an old Chinese proverb that says, “Muddy waters make it easy to catch fish.” It means that in a context of opacity and lack of clarity, there is fertile ground for opportunistic or self-serving behavior. While some of us struggle to understand the “black boxes” that affect our daily lives, others see that opacity in AI as a call for more transparency and regulation.

Balancing innovation and the development of AI with ethical and societal considerations always raises the classic chicken-and-egg dilemma: do we require legal enforcement in the form of obligations, requirements, and metrics for transparent, explainable, and accountable AI-based solutions? Or is it possible to achieve Explainable AI as an outcome of companies’ self-regulation initiatives?

AI regulations should not be the sole reason for removing the opacity that characterizes some AI-based solutions today. Perhaps I am naïve or biased after working in the private sector for so many years, but I believe that self-regulation initiatives within companies can play a role in AI governance models and help prevent societal harms (we’ll explore later why addressing societal harms is more complex than addressing individual harms).

AI governance frameworks, and the incorporation of ethical considerations into them, are not new concepts (e.g., Deloitte, Rick Qiu, Siau/Wang 2020, to name a few). However, some AI governance frameworks focus purely on technical approaches (design, testing, deployment), while others only provide a set of ethical definitions and values with limited guidance for actual deployment. In this article, we will discuss a self-regulated framework that incorporates certain tools (I call them instruments) to help facilitate ethical and responsible AI development. For this, we will use some insights from the EU Regulation on Artificial Intelligence (the “AI Act”), which could become law in 2023. But before we dive into those insights, it’s essential to understand the intersection of law and ethics and why that understanding is crucial for building an effective Ethics AI Framework.

The Intersection of Law and Ethics

Most of us agree that everyone on this planet deserves a rightful place in society where they can contribute and be respected. To achieve peaceful coexistence, we have established a set of rules and norms that govern our behavior in society. Some of these rules are enforced by law, while others are defined by moral behavior and ethics. However, both law and ethics can be complex, because within each discipline there are different schools of thought on how to define what is right or wrong. This complexity becomes especially important when developing any Ethics AI Framework. The approach we take in understanding how law and ethics work will define the framework that guides the development and use of our AI-based solutions.

In legal theory, Formalism and Realism are two of the main schools of thought. Formalism focuses on resolving disputes by defining the terms of legal rules (Huhn, 2003); legal reasoning is driven by logic, and what matters is what the law says, not what it could or should say, or for whom. Realism, on the other hand, seeks to fulfill the values that the law is intended to serve (Huhn, 2003) and takes the context and the human dimension into account. Judges consider not only what the law says but also social interests and public policy when making decisions (Cornell Law School).

Let us now turn to ethics and explore two distinct schools of thought: Deontology and Teleology. The Deontological school focuses on actions and rules (somewhat similar to Formalism); it establishes rules, and only actions that comply with those mandates are considered moral or ethical (certain things are always right, while others are always wrong). The Teleological school, by contrast, is concerned with the final effect rather than a set of strict rules. Here the phrase “the end justifies the means” applies easily: actions can be moral or immoral depending on the circumstances.

Why is this important when designing or deploying AI-based solutions? Let’s look at Figure 01 and see how Law and Ethics intersect.

Figure 01. Source: Author’s own creation

Would you define your Ethics AI Framework as a single set of rules for each action regardless of context? Or, would you consider actions/consequences, the human component and the context?

As you can see, a strictly definition-, norms-, rules-, and values-oriented approach could steer our framework toward a “one size fits all” model and may lead to biases against groups or actions that do not fit that set of rules. On the other hand, a more human-centric approach, one that treats context and the effects of actions as predominant in defining our “rights and wrongs,” could lead to a framework so abstract that it leaves room for generalization and ambiguity, ultimately leaving it up to development teams to define these policies based on their own judgment, values, and understanding of right and wrong, potentially incorporating their own biases as well.

From Individual Harm to Societal Harm

Now that we have discussed what to consider when defining our rights, wrongs, rules, and the moral values around them, let us turn our attention to consequences. Decisions can have both positive and negative effects, some of which may harm us or others, either directly or without our even realizing it.

In 2021, Nathalie A. Smuha published a paper that distinguished three types of harm that can arise from AI: individual harm, collective harm, and societal harm. She emphasizes the importance of shifting from an individual approach to a societal approach when creating measures or legal instruments to prevent the negative effects of AI-based solutions. For each type of harm, Smuha established the following definitions:

· Individual harm occurs when one or more interests of an individual are wrongfully thwarted. For example, a biased AI Credit scoring system leads to wrongful discrimination against a woman.

· Collective harm occurs when one or more interests of a collective or group of individuals are wrongfully thwarted. Following the previous example, the biased AI Credit scoring system thwarts the interests of a collective of people: women.

· Societal harm occurs when one or more interests of society are wrongfully thwarted. It concerns harm to an interest held by society as a whole (above the sum of individual interests). In our example, the societal interest in jeopardy is equality.

So why should we prioritize a societal-harm approach when creating an Ethics AI Framework? According to Smuha, we underestimate the way in which data obtained from individual A can subsequently be used to target individual B (data’s relationality). In our example of the biased AI credit scoring system, even if we obtain consent from those whose data is collected (as required by the GDPR) and fix any bias in the system (addressing individual harm), we still leave many people unprotected from potential indirect harm caused by the data gathered. By considering societal harm, we can better address the potential negative impact of AI-based solutions on society as a whole, including both users and those who have never used the AI-based solutions.

The EU Regulation on Artificial Intelligence (“AI Act”)

It’s not certain that the European Union’s (EU) Regulation on Artificial Intelligence (the “AI Act”) will see the light of day by the end of 2023, but if it does (I’m saying if), we can expect two effects that concern those of us who live outside the European Union. First, it could prompt other countries to take up the debate on AI regulation. Second, it might become a model for other countries to follow, as the General Data Protection Regulation (GDPR) did.

Published in April 2021, the EU AI Act is the result of a long journey that dates back several years. A complete timeline of this journey can be found on this section of the European Commission’s website. To fully grasp the scope of the Act, be prepared to spend a “nice weekend” reading through its 108 pages, annexes, impact studies, and reviews. It’s not exactly light reading material, and you may need a few coffee breaks to stay awake. If you’re looking for a condensed version, “A (more) visual guide to the proposed EU Artificial Intelligence Act” by Nikita Lukianets provides a handy summary. In Figure 02, I tried to break down the AI Act into its various components and distill it into a “single sheet of paper” format.

Figure 02. Source: EU AI Act

Here are some key components of the EU AI Act that could be part of an ethical AI governance framework:

01) Definitions: Lack of clarity leads to uncertainty. Defining concepts like AI, Risk, Harm, and High Risk is crucial, as they will determine the scale and prioritization of supervision actions and the direction of your framework.

02) Risk-Based Approach: Although there is no consensus on the approach yet, it is a starting point. Again, the main element lies in the definitions and in recognizing harms not just from an individual but also a societal perspective (a minimal code sketch of such a classification follows this list).

03) Risk Management System: If you liked the Risk-Based Approach, you may consider implementing a Risk Management System. This system consists of a group of iterative processes and instruments that aim to prevent or manage any potential harm throughout the entire lifecycle of an AI-based solution, particularly for those that are categorized as High-Risk based on the Risk-Based model you define.

04) Instruments: These are the sets of obligations, requirements, and good practices that the EU AI Act divides into different articles to ensure transparency, explainability, and accountability of any AI-based solution from design to market launch and monitoring.

05) Innovation Sandbox: The EU AI Act creates a space where innovative AI can be developed, tested, and validated, which is a positive step forward. Striking a balance between innovation and regulation will bring us closer to the kind of AI that we all hope will help improve our society.
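To make components 01 and 02 more concrete, here is a minimal Python sketch of how a definitions dictionary and a risk-based classification could be encoded. The risk tiers, attributes, and classification rules are my own illustrative assumptions, loosely inspired by the AI Act’s categories (prohibited practices, high risk, limited risk, minimal risk), not the Act’s actual legal tests.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    """Hypothetical risk tiers, loosely inspired by the AI Act's categories."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # develop under the full Risk Management System
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # code of conduct / ethics guidelines

@dataclass
class AISolution:
    name: str
    affects_fundamental_rights: bool  # e.g., credit scoring, hiring
    is_safety_component: bool         # e.g., embedded in machinery
    interacts_with_humans: bool       # e.g., chatbots

def classify(solution: AISolution) -> RiskTier:
    """Toy classification rule; a real rule set would come from the
    company's Definitions Dictionary, not be hard-coded like this."""
    if solution.affects_fundamental_rights or solution.is_safety_component:
        return RiskTier.HIGH
    if solution.interacts_with_humans:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify(AISolution("credit-scoring", True, False, False)))  # RiskTier.HIGH
```

The point of the sketch is that the definitions come first: change what counts as “affecting fundamental rights” and the whole supervision pipeline downstream changes with it.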

A self-regulated AI Governance framework (a work in progress)

In a world where the necessity and feasibility of establishing AI regulation is still being debated, companies can adopt a self-regulated framework to achieve transparent, explainable, and accountable AI. The EU AI Act is not perfect and work remains to be done, but based on its components, Figure 03 illustrates how companies can take proactive steps to ensure that their AI-based solutions are ethical and aligned with societal values. This effort can build trust with stakeholders and contribute to the responsible development and use of AI, without relying on external regulations or oversight.

Figure 03. Design by: Raul Vizcarra Chirinos

Here is a breakdown of the proposed framework, adapted to the AI development lifecycle:

Assessment: Before designing an AI-based solution, identify potential conflicts with your framework’s values, risks to society, and what can be developed under your code of conduct and ethics guidelines. Create an assessment checklist to classify potential risks.
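As a rough illustration of what such a checklist could look like in practice, here is a hypothetical sketch; the questions and tags are invented for this example, not taken from the EU AI Act.

```python
# Hypothetical pre-design assessment checklist; the questions and tags are
# invented for illustration, not taken from the EU AI Act.
ASSESSMENT_CHECKLIST = [
    ("Does the solution conflict with our AI Ethics & Values Guideline?", "values"),
    ("Could errors cause individual, collective, or societal harm?", "harm"),
    ("Does the use case fall outside our code of conduct?", "conduct"),
    ("Would outcomes be hard to explain to the people they affect?", "transparency"),
]

def run_assessment(answers: dict[str, bool]) -> list[str]:
    """Return the checklist dimensions flagged as potential risks
    (a 'yes' answer to any question raises a flag)."""
    return [tag for question, tag in ASSESSMENT_CHECKLIST if answers.get(tag, False)]

print(run_assessment({"harm": True, "transparency": True}))  # ['harm', 'transparency']
```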

Design: Use the risk-based assessment to create an AI portfolio, which identifies solutions to be developed under your High-Risk Management System (HRMS) and those that adhere to your code of conduct and ethics guidelines. For new AI solutions that do not fit the risk-based approach, create an internal sandbox environment for controlled testing (not for market deployment).
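Continuing the earlier sketch (reusing the hypothetical AISolution, classify, and RiskTier names), the portfolio routing might look like this; the track names and the fits_risk_model flag are assumptions for illustration.

```python
def route_to_portfolio(solution: AISolution, fits_risk_model: bool = True) -> str:
    """Place a solution in the AI portfolio based on the risk-based assessment.
    Novel use cases the risk model cannot yet cover go to an internal sandbox
    for controlled testing, not market deployment."""
    if not fits_risk_model:
        return "internal-sandbox"
    if classify(solution) is RiskTier.HIGH:
        return "high-risk-management-system"  # develop under the HRMS
    return "code-of-conduct-track"            # ethics guidelines apply

print(route_to_portfolio(AISolution("credit-scoring", True, False, False)))
# high-risk-management-system
```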

Development/Testing: Low-risk AI solutions will only need to meet information and transparency requirements, while high-risk AI solutions must comply with the HRMS requirements. The HRMS should include instruments such as procedures to collect and preserve technical documentation, traceability (logs, the teams involved in verifying results), data governance policies for the training, validation, and testing of models, as well as performance metrics (accuracy, robustness, etc.).
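One instrument that is straightforward to prototype is traceability. Below is a minimal, hypothetical sketch of a per-training-run log record; the field names and the JSONL file are my own choices for illustration, not an AI Act specification.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_training_run(model_name: str, dataset_path: str, metrics: dict,
                     reviewers: list[str], logfile: str = "hrms_trace.jsonl") -> None:
    """Append one traceability record per training run: when it happened,
    which data was used (fingerprinted, not copied), how the model performed,
    and which teams verified the results."""
    with open(dataset_path, "rb") as f:
        dataset_sha256 = hashlib.sha256(f.read()).hexdigest()
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "dataset_sha256": dataset_sha256,
        "metrics": metrics,        # e.g., {"accuracy": 0.91, "robustness": 0.87}
        "verified_by": reviewers,  # teams involved in verifying the results
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")
```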

Monitoring: Establish a post-market plan to collect and analyze data on your AI-based solution’s performance and changes over its lifetime. This will allow the company to evaluate continuous compliance with the Ethical AI Governance Framework. Also ensure transparency by providing end users with instructions, limitations, and access to performance information for the safe and effective use of the AI-based solution deployed to the market.
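A post-market plan can start as simply as comparing live performance against what was declared at launch. The sketch below is a hypothetical drift check; the tolerance value and the escalation path to the AI Council are assumptions of this example.

```python
def check_post_market_drift(baseline_accuracy: float,
                            live_accuracy: float,
                            tolerance: float = 0.05) -> bool:
    """Flag the solution for review if live performance drifts beyond the
    tolerance from the accuracy declared when it was deployed to market."""
    drifted = (baseline_accuracy - live_accuracy) > tolerance
    if drifted:
        print("Drift detected: escalate to the AI Council for review.")
    return drifted

# Example: 0.91 accuracy declared at launch, 0.84 observed in production
check_post_market_drift(0.91, 0.84)  # True -> triggers a review
```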

Keep in mind that the AI Ethics & Values Company Guideline, the Definitions Dictionary, and the Code of Conduct form the backbone of the governance framework. However, it’s also crucial to consider the impact of the different ethical approaches on the framework as a whole. As we’ve seen before, there are various schools of thought regarding ethics and law that can shape how compliance with a set of rules, norms, and values is enforced. For example, those who align with a Teleological or Realism perspective may prioritize human oversight throughout the lifecycle to ensure that AI-based solutions take context into account and avoid “one size fits all” approaches. Conversely, those who prioritize Deontological or Formalism approaches may prioritize clear definitions and boundaries. Finally, the Company’s AI Council will be the guardian of the AI governance framework — a multidisciplinary committee whose job is to encourage innovation through AI while ensuring that AI-based solutions deployed by the company are developed in a manner that avoids causing societal harm.

Of course, this exercise is a “work in progress” framework, and suggestions will always be welcome.

Final thought

“War is Over (If You Want It)” was embraced by John Lennon as part of his peace campaign in 1968–1969 against the Vietnam War, but it was also a call to action, a reminder that meaningful change is within our reach. Similarly, whether companies are preparing for future AI regulations or are simply committed to ethical AI, we can say: “Explainable AI is Possible (If You Want It).”

References

[1] Kai Zenner. Documents and Timelines: The Artificial Intelligence Act (part 3).

[2] European Parliamentary Research Service (EPRS). Artificial Intelligence Act (Briefing). January 2022.

[3] Lucilla Sioli. European approach to the regulation of artificial intelligence. CEPS webinar, 2021.

[4] Nikita Lukianets. A (more) visual guide to the proposed EU Artificial Intelligence Act. 2021.

[5] Nathalie A. Smuha (Faculty of Law, KU Leuven, Belgium). Beyond the individual: governing AI’s societal harm. 2021.

