“Ethics Guidelines for Trustworthy AI”: To Promote Human Dignity, Agency, and Flourishing
On 18 December 2018, the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG) published a draft version of the Ethics Guidelines for Trustworthy AI. They invited citizens, experts, and the general public to review this draft and to provide feedback via the European AI Alliance.
First, I would like to congratulate the AI HLEG on this document. It’s clear, it’s accessible, it’s thorough, and it’s practical. Let me sum up the things I find brilliant:
They use ‘Trustworthy’ as an overarching term. I think this is brilliant. No matter how you conceptualize AI (as ‘general AI’ or ‘narrow AI’, as ‘AI in autonomous systems’ or as ‘AI as a tool to advance human agency’), we can all relate to the need for AI that is worthy of our trust. You want trustworthy AI just as you want a trustworthy car, a trustworthy power drill, a trustworthy babysitter, or a trustworthy partner.
They explain the relationships between rights, principles, and values. Rights provide the “bedrock” for formulating ethical principles. And in order to uphold these principles, we need values. Moreover, we need to translate rights, principles, and values into requirements for developing AI systems. Putting rights, principles, and values into these relationships provides clarity, which is sorely needed for a constructive discussion of ethics. They discuss the following rights, principles, values, and requirements:
- Rights: respect for human dignity, freedom of the individual, respect for democracy, justice and the rule of law, equality, non-discrimination, and solidarity, including the rights of persons belonging to minorities, and citizens’ rights (see: Charter of Fundamental Rights of the EU).
- Principles: Beneficence (Do good); Non-maleficence (Do no harm); Autonomy (Preserve human agency); Justice (Be fair); and Explicability (Operate transparently) (from: AI4People — An Ethical Framework for a Good AI Society); the last one, Explicability, is relatively new and specific to AI.
- Values: The Guidelines “do not aim to provide yet another list of core values”, since there are many useful lists available, such as those from Asilomar, Montreal, IEEE, and the EGE (these lists are reviewed in AI4People — An Ethical Framework for a Good AI Society).
- Requirements: accountability; data governance; design for all; governance of AI autonomy (human oversight); non-discrimination; respect for (and enhancement of) human autonomy; respect for privacy; robustness; safety; and transparency.
They structure their guidance in three parts, from abstract to practical: Guidance for ensuring ethical purpose; Guidance for realizing trustworthy AI; and Guidance for assessing trustworthy AI. Such a structure is very useful, and much needed, during the design process (purpose), the implementation process (realizing), and the evaluation process (assessing). We need to move from the abstract to the practical, and back again, in iterative cycles.
I think this is brilliant: to introduce ‘Trustworthy’ as an overarching term; to explain the relationships between rights, principles, values, and requirements; and to provide guidance for iterating between design, implementation, and evaluation.
Now, following the AI HLEG’s invitation to provide feedback, here are some concerns and thoughts for further improving these guidelines:
Concern for Human Dignity
The AI HLEG asked for feedback on “Critical concerns raised by AI” (pp. 10–13). I would like to propose adding one concern: a concern for human dignity.
What do I mean by that? Well, you are familiar with the Turing Test. It evaluates whether a computer can give a performance that we recognize as human-like intelligence, so that we cannot distinguish the computer from a human. In a Turing Test, the computer’s aim is to behave like an intelligent person.
Now imagine a Reverse Turing Test. In such a test you, as a human being, aim to adapt to the computer and its algorithms. You fix your eyes on your mobile phone’s screen and you mindlessly click ‘okay’, ‘view next’, ‘buy’: you do whatever the algorithm tells you to do. In a Reverse Turing Test, your aim is to behave like a machine.
This concern is related to other concerns discussed by the AI HLEG: ‘Identification without Consent’ (when you mindlessly click ‘yes, I accept the terms and conditions’), ‘Covert AI Systems’ (when a system treats you in a mechanical manner, with machine logic), and ‘Normative and Mass Citizen Scoring’ (when a system gathers all sorts of personal data and uses these for all sorts of purposes, in non-transparent ways).
Implementing too many AI systems, in too many spheres of life, and using them too much, is a threat to human dignity.
This concern was discussed, e.g., by Brett Frischmann and Evan Selinger (Re-engineering Humanity, 2018: 175–183; I took the idea for a Reverse Turing Test from them), by Sherry Turkle, who reminded us of the value of genuine human contact, both intra-personal and interpersonal (Reclaiming Conversation, 2015), and by John Havens (Heartificial Intelligence, 2016), who advocated “embracing our humanity to maximize machines”: to design and use machines in ways that preserve and support human dignity.
Putting Human Agency First
Furthermore, I’d like to propose an improvement and clarification in the formulation of two of the ‘Requirements of Trustworthy AI’ (pp. 13–18).
The AI HLEG discusses “Governance of AI Autonomy (Human oversight)” and “Respect for (& Enhancement of) Human Autonomy”. My proposal is to merge these requirements into one requirement, under the heading of, e.g., “Appropriate Allocation of Agency”, or: “Putting Human Agency First”.
Both requirements (“Governance of AI Autonomy” and “Respect for Human Autonomy”) are about distributing agency between people and an AI system. Put simplistically:
- moral agency resides in people, not in machines;
- there are only 100 agency-percent-points to share (as it were);
- you can delegate some of these agency-points to a machine;
- but then you lose those points (as in a zero-sum game).
The agency of humans and the agency of an AI system are on one and the same axis: on one side of this axis, people have 90% of the autonomy and the AI system 10%; on the other side, the AI system has 90% of the autonomy and people 10%. The choice is ours, and we will need to decide carefully, taking into account the various pros and cons of delegating agency to machines.
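To make this zero-sum picture explicit, one could write it as a single constraint; this is my own illustrative sketch, not a formula from the Guidelines:

```latex
% My own illustrative sketch (not a formula from the Guidelines):
% agency is shared between the human and the AI system on a single axis.
\[
  A_{\mathrm{human}} + A_{\mathrm{AI}} = 100\%
  \quad\Longrightarrow\quad
  A_{\mathrm{human}} = 100\% - A_{\mathrm{AI}}
\]
```

In this sketch, delegating agency-points to the AI system (increasing A_AI) directly diminishes A_human, which is exactly the trade-off that a merged requirement should make visible.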
Merging these two requirements about autonomy is intended to clarify that human agency diminishes when we delegate agency to machines.
Underlying this intention is the belief that technology must never replace people or corrode human dignity. Rather, we need to put human agency first, and use technologies as tools. Here it needs to be acknowledged that tools are never neutral; the usage of any tool shapes the human experience and indeed the human condition (https://ppverbeek.wordpress.com/mediation-theory/). This requires careful decision-making, e.g., about the ways in which an AI tool gathers data, presents or visualizes conclusions, provides suggestions, and so on.
This idea is at the heart of the Capability Approach, which views technologies as tools to extend human capabilities: to create a just society in which people can flourish (see: Organizing Design-for-Wellbeing projects: Using the Capability Approach; copy for personal, academic use). This idea is also expressed in the “Statement on Artificial Intelligence, Robotics, and ‘Autonomous’ Systems” of the European Group on Ethics in Science and New Technologies, in which ‘Autonomous’ has quotation marks to indicate that a system cannot have moral autonomy. Finally, “appropriate allocation of function between users and technology” is explicitly mentioned as a principle in ISO 13407:1999, the standard for Human-centred design processes for interactive systems (the updated ISO 9241-210:2010 standard states this less explicitly).
Virtue Ethics for Human Flourishing
Moreover, the AI HLEG invites suggestions for technical or non-technical methods to achieve Trustworthy AI (p. 22). In line with the suggestions above (a concern for human dignity, and putting human agency first), I'd like to propose adding virtue ethics to the mix of non-technical methods.
In her book “Technology and the Virtues” (2016), Shannon Vallor advocated developing and using technologies in ways that promote human flourishing. She views technologies as tools that can help, or hinder, people in cultivating specific virtues. She argues that we need to cultivate specific technomoral virtues to guide the development and usage of technologies, so that we can create societies in which people can flourish in the 21st century.
Please note that each society, in each specific era and area, needs to draw up its own list of the virtues needed for that society. The virtues that Aristotle proposed were for the citizens of ancient Athens. The virtues that Thomas Aquinas proposed were for medieval Catholics. Vallor proposed the following virtues for our current global, technosocial context (op.cit.: 118–155):
Honesty (Respecting Truth), Self-control (Becoming the Author of Our Desires), Humility (Knowing What We Do Not Know), Justice (Upholding Rightness), Courage (Intelligent Fear and Hope), Empathy (Compassionate Concern for Others), Care (Loving Service to Others), Civility (Making Common Cause), Flexibility (Skillful Adaptation to Change), Perspective (Holding on to the Moral Whole), and Magnanimity (Moral Leadership and Nobility of Spirit).
Vallor argued that virtue ethics is an especially useful approach for discussing the development and usage of emerging technologies (op.cit.: 17–34): technologies that are under development and not yet crystallized. AI is an example of an emerging technology. Emerging technologies entail what Vallor calls “technosocial opacity” (op.cit.: 1–13); their usage, integration into practices, effects on stakeholders, and place in society are not yet clear. She argues that other well-known ethical traditions, like deontology or consequentialism, can have limitations when used for the development and usage of emerging technologies. In deontology, one aims to find general rules and duties that are universally applicable. In consequentialism, one aims to maximize positive effects and minimize negative effects for all stakeholders. For an emerging technology like AI, however, it is hard to find general rules and duties, or to calculate all possible effects for all stakeholders (op.cit.: 7–8).
Take, for example, autonomous cars, with lots of AI in them and in the infrastructure around them. Yes, there are some cars driving around with some level of autonomy. But they are not fully autonomous and they are not widely used. Therefore we cannot yet have a good enough understanding of the ways in which people will use autonomous cars, or of their place in society.
Autonomous cars may, e.g., incentivize people to make longer commutes: to travel four hours in the early morning (while sleeping behind the wheel) and four hours in the evening (while watching videos). This could disrupt family lives, corrode leisure time, social interactions, and the social fabric of society, and have huge negative impacts on the environment and on traffic congestion.
For such a case, it would be hard to know exactly which duties are involved or which general rules apply, and it would be hard to anticipate and calculate all the positive and negative consequences for all stakeholders involved. A virtue ethics approach, however, would be useful here: to identify the virtues that are relevant in this specific case (to create a society in which people can flourish), and to provide recommendations for cultivating these virtues, including processes of self-examination and self-direction (op.cit.: 61–117).
Rather than putting different approaches in opposition to each other, disqualifying one, or favouring one at the expense of another, I'd like to propose creating a productive combination: use deontology where and when we have clarity about general rules and duties; use consequentialism where and when we are able to calculate positive and negative consequences; and use virtue ethics when we ask questions about what kind of society we want to create and how technology can support people's flourishing.
It is my hope that these three suggestions (a concern for human dignity, putting human agency first, and applying virtue ethics) can help to further develop these Ethics Guidelines for Trustworthy AI.
Marc Steen (marcsteen.nl; marc.steen@tno.nl)