Algorithmic Fairness — a question of ethics or law?

Emma Day · Tech Legality · Oct 20, 2023

This week our reading group topic was algorithmic decision-making and fairness.

This is the second in a series of blogs coming out of the Tech Legality reading group which is following the Stanford Course on Ethics, Technology, and Public Policy for Practitioners, through a human rights lens.

Photo by Google DeepMind on Unsplash

Our group’s human rights lens led us to ask: how are fairness and discrimination defined within ethics discourse, as opposed to under human rights law?

Questions about the fairness of algorithmic decision-making have come up a lot recently in my own work at Tech Legality, across a range of topics. The question has usually been whether certain kinds of ‘public interest technology’, created to solve social problems, actually achieve their stated purpose without causing new social problems. Examples include tools created to monitor for hate speech and disinformation; to combat child sexual abuse material and grooming; or to carry out age estimations based on children’s faces. Each of these examples encompasses many different kinds of tech tools and could easily take us down a rabbit hole of its own, which I will leave for another day.

Our reading group this week focused on two key readings. The first was a Harvard Business Review article from 2016 entitled ‘A Guide to Solving Social Problems with Machine Learning’ (an impressive claim for a 19-minute read), which asks how to maximise the benefits of using AI for social problems while minimising the harm. The article discusses how machine learning should or should not be used within the US criminal justice system.

However, in this blog I want to focus on the second reading, the 2018 Algorithmic Accountability Primer from Data & Society, which starts with a discussion of how to define an algorithm. Quoting Cathy O’Neil, the primer proposes that an algorithm is “an opinion embedded in mathematics”. It goes on to state that “when an algorithm’s output results in unfairness, we refer to it as bias”.

(All ideas expressed in this blog are my own, and others in the reading group may well have different views. While Tech Legality is facilitating this group, the group members are not affiliated with us.)

Our reading group — left to right from top Valentina Vivallo, Alexander Laufer, Hannah Bagdasar, Kruakae Pothong, Andrea Olivares Jones, Lan Shiow Tsai, Allan Maleche, Veronique Lerch, Gemma Brown, Louise Hooper, Ananya Ramani, Stephanie Haven, Stacey Cram, Rachel Chambers, Paul Roberts, Elena Abrusci, Daisy Johnson, Sabine K Witting, Fiona Iliff, Clare Daly, Laura Berton, Ayca Atabey, Lama Almoayed, Esteban Ponce de Leon, Mark Leiser, Emma Day, Trisha Ray, Jean Le Roux, Nezir Akyesilmen.

The Primer uses an example from the US criminal justice system to illustrate the meaning of fairness and bias. Northpointe’s COMPAS tool was designed to predict defendants’ risk of reoffending, and its scores were used by judges to inform sentencing decisions in criminal cases. Northpointe’s own team of data scientists showed that their system met a commonly accepted definition of fairness within the field of statistics, because for defendants of different races it predicted recidivism about equally accurately. However, investigative journalists from ProPublica also analysed the COMPAS system, and they found that although the system predicted recidivism equally well across races, it was more likely to mistakenly flag black defendants as high-risk (false positives) and more likely to mistakenly rate white defendants as low-risk (false negatives), resulting in what ProPublica found to be a biased and unfair outcome. The Primer highlights that the problem is that there is no standard definition of algorithmic bias or fairness within the US legal system; companies selling and operating such AI-based tools are not accountable to anyone for the definitions they choose to use, and neither are the US courts that use these companies’ tools.
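To make the tension between these two definitions concrete, here is a minimal, purely illustrative sketch in Python. The numbers are made up and come neither from the Primer nor from the real COMPAS data; the point is only to show that a tool can be equally “accurate” for two groups (roughly the measure Northpointe relied on) while still producing very different false positive rates (the measure ProPublica focused on).

```python
# Illustrative only: two competing notions of fairness in the COMPAS debate.
#  - Predictive parity / calibration: among people labelled "high risk",
#    the share who actually reoffend should be similar across groups.
#  - Error-rate balance: the false positive rate (people wrongly labelled
#    high risk) should be similar across groups.
# All numbers below are invented for the sake of the example.

def rates(y_true, y_pred):
    """Return (precision of the 'high risk' label, false positive rate)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else float("nan")
    fpr = fp / (fp + tn) if (fp + tn) else float("nan")
    return precision, fpr

# Hypothetical groups A and B: 1 = reoffended / labelled high risk, 0 = not.
group_a_true = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
group_a_pred = [1, 1, 0, 1, 0, 0, 0, 0, 0, 0]   # 1 false positive out of 7 non-reoffenders
group_b_true = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
group_b_pred = [1, 1, 1, 1, 0, 0, 1, 1, 0, 0]   # 2 false positives out of 4 non-reoffenders

for name, (yt, yp) in {"A": (group_a_true, group_a_pred),
                       "B": (group_b_true, group_b_pred)}.items():
    precision, fpr = rates(yt, yp)
    print(f"group {name}: precision of 'high risk' = {precision:.2f}, "
          f"false positive rate = {fpr:.2f}")
```

Run as written, both hypothetical groups get the same precision for the ‘high risk’ label (about 0.67), but group B’s false positive rate (0.50) is several times group A’s (0.14). Because the two groups have different underlying rates of reoffending in this toy example, it is in fact mathematically impossible (outside of degenerate cases) to equalise both measures at once, which is essentially the crux of the disagreement between Northpointe and ProPublica.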

Much of the debate around AI ethics emanates from the United States, where international human rights principles and norms are not all incorporated into domestic laws, and there is no national human rights law. The United States recognises civil rights, but rarely refers to the full range of human rights set out under international law. In this context it is easy to see why there is a need to create a field of ethics to help define what fairness should mean in different contexts, and how to apply this in practice.

Non-discrimination and fairness under international human rights law

However, in many other parts of the world, international, regional, and national human rights laws apply alongside newer laws and regulations introduced to regulate the internet and digital technologies.

For example, in relation to non-discrimination and fairness, the Universal Declaration of Human Rights provides that all of the rights and freedoms contained in the Declaration apply to everyone without distinction (i.e. discrimination) on the grounds of race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status (Article 2). Further, all are equal before the law and are entitled without any discrimination to equal protection of the law. All are entitled to equal protection against any discrimination in violation of the Declaration and against any incitement to such discrimination (Article 7).

Discrimination and fairness under EU law

At an EU level the AI fairness definitional problem may be easier to solve, especially when the EU AI Act eventually comes into force. The European Commission started out with a soft-law approach to AI, publishing its non-binding 2019 Ethics Guidelines for Trustworthy AI, but has since shifted towards a legislative approach through the EU AI Act.

Because the EU AI Act is part of the body of European law, it makes reference to the EU Charter of Fundamental Rights, and consequently treats discrimination, fairness and bias in terms of outcomes that have adverse impacts on fundamental rights or that result in discrimination. Fairness itself is not defined in the EU AI Act, apart from a few mentions of the right to a fair trial; rather than focusing on fairness as such, the Act takes the approach of assessing risks to fundamental rights and outcomes that adversely affect them.

Article 21 of the EU Charter of Fundamental Rights prohibits any discrimination on the grounds of sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation.

In addition, the EU AI Act sits alongside the GDPR, which states at Article 5(1)(a) that personal data shall be “processed lawfully, fairly and in a transparent manner in relation to the data subject” (‘lawfulness, fairness and transparency’). AI is, after all, powered by data, and therefore the GDPR also applies wherever personal data is involved.

The European Data Protection Board (EDPB) has defined fairness in the context of data protection by design and by default: “fairness is an overarching principle which requires that personal data should not be processed in a way that is unjustifiably detrimental, unlawfully discriminatory, unexpected or misleading to the data subject”.

The EDPB highlights the importance of taking into account certain key elements in the practical implementation of fairness. These include, amongst other things:

· respect for the fundamental rights of data subjects

· the autonomy of data subjects

· non-discrimination

· non-exploitation

· the data subject’s reasonable expectations

· avoidance of deception

· power balance

· truthful processing.

This list (explained in detail in the EDPB Guidance) would form a very solid basis for assessing the fairness of AI systems, even where the impacts are not on the data subject on whose personal data they were trained, but on wider society.

The EDPB analysed several of these elements of fairness in detail in its recent binding decision concerning TikTok, in which it found the company to have infringed the fairness principle under the GDPR with regard to certain TikTok platform settings, including public-by-default settings as well as the settings associated with the ‘Family Pairing’ feature. This led the Irish Data Protection Commission to impose a fine of €345 million on TikTok.

Fairness then, at least within the EU, is not purely a matter of ethics, but also a question of law.

If you are interested in exploring the concept of fairness more, Ayca Atabey, who is a member of this reading group, has done extensive work on the meaning of fairness by design for children.

If you are interested in how the European Convention on Human Rights has so far been applied to cases involving AI that have come to court, and how it could be applied to future cases, Louise Hooper, also in our group, has a book chapter coming out very soon addressing exactly this.

The Data & Society Primer also discussed accountability of AI systems, including risk assessments and audits. That will be the subject of another blog in this series. Coming soon.

#EthicsTechPolicy #ResponsibleAI #EthicsandTech #HumanRightsandTech @Tech Legality


Emma Day
Tech Legality

Human rights lawyer, specialist in law and technology, and co-founder of Tech Legality. UK raised, worked for 20 years in Asia, Africa, and North America.