How Europe can lead the way to tackle automated discrimination

Frederike Kaltheuner
9 min read · Jan 3, 2020


Extended, written version of comments delivered at the European Parliament on December 12, 2019

Watch the full hearing here: https://www.greens-efa.eu/en/article/event/automated-discrimination/

First of all, I would like to thank the Digital Working Group of the Greens/EFA, specifically MEPs Alexandra Geese, Patrick Breyer, Marcel Kolaja, Kim van Sparrentak, Sergey Lagodinsky and Damian Boeselager for holding a hearing on this important and timely issue and for inviting me to speak here today. I would also like to thank the previous panel for doing such an excellent job at explaining bias and societal imbalances in algorithmic systems.

Part I — making sure that technology works for all

I’m very glad that you are holding this event, since discrimination is such an important risk in relation to algorithmic decision-making — but as important as it is, it is also just one of many potential risks associated with automated decision-making (and AI more broadly). The deployment of live facial recognition in public spaces is a perfect example in this regard. Its use threatens a whole range of rights, from freedom of assembly and association to the right to privacy — even in the absence of bias, unfairness or discrimination in the system itself. If the system also turns out to be discriminatory, it places an additional and disproportionate burden on those who are already marginalised.

The second point I would like to raise almost seems too self-evident to make, but I would like to make it nonetheless: from structural racism to persistent sexism, the world we live in is characterised by all sorts of discrimination, bias and unfairness. In fact, the European Parliament and the EU institutions, in which migrants, minorities and people of colour are still hugely underrepresented, are a perfect example. I’m worried that we sometimes talk about automated discrimination with a greater sense of urgency than we talk about tackling traditional discrimination. But this is not the only reason why it is crucial to stress just how prevalent and persistent discrimination is in society:

Discrimination does not just happen during automated decision-making — discrimination happens before and after a decision has been made. It shapes which products and services get built in the first place, it influences who builds them and how results are being interpreted and used. It’s also why risky applications of technology are often disproportionately used against those who are also most adversely affected by its harms.

This has profound consequences for how we should tackle new and automated forms of discrimination and stands in stark contrast to how discrimination and bias in automated decision-making systems is often portrayed, namely as bugs that need fixing. Instead, the discrimination that characterises so much of emerging technology today is often a symptom of the structural injustices that surround us — a symptom that nonetheless raises unique challenges.

And finally, echoing previous speakers, I think we do need to spend a minute talking about language. If we want to move away from describing the problem towards developing solutions, we need to be very specific about what it is that we are trying to fix.

Discrimination, bias and fairness are distinct concepts that often get blurred in discussions about automated decision-making and AI. While discrimination has a legal definition, fairness is a much broader concept, with different, and at times, incommensurable meanings, which, I believe, the next speaker will address in more detail.

Part II — existing regulation in theory and practice

In the following, I would like to briefly discuss two existing legal frameworks, namely data protection and laws against discrimination, and consider their respective strengths and weaknesses in addressing automated discrimination.

My main point is this: while both of these legal frameworks offer some protections in theory, in practice their inherent limitations and the ways in which they are currently enforced mean that they fall short of protecting people from automated discrimination.

Let’s begin with laws against discrimination. Direct and indirect discrimination are already prohibited in many treaties and constitutions, including Article 14 of the European Convention on Human Rights, which states:

“The enjoyment of the rights and freedoms set forth in this Convention shall be secured without discrimination on any ground such as sex, race, colour, language, religion, political or other opinion, national or social origin, association with a national minority, property, birth or other status.”

Similarly, non-discrimination law, in particular through the concept of indirect discrimination, prohibits many discriminatory effects of AI and automated decision-making.

In practice, however, enforcement is difficult, as those affected need to know that they have in fact been discriminated against. Furthermore, as the Council of Europe has argued in its report on Discrimination, artificial intelligence, and algorithmic decision-making, “non-discrimination law leaves gaps”, from the nuanced proportionality test required to establish that indirect discrimination has occurred, to the concept of protected characteristics, on which non-discrimination laws typically focus.

Data protection law, especially after the General Data Protection Regulation (GDPR) entered into force, also offers a number of protections against automated discrimination (I will not elaborate on these in my written statement, as I have previously published on this issue at great length here).

While these provisions offer protections in theory, the situation looks very different in practice.

  • EU Member States have introduced vast, and sometimes open-ended, derogations for national security or immigration purposes that place some of the most critical applications of automated decision-making outside the scope of the law (see, for instance, the UK).
  • In addition, a major concern is compliance and enforcement. It is still too early to tell, but while the GDPR has raised the bar, blatantly non-compliant behaviour remains widespread.
  • We have yet to see the complaints and legal cases that will clarify how exactly the rules on profiling and automated decision-making will be interpreted by regulators and the courts.
  • On top of this, these provisions have always been narrowly defined. They don’t capture all forms of profiling or automated decision-making but are limited to decisions that are “based solely on automated processing”, and those with “legal or similarly significant effects” (again, I have published about this in greater detail here).
  • Most importantly, data protection laws are designed to protect personal data. Yet, not all forms of automated decision-making that could have significant effects on people involve personal data at all. A good example in this regard is emotion detection technology.

Part III — recommendations

I would like to conclude my remarks with a number of concrete recommendations.

Recommendation 1: towards a coherent strategy

We often think about government primarily as a (potential) regulator of technology, yet there are at least three distinct ways in which governments (and the public sector more broadly) can influence and shape the deployment of emerging technology in society:

  1. By creating, modifying and enforcing regulations
  2. Through public sector procurement
  3. Through funding of research and development

What I am currently observing is that Europe’s strategy on AI has been characterised by a disconnect between these three pillars.

While EU Member States and the EU Commission were busy setting up High-Level Expert Groups and parliamentary committees to discuss the ethics and social implications of AI, local police forces all over Europe have rolled out, and are now deploying, facial recognition before the public had the chance for a proper societal debate.

While the EU wants to build trust in human-centric Artificial Intelligence, the Commission has been funding a research project (iBorderCtrl) whose aim is to screen incoming travellers at EU borders by means of AI-supported lie detector tests — an attempt that has been called out as dangerous pseudoscience by experts.

Recommendation 2: putting words into action

As mundane as it sounds, at the very least there has to be a genuine public debate before high-risk technology is rolled out — especially if it is deployed by the public sector.

A main obstacle to that modest goal is that the public all too often only learns about the deployment of a technology because investigative journalists or non-profits with limited resources have uncovered it. Again, facial recognition is a perfect example. It was only through an investigation by the Financial Times that the British public learnt that London’s King’s Cross was using facial recognition to track tens of thousands of people in one of London’s busiest areas, which includes a train station, shops, office blocks and a university. Months later, it emerged that the Met Police had provided the property developers with images of suspects. The project has now been scrapped, but such secrecy does not help to build trust.

A second precondition for any form of public debate is full transparency about the purposes, aims and claimed effectiveness of the technology being deployed. Again, it is often not clear what the threshold for a successful trial would be, or whether deploying the technology would actually lead to better outcomes. The outcome of such a debate might very well be that there is in fact no public support for certain kinds of applications.

If there is in fact support, the public sector should set the gold standard for ethical and responsible AI by deploying the technology with robust safeguards in place. I don’t have time to go into detail today, but some of the safeguards that have been discussed are risk assessments, registries and mandatory disclosure. It might also mean strengthening the freedom of information regime to allow for scrutiny after the fact.

Recommendation 3: strengthen enforcement of and compliance with existing laws

Before we jump to demand new laws and regulations — a point about enforcement. As I have mentioned previously, there’s currently a massive enforcement gap when it comes to compliance with basic data protection principles.

In order to do their jobs, not just Data Protection Authorities (DPAs) but also other regulatory bodies, such as consumer protection authorities, equality bodies and human rights monitoring bodies, need proper funding. A main challenge is the ability to recruit and retain staff with the necessary technical expertise. Not to single out any specific DPA here, but the ICO has introduced a fellowship programme that allows it to bring in technical experts for a limited period of time. This is a model that could also work for other regulatory bodies.

Another precondition of enforcement is that critical applications need to be auditable and explainable. This is easier said than done, but at a bare minimum, it is unacceptable that public money is spent on automated decision-making systems whose proprietary nature means they cannot be scrutinised.

Recommendation 4: close legal gaps

Even with the best enforcement, it is clear that there are gaps in current laws and their scope. Discrimination law needs to be fit for purpose to protect people from new and changing forms of discrimination. And enforcement of data protection law should clarify the status of data that are inferred, derived and predicted. Protections for automated decision-making under the GDPR are currently limited to decisions that have legal or similarly significant effects and that are based on solely automated processing. While additional guidance has clarified that human intervention has to be meaningful and can’t be a “token gesture”, this still leaves much room for interpretation.

Ethics can obviously play a role in addressing questions and concerns that go beyond the law, but non-binding standards should not replace, or worse, pre-empt regulation. Over the past two years, we have seen how easily voluntary ethics frameworks can be used in precisely this way.

Both risks and opportunities are often domain-specific. The very same application can have vastly different consequences when applied in a different context. When it comes to discrimination, again keeping in mind that this is just one of many risks associated with AI, I see an urgent need for sectoral regulation in the following domains:

  • credit and risk scoring
  • advertising
  • face recognition and face profiling

Conclusion

As I mentioned at the beginning, automated discrimination is just one of many potential risks associated with automated decision-making (and AI more broadly). The themes raised in this hearing nonetheless highlight an important truth about emerging technology: whatever harm technology amplifies, it disproportionately affects those who are already marginalised. This is true for exploitative data practices, which affect some more than others, just as it is for the idea that complex, opaque and (sometimes deeply flawed) systems can and should be the ultimate authority for judging who people are.

Perhaps one of the most pressing tasks of this decade — next to the climate crisis and growing inequality — is to vigorously defend the rights, norms and rules that should govern powerful technologies, the companies that build them and the governments that deploy them. It’s up to us.
