Defining fairness

Published in DataKind UK · 5 min read · Jul 1, 2019

By Giselle Cory, Executive Director at DataKind UK

For our upcoming ethics book club on fairness in AI, we are ‘reading’ Arvind Narayanan’s 21 definitions of fairness. The tutorial covers both the technical definitions and the societal questions that arise from them. Below is a summary of this excellent tutorial for those without an hour to spare for the full video. It is an imperfect summary; watch here for more comprehensive explanations and sources.

Starting premise: There is no one true definition of fairness… So how can we think about fairness?

Statistical bias

Narayanan starts with the option of reducing statistical bias as a measure of fairness. For example, the now infamous COMPAS algorithm is not biased in the statistical sense when predicting the likelihood of re-arrest. But statistical bias says nothing about errors/biases in the data. These are inevitable and we need to account for them. It’s not good enough to say that our model is perfect and it’s ‘just’ the data that is broken.
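
To make “unbiased in the statistical sense” a bit more concrete, here is a minimal sketch of a calibration check on entirely invented data (nothing here reflects the real COMPAS system): within each bucket of predicted risk, the observed re-arrest rate should roughly match the predicted rate.

```python
# Minimal sketch of a calibration check on invented data: within each bucket
# of predicted risk, the observed re-arrest rate should match the predicted
# rate if the scores are unbiased in the statistical sense.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
predicted_risk = rng.uniform(0, 1, size=10_000)                 # hypothetical risk scores
re_arrested = rng.uniform(0, 1, size=10_000) < predicted_risk   # outcomes drawn to match the scores

df = pd.DataFrame({"predicted_risk": predicted_risk, "re_arrested": re_arrested})
df["bucket"] = pd.cut(df["predicted_risk"], bins=np.linspace(0, 1, 11))

calibration = df.groupby("bucket", observed=True).agg(
    mean_predicted=("predicted_risk", "mean"),
    observed_rate=("re_arrested", "mean"),
)
print(calibration)  # the two columns track each other, i.e. the scores are well calibrated
```

A score can pass this check and still sit on top of data that records arrests rather than offences, which is exactly the point about broken data.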

Supporting human values

So let’s reframe the problem — it’s not about mathematical correctness, but instead ensuring algorithmic systems support human values. But how?

For the rest of the blog, we use a criminal justice example: an individual is labelled low or high risk of re-arrest, and that label later turns out to be right or wrong.

Perspective matters

What do different stakeholders in the system want when assessing it for fairness? Narayanan looks at three perspectives:

  • Decision maker — Of those who I’ve labelled high risk of re-arrest, how many will be re-arrested? This is the “predictive value” of the algorithm
  • Defendant — What is the probability of being incorrectly classified as high risk? This is the “false positive rate”
  • Society — Is the selected set of people demographically balanced? e.g. is the proportion of people from group A who are labelled high-risk the same as the proportion from group B? This is called “demographic parity”
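
As a rough sketch of how these three quantities come apart, here is some illustrative Python computing them from a confusion matrix for two hypothetical groups (the counts are invented for illustration only).

```python
# Toy confusion-matrix counts (invented) for two groups, and the metric that
# each stakeholder cares about.
def metrics(tp, fp, fn, tn):
    return {
        "predictive value (decision maker)": tp / (tp + fp),         # of those labelled high risk, how many are re-arrested
        "false positive rate (defendant)": fp / (fp + tn),           # chance a non-re-arrested person is labelled high risk
        "selection rate (society)": (tp + fp) / (tp + fp + fn + tn), # share labelled high risk; demographic parity compares this across groups
    }

for group, counts in [("A", dict(tp=300, fp=100, fn=150, tn=450)),
                      ("B", dict(tp=120, fp=80, fn=60, tn=740))]:
    print(group, {name: round(value, 2) for name, value in metrics(**counts).items()})
```

None of these numbers is “the” fairness of the system; each answers a different stakeholder’s question.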

Different metrics matter to different groups — and as far as the maths is concerned, there is no one right answer. So what happens when you try to be fair from multiple perspectives?

The impossibility theorem

You fail, according to the impossibility theorem. It goes something like this: you have two groups, A and B, with different re-arrest rates. If a risk score has the same predictive value for both groups (of those labelled high risk, the same proportion go on to be re-arrested), then it cannot also have equal false positive rates (the system says you will be re-arrested but you aren’t) for both groups, nor equal false negative rates (the system says you won’t be re-arrested but you are).

Or more formally:

If an instrument satisfies predictive parity but the prevalence differs between groups, the instrument cannot achieve equal false positive and false negative rates across those groups
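
One way to feel the force of this, using entirely made-up numbers: hold the predictive value and the false negative rate fixed for two groups whose re-arrest prevalence differs, and the false positive rates are forced apart. The sketch below just rearranges the definitions of predictive value and the error rates.

```python
# Invented numbers only. Hold predictive value (PPV) and the false negative
# rate fixed across two groups whose re-arrest prevalence differs, and see
# what false positive rate each group is forced to have.
ppv = 0.6        # predictive value, identical for both groups
fnr = 0.3        # false negative rate, identical for both groups
tpr = 1 - fnr    # true positive rate

def implied_fpr(prevalence):
    # Rearranging PPV = TPR*p / (TPR*p + FPR*(1-p)) to solve for FPR.
    return tpr * prevalence * (1 - ppv) / (ppv * (1 - prevalence))

for group, prevalence in [("A", 0.5), ("B", 0.3)]:
    print(f"group {group}: prevalence={prevalence}, implied FPR={implied_fpr(prevalence):.2f}")
# group A: implied FPR=0.47, group B: implied FPR=0.20; equal predictive value
# plus unequal prevalence forces unequal false positive rates.
```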

This is tricky for us data scientists, says Narayanan. We are used to optimising for stuff. And in this era of AI, we are still being mathematicians rather than philosophers, despite the multitudes of moral judgements implied in our models and the lack of numerical answers.

Narayanan notes that this discussion is not specific to algorithmic decision-making. For example, when the police choose whether or not to search a vehicle, they are basing that decision on their prediction of the presence of contraband in that vehicle. The difference with machine learning is that its bias is a side effect of a conscious decision to maximise accuracy, whereas human bias results from our complex, flawed decision making.

But how about blind models?

What about the presence of protected characteristics (like race) in our training data? Can we make our models fairer by stripping them out? Narayanan points to research showing that blindness (not looking at the protected attribute) is not an effective way of ensuring fairness in machine learning, because ML is very good at picking up on proxies in the data.
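
Here is a tiny synthetic illustration of the proxy problem (none of this reflects any real system): the model below never sees the protected attribute, yet a single correlated stand-in feature is enough for it to reproduce the group gap.

```python
# Synthetic illustration of proxies defeating "blindness". The model is never
# shown the protected attribute, only a correlated stand-in feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000
group = rng.integers(0, 2, size=n)                        # protected attribute, withheld from the model
proxy = group + rng.normal(0, 0.3, size=n)                # e.g. a postcode-like feature correlated with group
label = rng.uniform(0, 1, size=n) < (0.2 + 0.4 * group)   # historical outcomes with a group gap

blind_model = LogisticRegression().fit(proxy.reshape(-1, 1), label)
predicted_high_risk = blind_model.predict(proxy.reshape(-1, 1))

for g in (0, 1):
    rate = predicted_high_risk[group == g].mean()
    print(f"group {g}: predicted high-risk rate = {rate:.2f}")
# The "blind" model still labels the two groups at very different rates,
# because the proxy reconstructs the attribute it was never given.
```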

And what about…?

Narayanan then takes us on a tour of many other considerations of fairness and the questions that bubble up from them:

Unacknowledged affirmative action

If prevalence (e.g. the re-arrest rate) differs between groups, that may be due to measurement bias (measuring re-arrests rather than actual recidivism). If so, we might want to correct for this algorithmically. But it may instead be due to historical prejudice, and correcting for that can be seen as affirmative action. How do you draw the line between affirmative action and fairness?

Transparency

In assessing concepts of individual fairness (rather than group fairness), the idea is that similar individuals should be treated similarly, with their similarity assessed with respect to the decision-making task at hand. This assessment of similarity should be transparent, available for users to know and understand. This is a far cry from the obscurity we have at present. How do we get organisations to open up?
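
As a rough sketch of what an individual-fairness check could look like in this “similar individuals, similar treatment” spirit, the code below flags pairs of people whose scores differ far more than their distance would justify. The distance function is a placeholder; agreeing on, and publishing, that task-relevant similarity metric is exactly the transparency problem described above.

```python
# Sketch of an individual-fairness check: similar individuals should get
# similar scores. The distance function below is a placeholder stand-in.
import numpy as np

def task_distance(x, y):
    return np.linalg.norm(x - y)       # placeholder for a task-specific similarity metric

def fairness_violations(features, scores, bound=1.0):
    """Return pairs of individuals whose score gap exceeds bound * distance."""
    violations = []
    for i in range(len(features)):
        for j in range(i + 1, len(features)):
            if abs(scores[i] - scores[j]) > bound * task_distance(features[i], features[j]):
                violations.append((i, j))
    return violations

features = np.array([[0.1, 0.2], [0.12, 0.21], [0.9, 0.8]])
scores = np.array([0.2, 0.7, 0.9])     # individuals 0 and 1 look alike but are scored very differently
print(fairness_violations(features, scores))   # flags the pair (0, 1)
```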

Representational harms

When looking at fairness as an attempt to reduce or not cause harm, we need to define harm. Kate Crawford distinguishes between allocative harms, where the system withholds opportunity (e.g. hiring) and operates through discrete transactions with immediate effects, and representational harms, where the system reinforces the subordination of a group (e.g. stereotyping), causing diffuse harms with long-term effects.

Cross-dataset generalisation

Datasets used for different computer vision training tasks differ from each other. What if we measured cross-dataset generalisation, i.e. how well a model built on one dataset performs on another? This could be used to test the demographic representativeness of datasets.
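
A hedged sketch of what such a check might look like in practice, with placeholder datasets and a stand-in model rather than any real benchmark: train on one dataset, evaluate on another, and look at the gap.

```python
# Placeholder sketch of a cross-dataset generalisation check: train on one
# dataset, evaluate on another, report the gap. The dataset variables are
# hypothetical; any model and metric could stand in for the ones used here.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def cross_dataset_gap(train_X, train_y, other_X, other_y):
    model = LogisticRegression(max_iter=1000).fit(train_X, train_y)
    in_domain = accuracy_score(train_y, model.predict(train_X))
    out_of_domain = accuracy_score(other_y, model.predict(other_X))
    return in_domain - out_of_domain   # a large gap hints the training data is not representative

# Usage with two hypothetical datasets:
# gap = cross_dataset_gap(dataset_a_X, dataset_a_y, dataset_b_X, dataset_b_y)
```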

Conclusion

There is a multiplicity of definitions of fairness because there is a multiplicity of contexts, applications and stakeholders to which the concept of fairness must be applied.

The impossibility theorem tells us that “any overarching definitions will inevitably be vacuous”. As Narayanan concludes,

“Our goal is instead to build algorithmic systems that further human values, which can’t be reduced to a formula.”

Takeaways for ethics scholars

Perhaps the most important element of the tutorial is the questions it leaves us to mull:

  1. How can we connect trolley problems (prototypical thought experiments) to theories of fairness and justice? E.g. how does the purpose of punishment (rehabilitation, deterrence, etc.) affect which metrics we should care most about?
  2. What should technologists be working on, and what responsibilities should technologists have? E.g. should there be a requirement for bias assessment when creating/using training datasets?
  3. What are the implications for society? E.g. how should our institutions adapt to the rise of algorithmic decision making?

Go!


This blog is an incomplete overview of a tutorial that is itself an incomplete overview of a tricky topic. Please watch Arvind Narayanan’s tutorial, 21 definitions of fairness, for more detail, and don’t stop there!

Header image by Tim Mossholder on Unsplash
