Your Machine’s Perception of Fairness — A.I. Ethics II

Author: Olufemi Victor Tolulope.

For a long time, computer scientists tried to capture algorithmic fairness in a single definition. Some gave up, while others concluded that it is mathematically impossible to satisfy every fairness criterion at once without some tradeoff (Kleinberg, Mullainathan, & Raghavan, 2016). Researchers and psychologists haven’t made it easier either, offering a wide range of competing definitions of fairness (Ebert, 2020). The truth, however, remains that fairness is a complex topic that cannot be reduced to one definition or extrapolated into a single formula (Gregory, 2020).

Figure 1 Photo by Owen Beard on Unsplash

In the past, computer scientists treated optimizing the loss or accuracy as the main goal, which is why statistical metrics became the principal measure of an algorithm’s performance. They focused on statistical bias: the difference between the true value and the estimated value. The challenge with statistical bias is that it says nothing about who bears the errors or how those errors are distributed. Bias also has a net direction and magnitude, so averaging a large number of observations does not eliminate its effect; simply increasing the sample size does not help, and if the bias is large enough it can invalidate any conclusions drawn from the data.
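
As a toy illustration of that last point, here is a minimal sketch (synthetic numbers, written for this article rather than taken from any of the cited papers) of an estimator with a constant systematic offset: the random noise averages out as the sample grows, but the offset never does.

```python
import numpy as np

rng = np.random.default_rng(0)

true_value = 10.0
systematic_bias = 2.0   # every measurement is shifted by this amount

for n in [10, 1_000, 100_000]:
    # noisy measurements with a constant systematic offset
    samples = true_value + systematic_bias + rng.normal(0, 1, size=n)
    estimate = samples.mean()
    print(f"n={n:>7}: estimate={estimate:.3f}, error={estimate - true_value:+.3f}")

# The error hovers around +2.0 regardless of n: more data shrinks the
# random noise, but it never removes the systematic bias.
```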

Figure 2 Equality vs Equity (Maguire, 2016)

We could keep rambling on about the failure of statistical approaches to model fairness and detect bias. In my opinion, we should stop arguing about mathematical correctness and instead look at how these algorithms can support human values. With the advent of A.I. and machine learning, where our day-to-day lives are directly influenced by these algorithms, it is important to understand fairness from the machine’s perspective. We must keep in mind that fairness is a socio-technical challenge, and that quantitative fairness metrics do not capture many aspects of fairness, such as justice and due process (Microsoft, 2022).

Bias in deployed machine learning models is a side effect of maximizing a metric. We have also learned that machine learning models are good at picking up proxies: removing gender or race from your dataset before building a model does not necessarily solve the bias problem, because the model can still infer those attributes from other correlated features (Hardt, Price, & Srebro, 2016).
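
To make the proxy effect concrete, below is a minimal sketch using made-up synthetic data and scikit-learn; the feature names and correlation strength are purely illustrative assumptions. Even though the protected attribute is dropped from the inputs, a correlated “neutral” feature lets a plain logistic regression recover it far better than chance.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical protected attribute (0/1), never shown to the model directly.
group = rng.integers(0, 2, size=n)

# A "neutral" feature that happens to correlate strongly with the group
# (think of a zip-code-like signal), plus an unrelated feature.
proxy_feature = group + rng.normal(0, 0.4, size=n)
other_feature = rng.normal(0, 1, size=n)

X = np.column_stack([proxy_feature, other_feature])  # group itself is excluded
X_train, X_test, g_train, g_test = train_test_split(X, group, random_state=0)

clf = LogisticRegression().fit(X_train, g_train)
print("accuracy recovering the 'removed' attribute:", clf.score(X_test, g_test))
# Well above 50%: dropping the column did not remove the information.
```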

Figure 3 Image by Gerd Altmann from Pixabay

Another pertinent problem is the feedback loop. Algorithms that influence decisions may in turn create a feedback loop. For instance, if an algorithm used in policing predicts an area to be a crime hotspot, more police will be sent there, and more arrests will be made there than in other areas. The extra arrests then appear to confirm that the area is a hotspot, prompting stakeholders to send even more police.
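
A toy simulation of that loop, with entirely made-up numbers: two areas share the same underlying crime rate, but patrols keep being concentrated on whichever area has the higher recorded arrest count, so the initial gap widens every round.

```python
import numpy as np

rng = np.random.default_rng(0)

true_crime_rate = np.array([0.1, 0.1])   # both areas are actually identical
arrests = np.array([6, 4])               # a small initial imbalance
total_patrols = 100

for step in range(10):
    # send most patrols to whichever area looks like the "hotspot" so far
    hotspot = np.argmax(arrests)
    patrols = np.full(2, 0.3 * total_patrols)
    patrols[hotspot] = 0.7 * total_patrols
    # recorded arrests roughly track patrol presence times the (equal) crime rate
    arrests = arrests + rng.poisson(patrols * true_crime_rate)

print("final arrest counts:", arrests)
# Whichever area starts slightly ahead keeps the "hotspot" label and pulls
# further ahead each round: the data reflects the policy, not the ground truth.
```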

Bias in our data translates into bias in our algorithms, which in turn affects society. FaceApp was accused in 2017 of having a racist filter on the platform called “hot” which lightened faces. The company apologized, claiming it was unintended behavior and attributing it to a side effect of bias in the training dataset (The Guardian, 2017).

Early bias in Google Translate has also been addressed: before the fix, translating “she is a doctor” into Turkish and back to English returned “he is a doctor”. Google now shows both masculine and feminine translations for such queries. Even the Google search engine, in the past, displayed almost exclusively images of men when you searched for “CEO”.

The Gender Shades paper by Joy Buolamwini and Timnit Gebru exposed bias in the commercial facial recognition systems of major technology companies. It also showed that the public IJB-A dataset, which many practitioners used to build facial recognition software, contained only 4.4% dark-skinned women (Buolamwini & Gebru, 2018).

Figure 4 Metrics from the gender shades paper (Buolamwini & Gebru, 2018).

Bias in algorithms is often unintended; it typically comes from bias in the training data, which is usually a reflection of society as it stands. But do we want the machines and algorithms we build to reflect these stereotypes, causing representational harm that reinforces the subordination of some groups? Some folks might wonder: if humans are biased anyway, why are we so concerned about the machine’s perception? I will highlight a few reasons.

  1. Machine learning amplifies bias: because the goal is to optimize a loss function, machine learning algorithms can amplify stereotypes in the data in pursuit of the best score on that metric.
  2. Algorithms and humans are used differently: there is a general belief that computers are not biased, so people often treat results from machines as absolute truth. Placing that kind of confidence in a biased algorithm is terrifying.
  3. Technology is power, and with great power comes great responsibility.
  4. Machine learning can create feedback loops.

As Cathy O’Neil puts it in Weapons of Math Destruction: “The privileged are processed by people, the poor are processed by algorithms.”

Bias in data is inevitable; however, there are plenty of ways we can contribute to building more ethical algorithms. The Datasheets for Datasets paper clearly explains the need for a datasheet to accompany every curated dataset, helping researchers and practitioners understand the nature of a dataset before building on it (Gebru, et al., 2018).
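
As a rough sketch of the idea, a team might keep datasheet-style answers in a small structure that travels with the dataset. The field names below loosely paraphrase a few of the paper’s question areas; they are not an official schema, and the example entries are invented.

```python
from dataclasses import dataclass

@dataclass
class Datasheet:
    """A handful of datasheet-style questions, paraphrased from the
    Datasheets for Datasets paper; real datasheets cover many more."""
    motivation: str          # why was the dataset created, and by whom?
    composition: str         # what is in it, and who is represented (or missing)?
    collection_process: str  # how, when, and with what consent was it gathered?
    known_biases: str        # known skews, gaps, or sensitive attributes
    recommended_uses: str    # tasks it is (and is not) appropriate for

faces_sheet = Datasheet(
    motivation="Benchmark face detection for a research project.",
    composition="Web-scraped celebrity photos; skin tone and gender heavily skewed.",
    collection_process="Scraped from public pages in 2016; no subject consent.",
    known_biases="Under-represents dark-skinned women (cf. Gender Shades findings).",
    recommended_uses="Research benchmarking only; not for deployment decisions.",
)
print(faces_sheet.known_biases)
```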

What can we do to contribute? Individuals, engineers, researchers, and even domain experts can all contribute to building ethical algorithms.

  1. Connect even the simplest problems to fairness and justice: analyzing a project at work or school will help broaden your understanding of the complexity of fairness.
  2. Guide technologists’ work: as a researcher or domain expert, you may have a wider view of potential harms, so speaking up will help guide ethical development.
  3. Examine implications for society: ask questions with society in view.
  4. Work with domain experts: as a technologist, work with researchers and domain experts in the field to better understand the challenges.
  5. Increase diversity in your workplace.
  6. Advocate for good policy.
  7. Be on an ongoing lookout for bias.

Questions to ask: asking good questions about a project will help structure the team’s thought process and support ethical development.

  1. Should we be doing this?
  2. What bias is in the dataset?
  3. Can the code and data be audited?
  4. What are the error rates for different subgroups? (see the sketch after this list)
  5. What is the accuracy of a simple rule-based alternative?
  6. What processes are in place to handle appeals or mistakes?
  7. How diverse is the team that built it?
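
On question 4, here is a minimal sketch of what checking error rates per subgroup can look like; the data is synthetic and the helper function is illustrative rather than any standard library API.

```python
import numpy as np

def error_rate_by_group(y_true, y_pred, groups):
    """Report the simple misclassification rate separately for each subgroup."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {
        g: float((y_pred[groups == g] != y_true[groups == g]).mean())
        for g in np.unique(groups)
    }

# Synthetic example: a single overall number hides a large gap between groups.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 1, 0, 1])
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

print("overall error:", float((y_pred != y_true).mean()))
print("per group:   ", error_rate_by_group(y_true, y_pred, groups))
# Group A has 0% error while group B has 80%; the 40% overall figure
# does not show where the errors fall.
```
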
Figure 5 Photo by Louise Viallesoubranne on Unsplash

In conclusion, fairness is a complex term with a wide range of definitions. Its definition depends on context, application, and even the stakeholder. The goal, however, should remain building algorithmic systems that further human values, which cannot be reduced to a formula.
