Algorithmic Fairness

Baiwu Zhang
BMO AI
Dec 9, 2020
Source: https://ai.ku.dk/events/algorithmic-fairness/

As Artificial Intelligence (AI) technology advances in capability and use cases, machine-made decisions have become an important part of our daily lives. While enjoying the efficiency and convenience enabled by AI, we must now also face and address the challenges it brings. An important task is to make sure decisions made by AI or Machine Learning (ML) algorithms are non-discriminatory and fair to everyone. For example, Buolamwini et al. (2018) demonstrated that facial analysis systems can discriminate significantly based on race and gender. Although algorithmic fairness is still an emerging issue for society, significant progress has been made in academia over the past decade, and open-source solutions have been built. In this blog post, I will introduce one common methodology for tackling the fairness problem in AI, and share a few thoughts as a practitioner developing fair AI algorithms.

The importance of fairness in AI

As stated in the introduction, the increasingly large-scale adoption of AI systems to make automated decisions that have a real effect on our lives has prompted people to rethink the potential risks these systems may introduce. Because AI can uncover hidden patterns in data, human biases can be reflected and even dangerously amplified by AI systems. Media outlets have reported cases where AI systems made decisions that were discriminatory towards minorities, historically disadvantaged groups, and other protected groups.

The financial industry, which is strictly regulated, requires especially careful consideration of the fairness of AI algorithms. In the US, the Fair Housing Act (FHA) and the Equal Credit Opportunity Act (ECOA) explicitly prohibit discriminatory practices when offering credit products to clients. Under such laws, it is generally illegal to even collect clients' membership in protected classes. For example, credit card companies must be able to show that the way they make decisions is not racially discriminatory, yet they are not allowed to ask applicants which racial group they belong to. Even so, many proxies for this information exist in datasets across the industry. AI may implicitly learn to discriminate inappropriately based on hints that exist in the data without ever being trained explicitly on attributes such as gender or ethnicity.

With the ambition to embark on a digitally-enabled future and leverage advanced AI technologies, financial institutions are highly motivated to prioritize AI fairness as part of their trustworthy AI initiatives.

How to build fair AI algorithms

The common framework for approaching AI fairness problems consists of two consecutive steps: measurement, followed by mitigation.

Measurement

Measurement is the process of identifying the severity and location of unfairness in an algorithm against specified fairness metrics. Each fairness metric represents a distinct definition of fairness. In this blog, I'll focus on two definitions: demographic parity and equalized odds, both of which are instantiations of the concept of group fairness.

Demographic Parity (DP): A predictor satisfies demographic parity if the predictor h makes decision h(X)=ŷ independently of the protected attribute A: ℙ(h(X)=ŷ)=ℙ(h(X)=ŷ|A). In the credit loan example below, Classifier 2 satisfies demographic parity as it achieves the same approval rate regardless of the underlying population group.

Classifier 2 satisfies demographic parity while maintaining the same overall approval rate as Classifier 1.

One may notice that the DP definition never considers the true label of each sample. In the credit loan example, whether the applicant is actually creditworthy plays no role in evaluating demographic parity. The next metric addresses this flaw.

Equalized odds: A predictor satisfies equalized odds if the predictor h makes decision h(X)=ŷ independently of the protected attribute A given the true label Y: ℙ(h(X)=ŷ|Y)=ℙ(h(X)=ŷ|A, Y). This definition means that, for all creditworthy applicants, the probability of receiving a loan should be the same regardless of their membership in protected groups.
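To make these two definitions concrete, here is a minimal sketch that checks both criteria on a hypothetical toy credit dataset (the arrays, group labels, and helper names are illustrative, not part of any library):

```python
import numpy as np

# Hypothetical toy data: 1 = approved / creditworthy, 0 = not
y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])   # true creditworthiness
y_pred = np.array([1, 1, 0, 1, 0, 1, 0, 0, 1, 1])   # model decisions
group  = np.array(["a"] * 5 + ["b"] * 5)             # protected attribute A

def approval_rate(pred, mask):
    """P(h(X)=1) within a group -- the quantity demographic parity compares."""
    return pred[mask].mean()

def true_positive_rate(true, pred, mask):
    """P(h(X)=1 | Y=1) within a group -- one of the quantities equalized odds compares."""
    positives = mask & (true == 1)
    return pred[positives].mean()

for g in np.unique(group):
    mask = group == g
    print(g, "approval rate:", approval_rate(y_pred, mask),
          "TPR:", true_positive_rate(y_true, y_pred, mask))
```

A full equalized-odds check would also compare false positive rates across groups, since the definition conditions on both values of Y.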

When a definition of fairness is agreed upon, measurements of fairness can then be established. Examples of common measurements include the demographic parity ratio and the demographic parity difference. Measurements based on mutual information have also been proposed recently.
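As an illustration of the first two measurements, both can be derived from per-group selection rates; the sketch below uses hypothetical decisions and follows the common definitions (maximum difference and minimum-to-maximum ratio of the per-group rates):

```python
import numpy as np

# Hypothetical per-group decisions (1 = approved)
y_pred = np.array([1, 1, 0, 1, 0, 1, 0, 0, 1, 1])
group  = np.array(["a"] * 5 + ["b"] * 5)

rates = {g: y_pred[group == g].mean() for g in np.unique(group)}
dp_difference = max(rates.values()) - min(rates.values())  # 0 means perfect parity
dp_ratio = min(rates.values()) / max(rates.values())       # 1 means perfect parity
print(rates, dp_difference, dp_ratio)
```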

To capture all sources of unfairness introduced into a system, a holistic measurement is often performed on the end-to-end decision-making process first, followed by a break-down view of each of its components.

Mitigation

There are many types of mitigation algorithms that aim to remove bias from AI systems. Depending on the stage of intervention in the machine learning lifecycle, bias can be mitigated during data collection, before model training (pre-processing), during model training (in-processing), or after model training (post-processing). Mehrabi et al. (2019) provide a comprehensive overview of bias removal techniques. In the following, I will introduce three open-source software packages that are available off-the-shelf for fairness mitigation.

Fairlearn: Started as a Microsoft project, Fairlearn implements the reductions approach to mitigate bias in any ML model written in scikit-learn style. Fairlearn provides both in-processing and post-processing bias mitigation, as well as all common fairness metrics.
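As a rough sketch of how the reductions approach is typically invoked (the synthetic data and variable names are hypothetical; consult the Fairlearn documentation for the exact API of the version you use):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

# Hypothetical synthetic data: features X, labels y, protected attribute A
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
A = rng.integers(0, 2, size=200)
y = (X[:, 0] + 0.5 * A + rng.normal(scale=0.5, size=200) > 0).astype(int)

# The reduction turns the constrained problem into a sequence of weighted classification problems
mitigator = ExponentiatedGradient(LogisticRegression(),
                                  constraints=DemographicParity())
mitigator.fit(X, y, sensitive_features=A)
y_pred = mitigator.predict(X)
```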

MinDiff: MinDiff is a newly released model remediation library from Google. It offers a method that adds a regularization term during model training, a maximum mean discrepancy (MMD) loss between groups, to push the model towards satisfying fairness constraints.
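MinDiff itself ships with its own Keras-based API; as a library-agnostic illustration of the underlying idea, the sketch below computes an RBF-kernel MMD between the model scores of two groups, which could be added as a penalty to the training loss (all function and variable names here are hypothetical):

```python
import numpy as np

def rbf_kernel(a, b, bandwidth=1.0):
    """Gaussian kernel evaluated between all pairs of scores in a and b."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * bandwidth ** 2))

def mmd_penalty(scores_group0, scores_group1, bandwidth=1.0):
    """Squared MMD between the score distributions of two groups.
    Driving this towards 0 pushes the two distributions closer together."""
    k00 = rbf_kernel(scores_group0, scores_group0, bandwidth).mean()
    k11 = rbf_kernel(scores_group1, scores_group1, bandwidth).mean()
    k01 = rbf_kernel(scores_group0, scores_group1, bandwidth).mean()
    return k00 + k11 - 2 * k01

# During training: total_loss = task_loss + penalty_weight * mmd_penalty(scores_a, scores_b)
```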

AI Fairness 360: Developed and open-sourced by IBM, AI Fairness 360 offers various pre-processing, in-processing, and post-processing mitigation algorithms to reduce discrimination in ML models. Unlike the other libraries, which are Python-only, AI Fairness 360 also provides APIs in R.
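As a brief sketch of the pre-processing style in AI Fairness 360, here is the Reweighing algorithm applied to a hypothetical DataFrame (column names and group encodings are illustrative, and constructor arguments may differ slightly between library versions):

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.algorithms.preprocessing import Reweighing

# Hypothetical data: `approved` is the label, `group` the protected attribute (1 = privileged)
df = pd.DataFrame({"income":   [30, 80, 45, 60, 25, 90],
                   "group":    [0, 1, 0, 1, 0, 1],
                   "approved": [0, 1, 1, 1, 0, 1]})
dataset = BinaryLabelDataset(df=df, label_names=["approved"],
                             protected_attribute_names=["group"])

# Reweighing assigns instance weights so that the label and the group become statistically independent
rw = Reweighing(unprivileged_groups=[{"group": 0}],
                privileged_groups=[{"group": 1}])
dataset_transf = rw.fit_transform(dataset)
print(dataset_transf.instance_weights)
```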

A few thoughts on fair AI algorithms in the real world

In the following, I will outline a few thoughts I have gathered during the course of building fair AI algorithms in a real-world environment.

  1. Disparate treatment and disparate impact need to be clearly distinguished and called out. The former refers to treating people differently based on prohibited discriminatory factors, which is illegal. The unfairness discussed in this blog post, however, concerns disparate impact. It is worth noting a recent shift in focus: disparate impact can also be illegal regardless of intent, per the US Supreme Court decision Texas Department of Housing and Community Affairs v. Inclusive Communities Project.
  2. Unfairness needs to be measured end-to-end and mitigated on different levels accordingly. Any ML system in production contains more than one model and numerous data processing pipelines, where bias may be introduced at any step. A holistic view is crucial for practitioners to properly identify and prioritize areas for improvement.
  3. Appropriate language needs to be adopted when discussing fairness to avoid ambiguity in communication. For example, “bias” means very different things in a statistical context versus a societal context. This guideline from Microsoft provides a good framework for how to talk and write about fairness in AI.

Conclusion

In this blog post, I introduced why fairness has started to gain traction in building AI solutions, and how it can be approached with the measurement-mitigation methodology. However, we must remind ourselves that fairness in AI is never a purely technical problem, but a sociotechnical problem that requires attention from technical, societal, and legal perspectives.

Huge thanks to Christine Yuen, Yevgeniy Vahlis, Stella Wu, and the BMO AI Capabilities team for helping review this post.

About the author: Baiwu Zhang is an Applied AI Researcher at Bank of Montreal’s AI Capabilities team. Baiwu works on trustworthy machine learning initiatives.
