What is Bias and How Can it be Mitigated?

Holistic AI
Holistic AI Publication
4 min read · Dec 14, 2022
What is bias?

This article was originally posted on the Holistic AI blog, where you can find all our latest articles on AI risk management and AI regulations.

Bias refers to unjustified differences in outcomes for different subgroups. To contextualise this, bias in recruitment could take the form of white candidates being hired at a greater rate than non-white candidates when race is not related to job requirements, and bias in credit scoring could result in males being given a higher score than females when factors such as payment defaults and education are controlled for.
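One common way to quantify such outcome differences is to compare selection rates between groups, often assessed against the informal "four-fifths rule" used in US employment contexts. The sketch below uses invented hiring numbers purely for illustration, not data from any real system:

```python
# Hypothetical hiring outcomes per group: (hired, total applicants).
# All numbers are illustrative.
outcomes = {
    "group_a": (45, 100),  # 45% selection rate
    "group_b": (27, 100),  # 27% selection rate
}

def selection_rate(hired, total):
    return hired / total

def disparate_impact_ratio(outcomes, privileged, unprivileged):
    """Ratio of the unprivileged group's selection rate to the privileged
    group's. Under the informal four-fifths rule, values below ~0.8 are
    often flagged for further scrutiny."""
    priv = selection_rate(*outcomes[privileged])
    unpriv = selection_rate(*outcomes[unprivileged])
    return unpriv / priv

ratio = disparate_impact_ratio(outcomes, "group_a", "group_b")
print(round(ratio, 2))  # 0.6 -> below 0.8, so the gap warrants scrutiny
```

A ratio this far below 1.0 does not prove unjustified bias on its own, but it signals that the outcome gap should be investigated and explained.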

As humans, we can be biased whether we are aware of it or not. Unconscious bias refers to implicit associations that we are not aware of that cause us to favour one group over another. For example, a candidate’s name might influence a hiring manager’s opinions about an applicant, even if they are not actively seeking out a hire with specific characteristics. We can also be consciously biased, making decisions based on protected characteristics rather than merit. For example, a hiring manager might actively seek a male applicant for a leadership position.

Algorithms can also be biased, but not in the same way as humans. Algorithms identify patterns in data, even if these patterns are not intuitive or recognised by humans. Because of this, algorithms can reflect and amplify biases in the data they were trained on. This means that they can display the same biases as humans, but not for the same reasons.

Sources of bias in algorithms

Using algorithms to make decisions can introduce unique sources of bias. Some of these sources are:

  • Human biases — if algorithms are trained based on prior human decisions, and these decisions are biased, then the algorithm will reflect these human biases.
  • Unbalanced training data — if the algorithm is trained on data that is dominated by particular subgroups, the algorithm can be biased against the subgroups that are underrepresented.
  • Differential feature use — if an algorithm uses different features to evaluate the performance of different groups, the algorithm itself can be considered biased and can produce biased outcomes.
  • Proxy variables — even if protected attributes are not used by an algorithm to make a decision, proxy variables can represent these characteristics. For example, zip code can be used as a proxy for race.
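A simple first check for proxy variables is to measure the correlation between each candidate feature and the protected attribute. The sketch below uses invented values and a hand-rolled Pearson correlation; in practice you would run a check like this over every feature in the training set:

```python
import statistics

# Hypothetical applicant records: a numeric feature (e.g. an encoded
# zip-code score) alongside a binary protected attribute. Values are made up.
feature = [3.1, 2.9, 3.0, 1.2, 1.0, 1.1]
protected = [1, 1, 1, 0, 0, 0]

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(feature, protected)
# A correlation near 1.0 (or -1.0) suggests the feature may act as a proxy
print(f"correlation with protected attribute: {r:.2f}")
```

Correlation alone is a blunt instrument, since a feature can encode a protected attribute through non-linear relationships, but it is a cheap and widely used screening step.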

Bias mitigation strategies

The appropriate mitigation strategies depend on the source of bias, and they often require technical expertise to implement. However, some ways to mitigate bias are:

  • Obtaining additional data — create a more balanced dataset or gather data from multiple sources to reduce the effect of imbalanced data or human biases
  • Adjusting the hyperparameters of the model — introduce or increase the regularisation of the model to change the way the model fits the data and reduce bias
  • Removing or reweighing features — if a feature has a high correlation with a protected attribute, this can mean that it is acting as a proxy variable. Removing such features or reweighing them to have a smaller influence can help to mitigate bias
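Reweighing can also be applied to training samples rather than features. A well-known preprocessing approach, due to Kamiran and Calders, assigns each (group, label) combination a weight so that group membership and outcome are statistically independent in the weighted data. The sketch below uses invented training rows to show the idea:

```python
from collections import Counter

# Hypothetical training rows: (protected_group, label). Weights are chosen so
# that each group's outcomes carry the influence they would have if group and
# label were independent (the reweighing idea of Kamiran & Calders).
rows = [("a", 1), ("a", 1), ("a", 0), ("b", 1), ("b", 0), ("b", 0)]

n = len(rows)
group_counts = Counter(g for g, _ in rows)
label_counts = Counter(y for _, y in rows)
pair_counts = Counter(rows)

def reweigh(group, label):
    # weight = expected count under independence / observed count
    expected = group_counts[group] * label_counts[label] / n
    return expected / pair_counts[(group, label)]

weights = [reweigh(g, y) for g, y in rows]
# Over-represented (group, label) pairs get weights below 1,
# under-represented pairs get weights above 1.
print(weights)
```

These weights can then be passed to any learner that accepts per-sample weights (for example, the `sample_weight` argument accepted by many scikit-learn estimators' `fit` methods).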

Bias Audits

Bias audits of automated employment decision tools will soon be required under legislation passed by the New York City Council, meaning that any employer using an automated decision tool to evaluate candidates residing in New York City must commission an audit. These audits must be carried out by an impartial third-party auditor with the relevant expertise to examine the algorithm and its outputs for bias against protected groups.

While NYC is the first jurisdiction to mandate bias audits, and only for automated decision tools used in recruitment, legislation in Colorado prohibits insurance providers from using data and algorithms that result in unjustified discrimination, or bias, against protected groups. Bias audits can also contribute to the risk management of algorithmic systems, which is particularly important for high-risk systems, since these will be required to have risk management strategies under the forthcoming EU AI Act. Opting for bias audits and implementing risk management strategies for your AI systems can empower you to adopt AI with confidence.

About Holistic AI

Holistic AI is an AI risk management company that aims to empower enterprises to adopt and scale AI confidently. The AI risk management software platform audits and assures AI systems’ code, data, policies, and processes. As a result, enterprises can maximise their AI’s value, minimise reputational, legal and commercial risks, and accelerate innovation.

We have pioneered the field of AI risk management and have deep practical experience auditing AI systems, having reviewed 100+ enterprise AI projects covering 20,000+ different algorithms. Our clients and partners include Fortune 500 corporations, SMEs, governments and regulators.
