Different Types of Bias Found in Artificial Intelligence Algorithms

Jeffery Recker
4 min read · Jan 17, 2023


Many forms of “algorithmic bias” can appear in the results of artificial intelligence and autonomous systems. Here are 13 types of algorithmic bias you may encounter.

Artificial intelligence and machine learning algorithms used for automated decision-making can produce unintended discriminatory treatment of different groups of people when algorithmic bias is present. These biases can be introduced and surface at different stages of an algorithm’s life cycle, from development through training and deployment.

As someone who has worked in algorithmic auditing for over half a decade, I have encountered many forms of bias in the results of artificial intelligence algorithms. People need to understand these biases to prevent potential harm to themselves and others. The list below is in no particular order and serves only as a guide to understanding and identifying potential biases in an algorithm. The examples draw on real cases easily found with a Google search, on personal experience from my time working in the field of Responsible AI, and on hypothetical scenarios created for this article.

  1. Gender bias — occurs when one gender is favored over another, or when a gender is not represented in the results at all, as can happen with transgender people. A well-known example would be a hiring algorithm that favors men over women for certain kinds of roles.
  2. Racial bias — occurs when certain racial groups are favored over others. For example, an algorithm used in the criminal justice system to estimate the likelihood that someone will reoffend might score one racial group more harshly than another simply because of race. A minimal sketch of how such outcome gaps can be measured appears after this list.
  3. Age bias — occurs when people from certain age groups are favored over others. This can be a byproduct of data representation, as older generations often generate less of the data needed to represent them on equal footing with younger generations.
  4. Socio-economic bias — occurs when people with economic advantages are favored over people in less fortunate situations. Examples include features that correlate with wealth, such as having played varsity sports in high school, standardized exam scores like the ACT or SAT, college grade point averages, and participation in student organizations.
  5. Confirmation bias — occurs when an algorithm’s previous results feed back into its future decisions, reinforcing the patterns it has already produced. A recommendation system that keeps suggesting content similar to what it has already shown, narrowing what users see over time, is a common example.
  6. Representation bias — occurs when a group of people is absent or underrepresented in the algorithm’s data. An example would be Indigenous people not being represented to the same degree as other groups in many of the data sets used to train artificial intelligence algorithms.
  7. Concept drift — occurs when the patterns a model learned no longer match current conditions and the model does not adjust to the change. An example would be an algorithm that predicts an individual’s purchasing decisions but fails to adjust after they move to a new city, so the person keeps seeing advertisements for stores in the city they used to live in rather than the city they live in now. A simple monitoring sketch for this kind of drift appears after this list.
  8. Privacy bias — occurs when a group of people is not represented in the data of an automated system because they chose not to have their data collected. An example would be a social media platform that does not collect data from a group of users and, as a consequence, presents them with different results on the platform.
  9. Disability bias — occurs when an algorithm fails users with mental or physical disabilities, such as stroke survivors, people with autism, and people with post-traumatic stress disorder. An example would be a facial recognition algorithm that attempts to read human emotions and fails to recognize them in someone who has difficulty expressing feelings outwardly.
  10. Language bias — occurs when an algorithm treats people who are not native speakers of the language it was built for differently from native speakers. An example would be an algorithm grading college papers that fails to fully understand what a non-native writer is saying and unfairly penalizes them.
  11. Regional bias — occurs when certain regions, or the people who live in them, are not accurately represented in an algorithm’s data. This could be tied to someone’s ZIP code, as in the United States, or to entire nations that are poorly represented in the data. An example would be a loan approval algorithm trained on historical data that fails to account for the historical reasons certain people live where they live and ultimately penalizes them for it.
  12. Recency bias — occurs when an algorithm weights recent data more heavily than historical data. An example would be an algorithm that tries to predict trends in financial markets and relies on current live data while neglecting the historical record of those same markets.
  13. Culture bias — occurs when an algorithm prefers one culture over another. An example would be a facial recognition system that fails to identify someone wearing a hairstyle representative of their cultural heritage because the algorithm’s training data is dominated by other, more common hairstyles.
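To make the first two items more concrete, here is a minimal, hypothetical Python sketch of one common screening check: comparing selection rates across groups and flagging a large gap. The data, the group labels, and the 0.8 threshold (the “four-fifths rule” of thumb) are illustrative assumptions, not a prescribed audit procedure.

```python
# Hypothetical sketch: compare selection rates across groups for a binary
# decision (e.g., a hiring algorithm's "advance to interview" outcome).
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs, where selected is True/False."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {group: selected[group] / totals[group] for group in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group selection rate to the highest."""
    return min(rates.values()) / max(rates.values())

# Illustrative outcomes from an automated screening step.
outcomes = [("men", True), ("men", True), ("men", False), ("men", True),
            ("women", True), ("women", False), ("women", False), ("women", False)]

rates = selection_rates(outcomes)          # {'men': 0.75, 'women': 0.25}
ratio = disparate_impact_ratio(rates)      # 0.33
print(rates)
print(f"impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the "four-fifths" rule of thumb, used here as an assumption
    print("Possible adverse impact -- investigate before deployment.")
```

A check like this only surfaces a disparity in outcomes; deciding whether that disparity is unjustified still requires human review of the context and the features the model relies on.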
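Item 7 describes a model that fails to adapt as the world changes. In practice, teams often watch for this with a simple monitoring signal. The sketch below compares a model’s rolling accuracy against its historical baseline; the window size, the 0.10 drop threshold, and the `DriftMonitor` class are assumptions made for illustration, not a standard method.

```python
# Hypothetical sketch: flag possible concept drift when recent accuracy
# falls well below the accuracy the model achieved historically.
from collections import deque

class DriftMonitor:
    def __init__(self, baseline_accuracy, window=200, max_drop=0.10):
        self.baseline = baseline_accuracy
        self.recent = deque(maxlen=window)   # rolling record of hits and misses
        self.max_drop = max_drop

    def record(self, prediction, actual):
        self.recent.append(prediction == actual)

    def drifting(self):
        if not self.recent:
            return False
        rolling_accuracy = sum(self.recent) / len(self.recent)
        return (self.baseline - rolling_accuracy) > self.max_drop

# Usage: feed live predictions and ground truth as they arrive, and review
# or retrain the model when drifting() returns True.
monitor = DriftMonitor(baseline_accuracy=0.90)
for prediction, actual in [("store_a", "store_b"), ("store_a", "store_b")]:
    monitor.record(prediction, actual)
print(monitor.drifting())   # True once recent accuracy falls well below baseline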

Biases can appear in many different forms and in all kinds of algorithms. The best way to reduce the risk associated with algorithmic bias is to understand the distinct types of bias that can occur, proactively think about how an algorithm could potentially harm someone, and put measures in place to prevent that harm.



Jeffery Recker

I am the COO of BABL AI, a company that audits AI algorithms for ethical bias and legal compliance. Follow me on LinkedIn at www.linkedin.com/in/jeffery-recker