Identifying Bias in AI

Shadeer Sadikeen
Nov 8, 2023

Bias is the main villain lurking in data: it degrades both our models and the processes built on them.

Machine learning (ML) has the potential to improve lives, but it can also be a source of harm, as it can exhibit various types of biases.

There are six common types of bias.

These biases can lead to unfair and discriminatory outcomes, affecting individuals based on race, sex, religion, and other categories.

In this tutorial, we will cover the biases that can affect ML applications, including:

  1. Historical Bias: This occurs when the training data contains biased information, leading the ML model to learn and reproduce the same biases.
  2. Representation Bias: This happens when a group is underrepresented in the training data, causing the model to perform poorly for that group (see the sketch after this list).
  3. Measurement Bias: This arises from the way in which data is collected or measured, leading to inaccurate or skewed results.
  4. Algorithmic Bias: This occurs when there is a problem within the algorithm itself, causing it to produce systematically erroneous results.
  5. Automation Bias: This is a tendency to favor results generated by automated systems over those generated by non-automated systems, irrespective of their error rates.
  6. Group Attribution Bias: This is a tendency to generalize what is true of individuals to an entire group to which they belong, leading to unfair stereotyping and assumptions.
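
To make the first two biases concrete, here is a minimal sketch of two quick checks: how well each group is represented in the data, and how the model performs per group. Everything in it is illustrative rather than from a real dataset: the column names `feature_1`, `group`, and `label` are hypothetical, and scikit-learn is assumed as the modeling library.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Toy dataset; "group" stands in for a hypothetical sensitive attribute.
df = pd.DataFrame({
    "feature_1": [0.2, 0.4, 0.1, 0.9, 0.8, 0.7, 0.3, 0.6, 0.5, 0.95],
    "group":     ["a"] * 8 + ["b"] * 2,
    "label":     [0, 0, 0, 1, 1, 1, 0, 1, 1, 0],
})

# 1) Representation bias: check how each group is represented in the data.
print(df["group"].value_counts(normalize=True))  # group "b" is only 20%

# 2) Per-group performance: a model that looks accurate overall can still
#    fail for an underrepresented group. (In practice you would evaluate on
#    held-out data; we reuse the training data here to keep the sketch short.)
model = LogisticRegression().fit(df[["feature_1"]], df["label"])
df["pred"] = model.predict(df[["feature_1"]])

for group, part in df.groupby("group"):
    print(group, accuracy_score(part["label"], part["pred"]))
```

In this toy data, the majority group "a" follows one pattern while the tiny group "b" follows the opposite one, so the overall accuracy looks healthy while the per-group numbers reveal that "b" is poorly served. That gap, not the overall score, is what these bias checks are meant to surface.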

By understanding and addressing these biases, we can work towards creating fairer and more accurate ML applications that do not discriminate against or harm individuals based on their background or other factors.
