9 Ways to Reduce Bias in Artificial Intelligence Algorithms

As the use of artificial intelligence rises sharply, so does concern about "algorithmic bias." Here are 9 ways to reduce bias in artificial intelligence algorithms.

Jeffery Recker
4 min read · Jan 24, 2023
Image by mindandi on Freepik

Algorithmic bias is the concept that artificial intelligence algorithms and autonomous systems produce unintended discriminatory results that affect various groups of people. These results often stem from historical biases in society, which artificial intelligence can then automate and perpetuate. Below is a list of things companies and organizations can do to reduce the risks associated with algorithmic bias.

  1. Ethical standards — are a critical starting point, ensuring that an organization considers emerging laws and regulations and the potential risks associated with its algorithms. They are an important step in establishing a culture of proactive thinking and action around the risks of artificial intelligence algorithms and autonomous systems, such as algorithmic bias.
  2. Education — is a critical tool for mitigating the risk of algorithmic bias. Educating developers to identify and measure well-known risks, and to see their algorithms in contexts they might not have considered before, enables them to put safeguards in place against those risks. It is also important to educate an algorithm’s users and relevant stakeholders on how bias can arise. If you have a hiring algorithm, for example, that is highly specialized for each company that uses it, educating those users on how confirmation bias or recency bias can emerge from the way the algorithm is used can reduce the potential harm to all relevant stakeholders.
  3. Diversity of thought — is an excellent way for an organization to identify the risks associated with an algorithm before they arise. Whether at the development, testing, or implementation stage of an algorithm, the more perspectives an organization can draw on, the more potential harms it can identify. This goes beyond the race and sex distribution of a development team. Look at your organization and ask: are these people from the same part of the same country? Did they all graduate from a small handful of universities? Do they all have the same degrees? Have they all worked for the same handful of tech companies? It is also essential to continually work to make your team more diverse and to maintain a work culture that encourages speaking out about the risks associated with the products the organization is developing.
  4. Transparency — helps an algorithm’s users and relevant stakeholders understand the lengths an organization has gone to in identifying and mitigating the risks associated with its algorithms. When an organization is transparent about an algorithm’s potential risks, users and stakeholders can apply the system in contexts that reduce the direct risk to them. And when an organization incorporates user feedback, it creates feedback loops that let it react and adapt to newly emerging risks from its algorithms and autonomous systems.
  5. Implementing metrics for bias, accuracy, and other technical properties — allows organizations to test and improve their algorithms continuously, since bias can occur at any stage of an algorithm’s life. This also helps an organization align with the EU AI Act’s notion of post-market monitoring. Furthermore, an organization should constantly be thinking about new metrics to test (the first sketch after this list shows two common bias metrics).
  6. Algorithm Risk and Impact Assessment — is another very effective tool for reducing the risk of algorithmic bias. By surfacing potential risks in the socio-technical system in which an algorithm operates, these assessments help identify the relevant stakeholders and the ways they could be harmed. Once a company or organization understands the potential risks associated with its algorithm, it can put measures in place to identify and mitigate them.
  7. Testing with representative datasets — is an excellent way for an organization to identify where its algorithms need improvement. After doing a risk assessment or gathering customer feedback to identify a group of people your algorithm could potentially harm, use an ethically sourced dataset that represents that group to test whether the algorithm actually disadvantages them (see the per-group evaluation sketch after this list).
  8. Explainable and reproducible results — provide evidence that an organization has considered the various risks associated with its algorithms and tested for them. They can also reveal when an algorithm relies on features that could lead to bias or discrimination in particular circumstances (the permutation-importance sketch after this list shows one simple way to check this).
  9. Algorithm Audit — is one of the most effective ways to mitigate the risks associated with artificial intelligence algorithms and autonomous systems. There are many ideas as to what an algorithmic audit is, and as more laws and regulations emerge, like the New York City Algorithmic Bias Law (Local Law 144) and the EU Artificial Intelligence Act, we will eventually have one clear definition. For the sake of this article, we look at companies like BABL AI, which define an algorithmic audit as a criteria-based process audit performed by trained auditors to independently verify and impartially evaluate the claims made by an organization. Such an audit allows an organization to test its algorithms for disparate impact, to document what internal governance exists and how it is managed and monitored in relation to the risk of algorithmic bias, and to complete a socio-technical risk assessment. Once an organization meets these criteria, the auditors verify its claims against the criteria established in the audit.
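
To make point 5 concrete, here is a minimal sketch of two widely used group-fairness metrics: the disparate impact ratio (the ratio of selection rates between two groups, often checked against the "four-fifths rule" threshold of 0.8) and the demographic parity difference. The predictions and group labels below are hypothetical placeholders, not data from any real system.

```python
# Minimal sketch of two group-fairness metrics; all data is hypothetical.
import numpy as np

def selection_rate(y_pred: np.ndarray) -> float:
    """Fraction of positive (e.g., 'hire' or 'approve') predictions."""
    return float(np.mean(y_pred == 1))

def disparate_impact(y_pred, group, unprivileged, privileged) -> float:
    """Ratio of selection rates; values below ~0.8 often warrant review."""
    rate_u = selection_rate(y_pred[group == unprivileged])
    rate_p = selection_rate(y_pred[group == privileged])
    return rate_u / rate_p

def demographic_parity_diff(y_pred, group, unprivileged, privileged) -> float:
    """Absolute gap in selection rates between the two groups."""
    return abs(selection_rate(y_pred[group == unprivileged])
               - selection_rate(y_pred[group == privileged]))

# Hypothetical model decisions (1 = positive outcome) and group labels.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

print(f"Disparate impact ratio: "
      f"{disparate_impact(y_pred, group, 'B', 'A'):.2f}")
print(f"Demographic parity difference: "
      f"{demographic_parity_diff(y_pred, group, 'B', 'A'):.2f}")
```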
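
For point 7, the sketch below shows why evaluation on a representative test set should be broken out per group: overall accuracy can look acceptable while one group fares far worse. Again, the labels, predictions, and group assignments are hypothetical.

```python
# Per-group evaluation on a held-out test set; all data is hypothetical.
import numpy as np

def accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    return float(np.mean(y_true == y_pred))

# Hypothetical ground truth, model predictions, and group membership.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 1])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

# The overall number can mask a large gap between groups.
print(f"Overall accuracy: {accuracy(y_true, y_pred):.2f}")
for g in np.unique(group):
    mask = group == g
    print(f"  group {g}: accuracy {accuracy(y_true[mask], y_pred[mask]):.2f} "
          f"(n={int(mask.sum())})")
```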
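
For point 8, one simple and reproducible way to spot reliance on a problematic feature is permutation importance: shuffle one feature at a time and measure how much accuracy drops. The toy model and feature names below, including the "zip_proxy" stand-in for a protected-attribute proxy, are hypothetical.

```python
# Permutation-importance sketch with a hypothetical model and features.
import numpy as np

rng = np.random.default_rng(0)

def model_predict(X: np.ndarray) -> np.ndarray:
    """Hypothetical trained model; swap in your own predictor."""
    return (X[:, 0] + 0.5 * X[:, 1] > 0.75).astype(int)

X = rng.random((200, 3))   # columns: score, zip-code proxy, age (all toy)
y = model_predict(X)       # labels the toy model fits perfectly

def accuracy_drop(X, y, feature: int, n_repeats: int = 20) -> float:
    """Mean drop in accuracy when one feature's values are shuffled."""
    base = np.mean(model_predict(X) == y)
    drops = []
    for _ in range(n_repeats):
        Xp = X.copy()
        Xp[:, feature] = rng.permutation(Xp[:, feature])  # break the link
        drops.append(base - np.mean(model_predict(Xp) == y))
    return float(np.mean(drops))

# A large drop for a proxy feature suggests the model leans on it.
for i, name in enumerate(["score", "zip_proxy", "age"]):
    print(f"{name}: accuracy drop {accuracy_drop(X, y, i):.3f}")
```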

Building these 9 practices into an organization’s production, testing, and management of algorithmic systems can help reduce the risk of algorithmic bias.


Jeffery Recker

I am the COO of BABL AI, a company that audits AI algorithms for ethical bias and legal compliance. Follow me on LinkedIn: www.linkedin.com/in/jeffery-recker