Unlocking the Secret to Eliminating Bias in AI

UrbanExplorerJulia
Women in Technology
11 min read · Apr 6, 2024

Artificial Intelligence (AI) stands as a beacon of innovation across industries, yet it is not immune to ingrained biases, which can mirror and perpetuate existing societal disparities.

Bias in AI manifests through historical data, obscured decision-making processes, and a lack of diversity in AI design teams, risking the delivery of services that are not equitable for all users (Nazer et al., 2023).

The complexity of AI systems demands robust, ethical frameworks from inception to implementation, ensuring that the biases arising at each phase can be identified and mitigated.

To counteract these biases, preventative strategies must be implemented throughout the AI lifecycle. Pre-processing techniques address data representation issues, while in-processing adjustments and post-processing reviews refine learning algorithms and their outcomes, ensuring fair and just applications (Bellamy et al., 2019).

These strategies can be supplemented by tools such as IBM’s AI Fairness 360, which offers valuable metrics and debiasing algorithms (IBM Research, 2018), and by frameworks such as DECIDE-AI and CONSORT-AI, which provide a structured approach to ethical AI development in sectors such as healthcare and finance (Mehrabi et al., 2021).
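To make this concrete, here is a minimal sketch of quantifying dataset bias with AI Fairness 360 before any model is trained. The toy DataFrame, the column names (sex, hired), and the group encodings are illustrative assumptions, not drawn from the cited sources.

```python
# A minimal sketch: quantifying dataset bias with IBM's AI Fairness 360.
# The data, column names, and group encodings are illustrative assumptions.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy historical hiring data: hired=1 is the favourable outcome;
# sex is the protected attribute (1 = privileged group in this example).
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [0.9, 0.7, 0.6, 0.8, 0.9, 0.7, 0.6, 0.8],
    "hired": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Disparate impact below ~0.8 is a common rule-of-thumb red flag;
# a statistical parity difference of 0 would indicate parity.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```

Running a check like this on raw training data is typically the first step of the pre-processing stage described above.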

This briefing paper aims to help technologists, developers, and industry stakeholders understand the biases that can underpin AI and the dilemma this presents to society. The ethical handling of AI bias will shape both its immediate and its future application within our industry.

AI’s increasing role in various sectors signals vast improvements in efficiency and decision-making. Yet, alongside its advancements, AI presents significant ethical challenges, chiefly biases that could perpetuate unfairness and exacerbate social inequities (Frost, 2024).

Key biases include racial/ethnic bias, gender bias, age bias, disability bias, and biases related to English as a second language (ESL) and socio-economic status. Biases can also be embedded within the machine learning development pipeline, as seen in Figure 3.

AI algorithms, though intended to deliver impartiality, can unintentionally perpetuate deep-rooted biases because they are often driven by correlations rather than causation. The reliance on historical data in AI development often carries with it the prejudices intrinsic to that data. These issues go beyond technical errors to reflect broader societal concerns that AI needs to address.

Figure 1: Image from the New York Times article “Dealing With Bias in Artificial Intelligence” (Smith, 2019).

This briefing paper explores the pervasive biases embedded in AI, from the historical data that informs it to the development and deployment stages. It highlights the hidden biases in AI’s complex systems, and the risk they pose for continuing social inequalities due to a lack of thorough oversight (Frost, 2024).

This paper emphasizes the need for clear methodologies such as Explainable AI (XAI) and comprehensive regulatory measures (Nazer et al., 2023) to steer AI development in a direction that ensures technological progress aligns with ethical principles.

Background

The rise of generative AI, notably OpenAI’s ChatGPT, marks a significant shift in technology availability, moving from the hands of large tech corporations to the wider public who only require an internet connection to access AI. As AI becomes ingrained in essential industries such as healthcare and financial services, it promises unparalleled efficiency and data processing capabilities.

This advancement is shadowed by the risk of AI perpetuating historical biases ingrained in its training data, which is a cause for concern in domains that directly affect human lives. The inherent opacity of deep learning models presents further challenges in bias detection, calling for stringent ethical oversight to prevent the exacerbation of social disparities (Frost, 2024).

Critical to the bias mitigation effort are frameworks such as TEHAI and CONSORT-AI in the healthcare context. Tools like IBM’s AI Fairness 360 provide guidance and methods for detecting and counteracting bias during AI development (Bellamy et al., 2019).

Ensuring diversity and inclusivity in AI development teams is crucial to diminish the reflection of historical prejudices in AI outputs (Frost, 2024). AI systems must embody the principles of fairness and equity, thereby avoiding the reinforcement of social inequalities (Buolamwini, 2017; Mehrabi et al., 2021; Dankwa-Mullan et al., 2021; Smith, 2019).

Gartner emphasises five core AI principles as fundamental to ethical AI practice: Accountable; Secure and Safe; Explainable and Transparent; Fair; and Human-Centric and Socially Beneficial (Frost, 2024).

Figure 2: This model was produced by Gartner, summarising five AI Ethics Principles. From Frost, D. (2024, February 19). Quick Answer: How Can Explainable AI Eliminate Bias and Expand Lending? Retrieved from Gartner: https://www.gartner.com/document-reader/document/5205363?ref=solrAll&refval=404666069

Main Ethical Dilemma

Alongside AI’s many benefits, it faces challenges related to inherent biases. According to Gartner research, the critical factors contributing to AI bias include:

1. Data-Driven Bias: AI systems inherit historical prejudices, especially when datasets include biased variables, such as postal codes or job titles, leading to discrimination (Frost, 2024).

2. Lack of Representation: AI training often overlooks the full spectrum of user demographics, marginalising minority groups (Frost, 2024).

3. Model Opacity: Complex AI models, especially deep learning, obscure decision rationales, hiding biases (Frost, 2024).

4. Historical Prejudices: AI trained on data from institutions with discriminatory pasts may continue those legacies (Frost, 2024).

5. Complex Algorithmic Decision-Making: Intricate AI algorithms can conceal biases within their decision processes (Frost, 2024).

6. Governance Shortfalls: Insufficient oversight in AI development allows biases to persist (Frost, 2024).

7. Inadequate Mitigation of Biases: While strategies to counteract bias exist, their implementation often falls short, leaving biases inadequately addressed (Frost, 2024).

8. Creator Biases in AI Design: Developers’ subjective perspectives may unintentionally influence AI systems, stressing the need for diverse development teams (Frost, 2024).

The primary ethical challenge remains the risk of AI entrenching societal biases, as seen in the COMPAS system’s risk assessments, which disproportionately overestimated the probability of reoffending for Black defendants. Another example is bias in job-advertising algorithms, where the algorithm chose not to show high-paying job advertisements to women (Ntoutsi et al., 2020). Obermeyer and Mullainathan (2019) highlight the importance of transparency in AI’s clinical use, citing instances of racial bias in healthcare algorithms. Such biases threaten to perpetuate historical injustices.

Institutions are turning to Explainable AI (XAI) to counteract these issues, seeking transparency and fairness within AI models. The AI Fairness 360 toolkit (Bellamy et al., 2019) exemplifies these efforts, providing metrics to assess and mitigate biases. Initiatives like the Monetary Authority of Singapore’s Veritas toolkit support industry-wide shifts towards more equitable AI frameworks (Frost, 2024).
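As an illustration of how XAI can expose a hidden proxy variable, the sketch below uses the open-source SHAP library to measure how much each feature contributes to a model’s predictions. The model, the features (including postal_code as a deliberate proxy), and the data are hypothetical stand-ins, not taken from the cited sources.

```python
# Illustrative XAI sketch: using SHAP to surface reliance on a proxy feature.
# All features, data, and the model choice are hypothetical assumptions.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "income":      rng.normal(50_000, 15_000, 500),
    "postal_code": rng.integers(0, 10, 500),  # potential proxy for protected status
    "tenure":      rng.integers(0, 30, 500),
})
# Synthetic labels that deliberately leak through postal_code
y = ((X["income"] > 50_000) | (X["postal_code"] < 3)).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes per-feature contributions for each prediction
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# A large average attribution for postal_code flags a possible proxy bias
mean_abs = np.abs(shap_values).mean(axis=0)
for name, value in zip(X.columns, mean_abs):
    print(f"{name}: {value:.4f}")
```

If postal_code dominates the attributions, a lender would have concrete evidence that the model is leaning on a variable that may encode protected characteristics.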

To mitigate biases, a three-stage approach is recommended (a code sketch of the pre-processing stage follows this list):

1. Pre-processing: This initial step seeks to cleanse the data of biases before AI integration, aiming to equalise data representation (refer to Figure 3).

2. In-processing: This involves embedding ethical guidelines within the AI learning algorithms to discourage biased predictions.

3. Post-processing: Implemented after AI training, this stage adjusts AI outputs to fairly treat all user groups.
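As a minimal sketch of the pre-processing stage, AI Fairness 360’s Reweighing algorithm rebalances instance weights so the outcome becomes statistically independent of the protected attribute before training. The toy data and column names are assumptions carried over from the earlier sketch.

```python
# Pre-processing sketch: AIF360's Reweighing on toy data (all values illustrative).
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.algorithms.preprocessing import Reweighing
from aif360.metrics import BinaryLabelDatasetMetric

df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "hired": [1, 1, 1, 0, 1, 0, 0, 0],
})
dataset = BinaryLabelDataset(df=df, label_names=["hired"],
                             protected_attribute_names=["sex"])

privileged, unprivileged = [{"sex": 1}], [{"sex": 0}]

# Learn per-instance weights that equalise outcome rates across groups
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)

# The weighted parity difference should now be approximately zero
metric = BinaryLabelDatasetMetric(dataset_transf,
                                  privileged_groups=privileged,
                                  unprivileged_groups=unprivileged)
print("Post-reweighing parity difference:", metric.statistical_parity_difference())
```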

Despite this three-stage approach, challenges remain: the nuanced nature of bias may never be entirely eliminated, and superficial rather than comprehensive solutions cannot resolve deep-seated issues.

Compounding the dilemma is the lag in legal frameworks, which struggle to keep pace with fast-moving AI advancements and often lack clear directives for handling AI decision-making. Where AI lacks decision-making transparency, it may produce unintended discriminatory results (Ntoutsi et al., 2020).

Resolving AI bias is a complex endeavour that extends beyond technical fixes; human oversight and accountability must be prioritised. Refer to Figure 3 below, which outlines the many biases that can occur at all stages of the machine learning development pipeline.

Figure 3: Machine Learning Pipeline. From Nazer et al. (2023), Bias in artificial intelligence algorithms and recommendations for mitigation. PLOS Digital Health, 2(6), e0000278. https://doi.org/10.1371/journal.pdig.0000278

Critical Analysis Framework Applied to Bias in AI

Understanding the Situation: As AI evolves and becomes a digital reality, the ethical implications surrounding its development, implementation, and governance cannot be ignored. Biases in AI systems can occur when historical data, model opacity, and inadequate governance intersect, as detailed in the sections above.

Identifying the Stakeholders: Key stakeholders include AI developers, users, underrepresented groups, industry stakeholders, policymakers, researchers, ethicists, and society at large. Secondary stakeholders encompass regulatory bodies and future users of AI technologies.

Isolating the Major Ethical Dilemma: The crucial question is whether AI developers will implement effective strategies to mitigate biases, creating an equitable technological environment that balances innovation with human oversight.

What are the Legal Implications? According to the Australian Securities and Investments Commission, there is currently a lack of specific legal regulation addressing AI bias (Longo, 2024), which calls for the creation of comprehensive frameworks to oversee ethical AI development.

Informal and Formal Guidelines: Biased AI would likely fail ethical ‘tests’ and contravene established policies on equality, demanding rigorous adherence to ethical standards.

Ethical Principles: Considering consequentialist and deontological ethics, AI biases unjustly harm marginalised groups, violating duties towards equitable treatment (Stahl, 2021).

Strategies for Mitigating Bias in the AI Machine Learning Pipeline:

1. Pre-Processing: Neutralising data biases before AI integration.

2. In-Processing: Incorporating ethical guidelines into AI’s learning phase (see the sketch following this list).

3. Post-Processing: Adjusting AI outputs post-training for fairness.
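For the in-processing stage, AIF360 provides algorithms such as PrejudiceRemover, which adds a fairness penalty directly to the learner’s objective. The sketch below is illustrative only: the toy data mirrors the earlier examples, and the eta penalty strength is an arbitrary choice.

```python
# In-processing sketch: AIF360's PrejudiceRemover (data and values illustrative).
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.algorithms.inprocessing import PrejudiceRemover

df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0] * 25,
    "score": [0.9, 0.7, 0.6, 0.8, 0.9, 0.7, 0.6, 0.8] * 25,
    "hired": [1, 1, 1, 0, 1, 0, 0, 0] * 25,
})
dataset = BinaryLabelDataset(df=df, label_names=["hired"],
                             protected_attribute_names=["sex"])
train, test = dataset.split([0.7], shuffle=True, seed=0)

# eta sets the strength of the fairness regulariser (a tunable assumption)
model = PrejudiceRemover(eta=25.0, sensitive_attr="sex", class_attr="hired")
model.fit(train)
predictions = model.predict(test)  # returns a dataset copy with debiased labels
```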

Tools and Initiatives: The AI Fairness 360 toolkit (IBM Research, 2018) and other initiatives provide frameworks for fair AI development, emphasising transparency and accountability (Bellamy et al., 2019).
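Completing the triad, a post-processing sketch using AIF360’s equalised-odds algorithm is shown below; it adjusts a trained classifier’s outputs rather than the model itself. The synthetic data, the logistic-regression model, and the seeds are all assumptions for illustration.

```python
# Post-processing sketch: equalised odds with AIF360 (synthetic, illustrative data).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from aif360.datasets import BinaryLabelDataset
from aif360.algorithms.postprocessing import EqOddsPostprocessing

rng = np.random.default_rng(1)
n = 400
sex = rng.integers(0, 2, n)
score = rng.normal(0, 1, n)
label = ((score + 0.8 * sex + rng.normal(0, 0.5, n)) > 0.5).astype(float)
df = pd.DataFrame({"sex": sex, "score": score, "y": label})

dataset = BinaryLabelDataset(df=df, label_names=["y"],
                             protected_attribute_names=["sex"])
train, test = dataset.split([0.7], shuffle=True, seed=0)

# Train an ordinary (possibly biased) classifier
clf = LogisticRegression().fit(train.features, train.labels.ravel())
test_pred = test.copy(deepcopy=True)
test_pred.labels = clf.predict(test.features).reshape(-1, 1)

# Learn label adjustments that equalise error rates across groups
eq = EqOddsPostprocessing(unprivileged_groups=[{"sex": 0}],
                          privileged_groups=[{"sex": 1}], seed=42)
eq.fit(test, test_pred)            # compares ground truth with predictions
test_fair = eq.predict(test_pred)  # adjusted, fairer predictions
```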

AI in healthcare promises to improve diagnoses and patient care but is plagued by biases reflecting historical data’s racial, gender, and socioeconomic inequalities (Nazer et al., 2023; Wiens et al., 2019). These biases manifest through several issues:

Sampling Bias: Non-representative datasets fail to serve all demographics equally, potentially worsening disparities (Nazer et al., 2023); a simple representation check is sketched in code after this list.

Measurement Bias: Biased data can skew AI diagnoses and treatments, affecting healthcare quality (Nazer et al., 2023).

Label Bias: Disparate data labelling can lead to unequal allocation of healthcare resources (Nazer et al., 2023).

Missing Data: Omissions in data can neglect specific group needs, increasing marginalisation (Nazer et al., 2023).
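A first-pass check for sampling bias can be as simple as comparing a cohort’s demographic mix against reference population shares. The sketch below is a simplified illustration; the group labels, cohort counts, and census shares are all assumed values.

```python
# Illustrative sampling-bias check: compare cohort shares with a reference population.
import pandas as pd

# Hypothetical training cohort and reference (e.g. census) shares
cohort = pd.DataFrame({"ethnicity": ["A"] * 700 + ["B"] * 250 + ["C"] * 50})
reference_shares = {"A": 0.60, "B": 0.25, "C": 0.15}

observed = cohort["ethnicity"].value_counts(normalize=True)
for group, expected in reference_shares.items():
    obs = observed.get(group, 0.0)
    flag = "UNDER-REPRESENTED" if obs / expected < 0.8 else "ok"
    print(f"{group}: observed {obs:.1%} vs expected {expected:.1%} -> {flag}")
```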

Validation of AI systems must be thorough to avoid overfitting and bias (Nazer et al., 2023). Continuous post-deployment assessments are crucial to detect data drift and ensure ongoing fairness; a minimal drift check is sketched below.
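As a minimal sketch of such post-deployment monitoring (the feature, the distributions, and the alert threshold are assumptions), a two-sample Kolmogorov-Smirnov test from SciPy can compare live inputs against the training distribution:

```python
# Minimal data-drift check: two-sample KS test on one feature (illustrative).
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)
train_age = rng.normal(45, 12, 5_000)  # distribution seen at training time
live_age = rng.normal(52, 12, 1_000)   # distribution arriving in production

stat, p_value = ks_2samp(train_age, live_age)
if p_value < 0.01:  # alert threshold is a tunable assumption
    print(f"Drift detected (KS={stat:.3f}, p={p_value:.2e}); "
          "re-audit fairness metrics before trusting new outputs.")
else:
    print("No significant drift detected.")
```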

A multidisciplinary approach, including diverse stakeholders and experts, is key to developing unbiased AI (Dankwa-Mullan et al., 2021; van de Sande et al., 2022). Tools like IBM’s AI Fairness 360 help detect and correct biases (Bellamy et al., 2019), and initiatives like STANDING TOGETHER promote inclusive AI practices (Nazer et al., 2023).

Refer to Figure 4, which presents detailed Bias Mitigation Strategies in the development phases and implementation of AI.

Figure 4: Bias Mitigation Framework. From Nazer et al. (2023), Bias in artificial intelligence algorithms and recommendations for mitigation. PLOS Digital Health, 2(6), e0000278. https://doi.org/10.1371/journal.pdig.0000278

Conclusion

The integration of AI into our digital infrastructure brings forth transformative advancements but also raises significant ethical concerns. The biases inherent in AI, derived from historical data and embedded in algorithms, pose a risk to equitable service delivery across sectors. These biases, if unchecked, can further entrench societal inequalities. To address this, it’s crucial to engage in rigorous bias mitigation throughout the AI development lifecycle. This includes:

1. Pre-processing to neutralise historical biases in data.

2. In-processing to integrate ethical guidelines within AI algorithms.

3. Post-processing to ensure outcomes are fair and unbiased.

Tools such as IBM’s AI Fairness 360 toolkit and related initiatives offer valuable metrics for debiasing AI, underscoring the necessity for transparency and accountability (Bellamy et al., 2019). The collaboration of a diverse array of stakeholders, from policymakers to AI developers, is essential to craft AI solutions that are not only efficient and innovative but also fair and just.

The ethical management of AI bias will not only impact its immediate applications but will also shape its long-term legacy. It’s imperative to ensure that AI’s progression reflects the core principles of equality and fairness, ultimately fostering a digital environment that is inclusive and beneficial for all members of society. Through dedicated efforts to understand, identify, and resolve biases in AI, we can pave the way for a future where technology acts as an enabler of opportunity rather than a divider.

© Julia Urban, 2024. All rights reserved. No part of this publication may be reproduced, distributed, or transmitted in any form or by any means, including photocopying, recording, or other electronic or mechanical methods, without the prior written permission of the author, except in the case of brief quotations embodied in critical reviews and certain other noncommercial uses permitted by copyright law.

References

Bellamy, R. K. E., Mojsilovic, A., Nagar, S., Ramamurthy, K. N., Richards, J., Saha, D., Sattigeri, P., Singh, M., Varshney, K. R., Zhang, Y., Dey, K., Hind, M., Hoffman, S. C., Houde, S., Kannan, K., Lohia, P., Martino, J., & Mehta, S. (2019). AI Fairness 360: An extensible toolkit for detecting and mitigating algorithmic bias. IBM Journal of Research and Development, 63(4/5), 4:1–4:15. https://doi.org/10.1147/jrd.2019.2942287

Buolamwini, J. (2017, March 9). How I’m fighting bias in algorithms [Video]. TED. https://www.ted.com/talks/joy_buolamwini_how_i_m_fighting_bias_in_algorithms

Dankwa-Mullan, I., Scheufele, E. L., Matheny, M. E., Quintana, Y., Chapman, W. W., Jackson, G., & South, B. R. (2021). A Proposed Framework on Integrating Health Equity and Racial Justice into the Artificial Intelligence Development Lifecycle. Journal of Health Care for the Poor and Underserved, 32(2), 300–317. https://muse.jhu.edu/pub/1/article/789672/summary

DECIDE-AI: new reporting guidelines to bridge the development-to-implementation gap in clinical artificial intelligence. (2021). Nature Medicine, 27(2), 186–187. https://doi.org/10.1038/s41591-021-01229-5

Ntoutsi, E., Fafalios, P., Gadiraju, U., Iosifidis, V., Nejdl, W., Vidal, M.-E., et al. (2020). Bias in data-driven artificial intelligence systems: An introductory survey. WIREs Data Mining and Knowledge Discovery, 10(3), e1356. https://doi.org/10.1002/widm.1356

Frost, D. (2024, February 19). Quick Answer: How Can Explainable AI Eliminate Bias and Expand Lending? Retrieved from Gartner: https://www.gartner.com/document-reader/document/5205363?ref=solrAll&refval=404666069

Gichoya, J. W., Thomas, K. J., Celi, L. A., Safdar, N. M., Banerjee, I., Banja, J. D., Seyyed-Kalantari, L., Trivedi, H., & Purkayastha, S. (2023). AI pitfalls and what not to do: Mitigating bias in AI. British Journal of Radiology, 96(1150). https://doi.org/10.1259/bjr.20230023

IBM Research. (2018, September 20). AI Fairness 360. Aif360.Res.ibm.com. https://aif360.res.ibm.com/

Longo, J. (2024, January 31). We’re not there yet: Current regulation around AI may not be sufficient. ASIC. https://asic.gov.au/about-asic/news-centre/speeches/we-re-not-there-yet-current-regulation-around-ai-may-not-be-sufficient/

Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2021). A Survey on Bias and Fairness in Machine Learning. ACM Computing Surveys, 54(6), 1–35. https://doi.org/10.1145/3457607

Nazer, L. H., Zatarah, R., Waldrip, S., Ke, J. X. C., Moukheiber, M., Khanna, A. K., Hicklen, R. S., Moukheiber, L., Moukheiber, D., Ma, H., & Mathur, P. (2023). Bias in artificial intelligence algorithms and recommendations for mitigation. PLOS Digital Health, 2(6), e0000278. https://doi.org/10.1371/journal.pdig.0000278

Obermeyer, Z., & Mullainathan, S. (2019). Dissecting Racial Bias in an Algorithm that Guides Health Decisions for 70 Million People. Proceedings of the Conference on Fairness, Accountability, and Transparency. https://doi.org/10.1145/3287560.3287593

Smith, C. (2019, November 19). Dealing With Bias in Artificial Intelligence. The New York Times. https://www.nytimes.com/2019/11/19/technology/artificial-intelligence-bias.html

Stahl, B. C. (2021). Concepts of ethics and their application to AI. In Artificial Intelligence for a Better Future: An Ecosystem Perspective on the Ethics of AI and Emerging Digital Technologies (pp. 19–33). Springer. https://doi.org/10.1007/978-3-030-69978-9_3

van de Sande, D., van Genderen, M. E., Smit, J. M., Huiskens, J., Visser, J. J., Veen, R. E. R., van Unen, E., Ba, O. H., Gommers, D., & van Bommel, J. (2022). Developing, implementing and governing artificial intelligence in medicine: A step-by-step approach to prevent an artificial intelligence winter. BMJ Health & Care Informatics, 29(1), e100495. https://doi.org/10.1136/bmjhci-2021-100495

Vilas, D. (2020). Preparing technology teams to make ethical decisions [Video tutorial]. LinkedIn Learning. Retrieved March 31, 2024, from https://www.linkedin.com/learning/ethics-in-the-age-of-generative-ai/preparing-technology-teams-to-make-ethical-decisions

Wiens, J., Saria, S., Sendak, M., Ghassemi, M., Liu, V. X., Doshi-Velez, F., Jung, K., Heller, K., Kale, D., Saeed, M., Ossorio, P. N., Thadaney-Israni, S., & Goldenberg, A. (2019). Do no harm: a roadmap for responsible machine learning for health care. Nature Medicine, 25(9), 1337–1340. https://doi.org/10.1038/s41591-019-0548-6

Wolff, R. F., Moons, K. G. M., Riley, R. D., Whiting, P. F., Westwood, M., Collins, G. S., Reitsma, J. B., Kleijnen, J., & Mallett, S. (2019). PROBAST: A Tool to Assess the Risk of Bias and Applicability of Prediction Model Studies. Annals of Internal Medicine, 170(1), 51. https://doi.org/10.7326/m18-1376
