Biased Artificial Intelligence

Consequences and opportunities in a digital era

Aine Ní Fhlanagáin
CodeX
7 min read · Sep 16, 2021


Photo by Possessed Photography on Unsplash

I write code.

I’ve also come to realise that this code could have unintended consequences for the employment prospects, loan approvals and health outcomes of an entire stratum of society.

This realisation prompted me to delve deeper into the notion of bias in artificial intelligence and its unintended consequences in real world scenarios.

However, it’s not all doom and gloom! It’s possible to build AI systems that are more robust against bias and discrimination. Furthermore, a partnership between humans and machines could actually lead to improvements in the fairness of human decision making. My intention in this blog is to focus explicitly on the ways in which a biased system directly affects a minority group and the steps we can take to fix it. (Of course, there are many other ways that machines affect minority groups. For example, through the uneven distribution of wealth and economic surplus that machines create. However, that is a conversation for another day…)

Systems of Discrimination

Photo by Possessed Photography on Unsplash

Artificial Intelligence systems function, fundamentally, as systems of discrimination. At their very core, they are classification technologies that “differentiate, rank and categorize.” (West, 2019). Tools like decision trees, reinforcement learning, neural networks and predictive analytics work by processing historical data to predict future outcomes. Is it, therefore, all that surprising that the outputs of these systems reflect the current discriminatory balance of power in society? The costs of biased AI are borne by the same people who have suffered historically: gender minorities, people of colour and ethnic minorities.

Unintended Consequences

The real-world consequences of biased systems are serious, and several high-profile failures have surfaced in recent years. Apple co-founder Steve Wozniak’s wife, Janet Hill, was allocated a “credit limit that was a mere 10 percent of her husband’s” on an Apple Card. Prominent software engineer David Heinemeier Hansson blamed a “sexist black-box algorithm” when he received a credit limit “20 times higher” than his wife’s on the same card. (Crawford, 2019)

Even image recognition software fails to see the world through a universal lens. Image recognition systems continue to miscategorise black faces — in particular, black female faces. Google Photos’ infamous tagging mistake labelled black people as gorillas. Meanwhile, “sentencing algorithms discriminate against black defendants, chatbots easily adopt racist and misogynistic language when trained on online discourse and Uber’s facial recognition doesn’t work for trans drivers.” (West, 2019)

In 2018, Reuters reported on an experimental hiring tool at Amazon that discriminated against women, downgrading any resume that included the word “women’s” or that came from candidates who attended all-women’s colleges. After trying — and failing — to correct the tool, Amazon’s software engineers eventually abandoned it. It would seem that “gender-based discrimination was built too deeply within the system — and in Amazon’s past hiring practices — to be uprooted using a purely technical approach.” (West, 2019)

Steps to fairness in AI

So how can we tackle these issues? The AI development process is a good place to start. However, we also need to consider the importance of critical reasoning and soft skills in computer science curricula.

Photo by Possessed Photography on Unsplash

The AI development process

At a software development level, the entire algorithmic pipeline needs to be examined for bias. Bias can be introduced into an artificial intelligence system at several points: before, during and after modelling.

1. Bias in modelling.

Developers need to closely investigate any assumptions made during the initial modelling process. All general assumptions should be scrutinised to ensure that they introduce as little bias as possible.

2. Bias in training.

Bias is easily introduced through the data used for training and testing. That data may contain biased human decisions or reflect social inequalities that mirror the power structures of a particular time and place. For example, “word embeddings (a set of natural language processing techniques) trained on news articles may exhibit the gender stereotypes found in society.” (Silberg & Manyika, 2019). Training and test data should therefore be examined and cleaned so that a model receives inputs that are as unbiased as possible (a minimal sketch of such a check follows this list).

3. Bias in usage.

Developers also need to consider the efficacy of their model for decision making in a specific business context or task. How is the model being used? Will it be used in conjunction with human decision-making? Will it replace human decision-making entirely? Bias introduced at this point in the life of an AI system is described as “bias in usage”. (Silberg & Manyika, 2019)
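To make the training-data check concrete, here is a minimal sketch in plain Python. It computes the positive-outcome rate for each group in a small labelled dataset and flags a large gap, using the common “four-fifths” rule of thumb as a threshold. The toy records, the grouping attribute and the 0.8 cut-off are illustrative assumptions, not a prescribed method.

```python
# Minimal sketch: audit a labelled training set for group-level skew before
# it reaches a model. The data and the 0.8 threshold are illustrative only.
from collections import defaultdict

# Hypothetical historical hiring decisions: (group, label), label 1 = hired
records = [
    ("female", 0), ("female", 1), ("female", 0), ("female", 0),
    ("male", 1), ("male", 1), ("male", 0), ("male", 1),
]

counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
for group, label in records:
    counts[group][0] += label
    counts[group][1] += 1

rates = {g: pos / total for g, (pos, total) in counts.items()}
print("Positive-outcome rate per group:", rates)

# Disparate-impact ratio: lowest group rate divided by highest. A value
# below 0.8 mirrors the common "four-fifths" rule of thumb for concern.
ratio = min(rates.values()) / max(rates.values())
if ratio < 0.8:
    print(f"Warning: ratio {ratio:.2f} suggests the data may encode bias")
```

In a real pipeline the same kind of check would run over every sensitive attribute, and over the model’s outputs as well as its inputs, which is where “bias in usage” shows up.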

The role of computer science educators

Photo by Vasily Koloda on Unsplash

Educators have a role to play in shaping the practice of future programmers. Harvard University has recognised the urgency of ethical reasoning as a key skill in the toolkit of today’s computer scientists. In response, it has developed a course called Embedded EthiCS, which teaches students to identify ethical and social issues, reason through them, communicate their reasoning and design ethically and socially responsible systems. (Embedded EthiCS @ Harvard, 2020) Course modules are embedded throughout the computer science curriculum, spanning themes such as artificial intelligence, ethical hacking and embedded bias.

Course alumni not only understand how to build AI systems, but are also aware of the potential risks and consequences of the systems that they build. They can balance the potential trade-offs between performance and correctness and can communicate this reasoning in a business context to influence decision making during system builds in a real-life setting. They have been taught to build systems that are not just algorithmically optimal, but are also optimal from a fairness and ethical standpoint.

Biased humans or biased computers?

The approaches so far tackle AI bias by critically evaluating the technology. Yet this assumes that technology alone is to blame. Instead, humans must acknowledge the fundamental flaws and biases in their own decision making that AI has helped to expose.

AI algorithms often use historical data to predict future events. Could we, therefore, analyse biased AI outputs to improve the fairness of human systems and human decision making? Doing so could improve societal standards of fairness.
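One way to act on that question in code is to train a simple model on historical human decisions and then probe it with a counterfactual pair of applicants who differ only in the sensitive attribute. The sketch below assumes scikit-learn and an invented feature layout and dataset; a real audit would need to account for sample size, proxy variables and confounders.

```python
# Minimal sketch: learn from past human decisions, then ask the model how
# much the sensitive attribute alone shifts its prediction. A large gap
# points back at bias in the human decisions the model was trained on.
from sklearn.linear_model import LogisticRegression

# Features: [years_experience, group (0 = group A, 1 = group B)]
X = [
    [2, 0], [4, 0], [6, 0], [8, 0],
    [2, 1], [4, 1], [6, 1], [8, 1],
]
# Hypothetical historical approvals, skewed against group B
y = [0, 1, 1, 1, 0, 0, 0, 1]

model = LogisticRegression().fit(X, y)

# Counterfactual probe: identical experience, only the group differs
prob_a = model.predict_proba([[5, 0]])[0][1]
prob_b = model.predict_proba([[5, 1]])[0][1]
print(f"Approval probability, group A: {prob_a:.2f}, group B: {prob_b:.2f}")
print(f"Gap attributable to group membership: {prob_a - prob_b:.2f}")
```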

At present, society accepts outcomes that derive from a process considered “fair” (such as an evaluation rubric). Compositional fairness, a proxy humans use to reduce bias in decision-making systems, is an example of this. The proxy assumes that if a group making a decision contains a diversity of viewpoints, then its decisions must therefore be fair. (Silberg & Manyika, 2019). But are procedural fairness and outcome fairness really the same thing? Should we be holding humans more accountable as well?
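A tiny hypothetical example of why procedural and outcome fairness can diverge: the rubric below is applied identically to everyone, yet because the two groups arrive with different opportunities, the pass rates differ sharply. The rubric, the groups and the numbers are invented purely for illustration.

```python
# Minimal sketch: one rubric applied uniformly (procedural fairness) can
# still yield unequal results per group (outcome disparity). Toy data only.
def rubric(years_experience: int, has_referral: bool) -> bool:
    """The same rule for every candidate: 5+ years of experience or a referral."""
    return years_experience >= 5 or has_referral

group_a = [(6, False), (7, True), (4, True), (5, False)]    # historically advantaged
group_b = [(3, False), (4, False), (6, False), (2, False)]  # historically excluded

rate_a = sum(rubric(*candidate) for candidate in group_a) / len(group_a)
rate_b = sum(rubric(*candidate) for candidate in group_b) / len(group_b)
print(f"Pass rate, group A: {rate_a:.2f}")  # 1.00
print(f"Pass rate, group B: {rate_b:.2f}")  # 0.25
```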

Conclusion

As outlined, there are several ways to tackle the potentially harmful consequences of bias in AI. From a development standpoint, critical evaluation of an AI model is a good place to start. Developers need awareness of the critical points at which bias can be introduced during AI development (modelling, training or context in which it is used). They also need a firm grounding in ethics and critical reasoning to question and understand the inherent real-world risks of their system design. The ability to weigh up the pros and cons of performance versus correctness is another essential skill, as is the ability to communicate and influence ethical decision-making in real business scenarios. Educators need to consider these skills gaps and adapt their curricula accordingly.

In cases where algorithms trained on human decisions are shown to be biased, we can use the insight to correct our own decision making. We can question the data, draw conclusions about biased underlying human behaviour and implement policies to ensure less discriminatory decisions in future.

If we leverage this potential, artificial intelligence could completely transform our own human standards of equality and fairness.

Sources:

Artificial Intelligence and Machine Learning: Ethics, Governance and Compliance Resources. (2020, Dec 1). Future of Privacy Forum. https://fpf.org/artificial-intelligence-and-machine-learning-ethics-governance-and-compliance-resources/

Crawford, K., et al. (2019). AI Now 2019 Report. New York: AI Now Institute, NYU. https://ainowinstitute.org/AI_Now_2019_Report.pdf

Embedded EthiCS @ Harvard. (2020, Dec 3). https://embeddedethics.seas.harvard.edu/module.html

Silberg, J., & Manyika, J. (2019, June). Notes from the AI frontier: Tackling bias in AI (and in humans). McKinsey Global Institute. https://www.mckinsey.com/~/media/mckinsey/featured%20insights/artificial%20intelligence/tackling%20bias%20in%20artificial%20intelligence%20and%20in%20humans/mgi-tackling-bias-in-ai-june-2019.pdf

Top 9 Ethical Issues in Artificial Intelligence. (2016). World Economic Forum. https://www.weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence/

West, S. M. (2019). Discriminating Systems: Gender, Race, and Power in AI. New York: AI Now Institute.

Xavier, F., Van Nuenen, T., & Such, J. M. (2020). Bias and Discrimination in AI. arXiv:2008.07309.
