Racial Bias and Gender Bias in AI Systems

lex fefegha
Published in The Comuzi Journal · Sep 2, 2018

I have been thinking of interactive ways to get my master's thesis on racial bias, gender bias, AI, and new ways to approach human-computer interaction out to everyone.

Life has been super busy, so I have decided to share snippets of the thesis for now.

So here it goes:

For this research paper, the researcher has identified a number of areas of concern regarding AI-powered systems being deployed in situations that affect people's lives. The examples below are used to highlight these concerns.

It has been suggested that decision-support systems powered by AI can be used to augment human judgement and reduce both conscious and unconscious biases (Anderson & Anderson, 2007). However, the data, algorithms, and other design choices that shape machine learning systems may reflect and amplify existing cultural prejudices and inequalities (Sweeney, 2013). Meanwhile, the viewpoint that technology is neutral or impartial has been discussed by the historian of technology Melvin Kranzberg (1986).

A counterargument is that AI systems could employ biased algorithms that do significant harm to humans, harm which could go unnoticed and uncorrected until it is too late.

Racial Bias

ProPublica, a nonprofit news organisation, critically analysed an AI-powered risk assessment tool known as COMPAS. COMPAS has been used to forecast which criminals are most likely to reoffend.

Guided by these risk assessments, judges in courtrooms throughout the United States make decisions about the futures of defendants and convicts, determining everything from bail amounts to sentences.

The software estimates how likely a defendant is to reoffend based on his or her responses to 137 survey questions (an example of the survey is shown in fig. 1).

Figure 1: ’COMPAS Survey’, Julia Angwin et al. (2016)
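COMPAS's actual model is proprietary, so nothing below reflects Northpointe's implementation. The following is only a minimal, hypothetical sketch of how a questionnaire-based risk score could be produced in principle, assuming a simple logistic regression over numerically encoded survey answers and made-up training data.

```python
# Hypothetical sketch only: a simple questionnaire-based risk score.
# This is NOT Northpointe's proprietary algorithm; the data and the model
# choice (logistic regression) are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

# Made-up historical data: 1,000 defendants, 137 numerically encoded
# survey answers each, and whether they reoffended within two years.
X_train = rng.integers(0, 5, size=(1000, 137))
y_train = rng.integers(0, 2, size=1000)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score a new defendant's questionnaire and convert the probability
# into a 1-10 decile-style score, similar to how COMPAS reports risk.
answers = rng.integers(0, 5, size=(1, 137))
probability = model.predict_proba(answers)[0, 1]
decile_score = min(10, max(1, int(np.ceil(probability * 10))))
print(f"estimated reoffence probability: {probability:.2f}, score: {decile_score}/10")
```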

ProPublica compared COMPAS’s risk assessments for 7,000 people arrested in a Florida county with how often they reoffended (Angwin et al., 2016; Garber, 2016; Liptak, 2017).

ProPublica found that the COMPAS algorithm had some ability to predict whether a convicted criminal would reoffend. However, when the algorithm was wrong in its predictions, it was wrong in different ways for black and white offenders.

Through COMPAS, black offenders were almost twice as likely as white offenders to be labelled higher risk yet not actually reoffend. The software produced the opposite error for white offenders: they were more likely than black offenders to be labelled lower risk despite going on to commit further crimes (examples of results are shown in fig. 2–5).

Figure 2–5: ‘COMPAS Software Results’, Julia Angwin et al. (2016)
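The disparity ProPublica describes can be expressed as a difference in error rates between the two groups: the false positive rate (labelled high risk but did not reoffend) and the false negative rate (labelled low risk but did reoffend). A small sketch of that comparison, using a handful of made-up records rather than the actual Florida data ProPublica analysed, might look like this:

```python
# Sketch of the kind of group-wise error-rate comparison ProPublica ran,
# using a few invented records rather than the real Florida dataset.
import pandas as pd

records = pd.DataFrame({
    "race":       ["black", "black", "black", "black", "white", "white", "white", "white"],
    "high_risk":  [1, 1, 1, 0, 0, 0, 0, 1],   # tool labelled the person higher risk
    "reoffended": [1, 0, 0, 0, 0, 1, 1, 1],   # person actually reoffended within two years
})

for race, group in records.groupby("race"):
    non_reoffenders = group[group["reoffended"] == 0]
    reoffenders = group[group["reoffended"] == 1]
    # False positive rate: share of non-reoffenders who were labelled high risk.
    fpr = non_reoffenders["high_risk"].mean()
    # False negative rate: share of reoffenders who were labelled low risk.
    fnr = 1 - reoffenders["high_risk"].mean()
    print(f"{race}: false positive rate = {fpr:.2f}, false negative rate = {fnr:.2f}")
```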

For context, to highlight the potential impact of software such as COMPAS: in 2014 the United States held 1,561,500 individuals in federal and state correctional facilities.

The United States imprisons more people than any other country in the world, and a large percentage of those imprisoned are black (Carson, 2015; Wagner & Walsh, 2016).

Race, nationality and skin color played a contributing role in such assessments and predictions until the 1970s, when research studies highlighted the implications of doing so and the use of those attributes came to be regarded as politically unacceptable (Harcourt, 2010; Kehl et al., 2017).

In 2014, then U.S. Attorney General Eric Holder warned that risk assessment scores might be injecting bias into courtrooms (Barrett, 2014; Justice.gov, 2014).

Despite these findings, ProPublica's study was disputed by a group of PhD researchers (Flores et al., 2016). Their viewpoint was that ProPublica's results contradict a number of existing studies which concluded that risk assessment scores can be produced free of racial and gender bias. The researchers' own conclusion was that it is actually impossible for a risk score to satisfy both fairness criteria at the same time.
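The rejoinder itself does not walk through the arithmetic, but the tension usually cited in this debate is between two common fairness criteria: equal predictive accuracy among those flagged as high risk (the same share of flagged people reoffend in each group) and equal false positive rates across groups. When the groups have different underlying rates of reoffending, the two cannot hold at once, as the toy calculation below (with invented numbers) illustrates:

```python
# Toy illustration with invented numbers: if two groups have different base
# rates of reoffending, a score with the same precision among those flagged
# in both groups ends up with different false positive rates.
def false_positive_rate(n, base_rate, flag_rate, precision):
    """FPR when a fraction `flag_rate` of the group is flagged high risk and
    a fraction `precision` of those flagged actually reoffend."""
    flagged = n * flag_rate
    false_positives = flagged * (1 - precision)
    non_reoffenders = n * (1 - base_rate)
    return false_positives / non_reoffenders

# Both groups are scored with 60% precision among those flagged, but
# group A reoffends at a higher base rate than group B.
fpr_a = false_positive_rate(n=1000, base_rate=0.5, flag_rate=0.5, precision=0.6)
fpr_b = false_positive_rate(n=1000, base_rate=0.3, flag_rate=0.3, precision=0.6)
print(f"group A false positive rate: {fpr_a:.2f}")  # 0.40
print(f"group B false positive rate: {fpr_b:.2f}")  # 0.17
```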

The debate is further complicated by the fact that Northpointe, the developer of COMPAS, refuses to disclose the details of its proprietary algorithm, making it impossible for outside researchers to assess the extent to which the algorithm may be unfair.

Revealing details of its algorithm could undermine Northpointe's position as a competitive business (Zhu & Zhou, 2011; Sacks, 2015). However, this secrecy raises questions about government departments entrusting for-profit companies to develop risk assessment software of this nature.

In one court case, the Supreme Court of Wisconsin examined the validity of using the COMPAS risk assessment software in the sentencing of an individual (Kirchner, 2016). The case has been described by the media as the first in the United States to address concerns about a judge being assisted by an automated, software-generated risk assessment score (Liptak, 2017; Harvard Law Review, 2017; Garber, 2016).

The Supreme Court ruled that COMPAS could continue to be used to aid judges with sentencing decisions. However, the court expressed hesitation about the future use of the software in sentencing unless accompanied by information highlighting its limitations (Supreme Court of Wisconsin, 2016).

The court noted a number of these limitations (Supreme Court of Wisconsin, 2016, pp. 5–48):

1. COMPAS is proprietary software, and its developers have declined to disclose explicit information about the impact of its risk factors or how risk assessment scores are calculated.

2. COMPAS risk assessment scores are based on group data, and therefore the software identifies groups whose characteristics mark them as high-risk offenders, rather than assessing the risk of particular individuals.

3. A number of research studies have suggested that the COMPAS algorithm produces biased results in how it analyses black offenders.

4. COMPAS compares defendants and offenders to a national sample, but the software has not undergone a cross-validation study for a local population. This raises potential issues, as the software must be regularly monitored and updated for accuracy as populations change.

5. COMPAS was not originally designed for use in sentencing, but rather as an assistive tool for assessing an individual.

Gender Bias

In contrast to racial bias, where a body of literature highlights its impact on people's lives through the algorithms programmed into AI systems, the literature on gender bias is still at an early stage; most of what has been written on the topic consists of news articles that have not been backed by academic studies (Fessler, 2017; Bass & Huet, 2017).

An academic paper of interest which has led the debate in this area is one titled 'Semantics derived automatically from language corpora contain human-like biases' (Caliskan et al., 2017), published in the leading academic journal Science. A number of studies had already been written about word embeddings and their applications, from web search (Nalisnick et al., 2016) to the analysis of keywords in CVs (Tosik et al., 2015).

However, prior research had not recognised the sexist associations carried by word embeddings and their potential to introduce biases into different software systems. The researchers employed a benchmark for documenting human biases, the Implicit Association Test (IAT), which has been adopted by numerous social psychology studies since its development (Greenwald et al., 1998).

As shown in figures 6–8, the test measures the response times of human participants who are asked to pair word concepts displayed on a computer screen. Although the IAT has received criticism from academics regarding the validity of its findings (Azar, 2008; Rothermund & Wentura, 2004), it played a considerable role in shaping the direction of this particular study.

Figure 6–8: ‘IAT Test Examples’, Taken from a number of sources.

The group of researchers constructed an experiment with a web crawler, which was programmed to function as an artificially intelligent agent participating in an Implicit Association Test.

The algorithm used in this experiment is similar to one that a technology startup offering an AI-powered CV-analysis service might employ at the core of its product.

The algorithm produces co-occurrence statistics of words: words that often appear near one another have a stronger association than words that rarely do (Pennington et al., 2014; Macfarlane, 2013).
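As a rough sketch of what "co-occurrence statistics" means in practice, consider counting how often pairs of words appear within a few positions of each other. The toy corpus and window size below are invented for illustration; real systems such as GloVe work at a vastly larger scale.

```python
# Minimal sketch of word co-occurrence counting: pairs of words that appear
# near one another accumulate higher counts. Toy corpus for illustration only,
# not the web-scale crawl used in the actual study.
from collections import defaultdict
from itertools import combinations

corpus = [
    "the engineer wrote code for the new system",
    "the nurse cared for the patient all night",
    "the engineer fixed the system and wrote more code",
]

WINDOW = 3  # words within 3 positions of each other count as co-occurring
cooccurrence = defaultdict(int)

for sentence in corpus:
    tokens = sentence.split()
    for i, j in combinations(range(len(tokens)), 2):
        if j - i <= WINDOW:
            pair = tuple(sorted((tokens[i], tokens[j])))
            cooccurrence[pair] += 1

print(cooccurrence[("code", "engineer")])  # 1: they appear near each other
print(cooccurrence[("code", "nurse")])     # 0: they never co-occur
```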

The researchers employed the web crawler to act as an automatic indexer (Kobayashi & Takeda, 2000) over a colossal trawl of content from the internet, containing 840 billion words.

Once the indexing was complete, the researchers examined sets of target words within this extensive body of content, looking for evidence of the biases humans can unwittingly possess.

Example target words were ‘programmer, engineer, scientist, nurse, teacher and librarian’, while the two sets of attribute words were man/male and woman/female.
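A simplified sketch of how such an association can be measured is shown below: compare how close a target word's vector sits to the male attribute words versus the female attribute words. The tiny two-dimensional vectors here are made up purely for illustration; the actual study used embeddings learned from the web corpus described above.

```python
# Simplified sketch of a word-embedding association test. The 2-D vectors
# are invented for illustration; the real study used embeddings learned
# from a web-scale corpus.
import numpy as np

embeddings = {
    "programmer": np.array([0.9, 0.1]),
    "nurse":      np.array([0.2, 0.8]),
    "man":        np.array([1.0, 0.0]),
    "male":       np.array([0.9, 0.2]),
    "woman":      np.array([0.0, 1.0]),
    "female":     np.array([0.1, 0.9]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means identical direction, 0.0 unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def gender_association(word, male_attrs, female_attrs):
    """Mean similarity to male attribute words minus mean similarity to
    female attribute words; positive values lean 'male', negative 'female'."""
    to_male = np.mean([cosine(embeddings[word], embeddings[a]) for a in male_attrs])
    to_female = np.mean([cosine(embeddings[word], embeddings[a]) for a in female_attrs])
    return to_male - to_female

for target in ("programmer", "nurse"):
    score = gender_association(target, ["man", "male"], ["woman", "female"])
    print(f"{target}: {score:+.2f}")
```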

The results highlighted biases such as a preference for ‘flowers over bugs’ (which could be regarded as a harmless bias), but the data also identified biases along themes associated with gender and race.

One particular case was the agent associating feminine names more strongly than masculine names with family-related words such as ‘parents’ and ‘wedding’. Masculine names, on the other hand, had stronger associations with career-related words such as ‘professional’ and ‘salary’ (Caliskan et al., 2017).

Caliskan et al. note that the experiment's findings replicated the extensive evidence of bias found in a number of previous Implicit Association Test studies involving human participants. It could be suggested that these findings are also recognisable in objective measures of how humans live.

There are statistics highlighting unequal distributions of occupation types with respect to gender: the UK has the lowest percentage of female software engineering professionals in Europe, at less than 10%, according to the Women's Engineering Society (2018).

The project highlighted that the biases in the word embeddings are in fact closely aligned with social conceptions of gender stereotypes. Stereotypes have been described as both unconscious and conscious biases held among a group of people (Devine, 1989; Greenwald & Banaji, 1995; Hilton & von Hippel, 1996). A number of research studies have explored the contributory role stereotypes play in the data used to train AI (Bolukbasi et al., 2016; Bashir et al., 2013).

Literature Recommendations

From a human-computer interaction perspective, when a technological innovation such as artificial intelligence is built into complex social systems such as criminal justice, health diagnosis, academic admissions, hiring and promotion, it may reinforce existing inequalities, regardless of the intentions of its technical developers.

On the ethical side, questions arise such as: what are ‘decisions’? How would an artificial system make the ‘right’ choices to arrive at a calculated conclusion? What algorithmic instructions and data inputs programmed into an artificial system give it the agency to judge and offer a verdict in the manner of a human? In the context of AI ethics, what does the word “think” really mean?

There is a growing body of scholarly literature and research on algorithmic bias in AI.

A number of researchers have proposed technical approaches to alleviating it (Buolamwini & Gebru, 2018; Barocas & Selbst, 2016; Garcia, 2016; Kirkpatrick, 2016; Pedreschi et al., 2008).

Experts and commentators in the field have also recommended that systems powered by AI should always be applied in a transparent manner, and without prejudice or bias.

Reflecting on the COMPAS case study, the code of such an algorithm and the process for applying it should be open to the public. A transparent approach would allow courts, companies, researchers, governments, and others to understand, monitor, and suggest improvements to algorithms (Oswald & Grace, 2016).

Another recommendation, which reflects the current state of the technology industry, is the need for racial and gender diversity among the developers, researchers and scientists working on AI. The viewpoint is that a diverse team would be better able to address insensitive or under-informed training of AI algorithms (Sweeney, 2013; Noble, 2013; Barr, 2015; Crawford, 2016).

A further recommendation is that collaboration between engineers and domain experts who are knowledgeable about historical inequalities and cultural and social concerns is important for future AI development (Sweeney, 2013).

Images Used:

Figure 1 — Angwin, J. (2016). Sample-COMPAS-Risk-Assessment-COMPAS-”CORE”. [online] Documentcloud.org. Available at: https://www.documentcloud.org/documents/2702103-Sample-Risk-Assessment-COMPAS-CORE.html [Accessed 16 Mar. 2018].

Figure 2–5 — Angwin, J., Larson, J., Mattu, S. & Kirchner, L. (2016) Machine Bias. Available at https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing [Accessed: 17th August 2017]

Figure 6 — Cammorata, K. (2013). Implicit Associations. [online] Kristinsrvenglish.blogspot.co.uk. Available at: http://kristinsrvenglish.blogspot.co.uk/2013/10/implicit-associations.html [Accessed 16 Mar. 2018].

Figure 7 — Bator, V. (2016). The Implicit Association Test and the Catch-22 of Developing Striking Tests. [online] Medium. Available at: https://medium.com/psyc-406-2016/the-implicit-association-test-and-the-catch-22-of-developing-striking-tests-3150a4631d7f [Accessed 16 Mar. 2018].

Figure 8 — Blattman, C. (2017). IPA’s weekly links — Chris Blattman. [online] Chris Blattman. Available at: https://chrisblattman.com/2017/01/13/ipas-weekly-links-95/ [Accessed 16 Mar. 2018].

References:

Anderson, M. & Anderson, S.L (2007). ‘Machine ethics: creating an ethical intelligent agent’, AI Magazine, 28(4), pp. 15–58.

Angwin, J., Larson, J., Mattu, S. & Kirchner, L. (2016) Machine Bias. Available at https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing [Accessed: 17th August 2017]

Azar, B (2008). ‘IAT: Fad or fabulous?’, Monitor on Psychology. 39, p. 44.

Barrett, D. (2014). Holder Cautions on Risk of Bias in Big Data Use in Criminal Justice. [online] WSJ. Available at: https://www.wsj.com/articles/u-s-attorney-general-cautions-on-risk-of-bias-in-big-data-use-in-criminal-justice-1406916606 [Accessed 6 Feb. 2018].

Barocas, S. and Selbst, A. D. (2016) ‘Big Data’s disparate impact’, California Law Review, 104, pp. 671–732.

Bass, D. and Huet, E. (2017). Researchers Combat Gender and Racial Bias in Artificial Intelligence. [online] Bloomberg.com. Available at: https://www.bloomberg.com/news/articles/2017-12-04/researchers-combat-gender-and-racial-bias-in-artificial-intelligence [Accessed 7 Feb. 2018].

Bashir, N. Y., Lockwood, P., Chasteen, A. L., Nadolny, D. & Noyes, I. (2013), The ironic impact of activists: Negative stereotypes reduce social change influence. Eur. J. Soc. Psychol., 43, pp. 614–626.

Barr, A. (2015). Google mistakenly tags black people as ‘gorillas,’ showing limits of algorithms. The New York Times.

Bolukbasi, T., Chang, K.W., Zou, J. Y., Saligrama, V. & Kalai, A. T. (2016) ‘Man is to computer programmer as woman is to homemaker? Debiasing word embeddings’, Advances in Neural Information Processing Systems, pp. 4349–4357.

Buolamwini, J. & Gebru, T. (2018). ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’, Proceedings of the 1st Conference on Fairness, Accountability and Transparency, in PMLR, 81, pp. 77–91.

Caliskan, A., Bryson, J. J. & Narayanan, A. (2017) ‘Semantics derived automatically from language corpora contain human-like biases’, Science, 356, pp. 183–186.

Carson, E. A. (2015). ‘Prisoners in 2014’, Washington, DC: Bureau of Justice Statistics. Available at http://www.bjs.gov/index.cfm?ty=pbdetail&iid=5387 (Accessed: 30 November 2017)

Crawford, K. (2016). Artificial intelligence’s white guy problem. The New York Times.

Devine, P. G. (1989). ‘Stereotypes and prejudice: Their automatic and controlled components’, Journal of Personality and Social Psychology, 56, pp. 5–18.

Fessler, L. (2017). We tested bots like Siri and Alexa to see who would stand up to sexual harassment. [online] Quartz. Available at: https://qz.com/911681/we-tested-apples-siri-amazon-echos-alexa-microsofts-cortana-and-googles-google-home-to-see-which-personal-assistant-bots-stand-up-for-themselves-in-the-face-of-sexual-harassment/ [Accessed 7 Feb. 2018].

Flores, A.W., Bechtel, K. & Lowenkamp, C.T. (2016) ‘False positives, false negatives, and false analyses: A rejoinder to machine bias: There’s software used across the country to predict future criminals. and it’s biased against blacks’, Fed. Probation, 80(38).

Garber, M. (2016). Is Criminality Predictable? Should It Be?. [online] The Atlantic. Available at: https://www.theatlantic.com/technology/archive/2016/06/when-algorithms-take-the-stand/489566/ [Accessed 7 Feb. 2018].

Garcia, M. (2016) ‘Racist in the machine: The disturbing implications of algorithmic bias’, World Policy Journal, 33(4), pp. 111–117.

Greenwald, A. G., & Banaji, M. R. (1995). ‘Implicit social cognition: Attitudes, self-esteem, and stereotypes’, Psychological Review, 102, pp. 4–27.

Greenwald, A. G., McGhee, D. E. & Schwartz, J. K. L. (1998). ‘Measuring individual differences in implicit cognition: The Implicit Association Test’. Journal of Personality and Social Psychology, 74, pp. 1464–1480.

Harcourt, B. E. (2010) ‘Risk as a Proxy for Race’, Criminology and Public Policy, Forthcoming; University of Chicago Law & Economics Olin Working Paper No. 535; University of Chicago Public Law Working Paper No. 323.

Harvard Law Review (2017). State v. Loomis. [online] Harvardlawreview.org. Available at: https://harvardlawreview.org/2017/03/state-v-loomis/ [Accessed 7 Feb. 2018].

Hilton J.L & von Hippel, W. (1996). ‘Stereotypes’, Annu. Rev. Psychol, 47, pp. 237–71.

Justice.gov. (2014). Attorney General Holder: Justice Dept. to Collect Data on Stops, Arrests as Part of Effort to Curb Racial Bias in Criminal Justice System. [online] Available at: https://www.justice.gov/opa/pr/attorney-general-holder-justice-dept-collect-data-stops-arrests-part-effort-curb-racial-bias [Accessed 6 Feb. 2018].

Kirchner, L. (2016). Wisconsin Court: Warning Labels Are Needed for Scores Rating Defendants’ Risk of Future Crime — ProPublica. [online] ProPublica. Available at: https://www.propublica.org/article/wisconsin-court-warning-labels-needed-scores-rating-risk-future-crime [Accessed 16 Mar. 2018].

Kehl, D., Guo, P. & Kessler, S. (2017) Algorithms in the Criminal Justice System: Assessing the Use of Risk Assessments in Sentencing. Responsive Communities Initiative, Berkman Klein Center for Internet & Society, Harvard
Law School.

Liptak, A. (2017). Sent to Prison by a Software Program’s Secret Algorithms. [online] Nytimes.com. Available at: https://www.nytimes.com/2017/05/01/us/politics/sent-to-prison-by-a-software-programs-secret-algorithms.html [Accessed 6 Feb. 2018].

Kirkpatrick, K. (2016) ‘Battling algorithmic bias’, Communications of the ACM, 59(10), pp. 16–17.

Kobayashi, M. & Takeda, K. (2000). ‘Information retrieval on the web’, ACM Computing Surveys. ACM Press. 32 (2), pp. 144–173.

Kranzberg, M. (1986). ‘Technology and History: “Kranzberg’s Laws”’, Technology and Culture, 27(3), pp. 544–560.

Macfarlane, T. (2013). Extracting semantics from the Enron corpus.

Nalisnick, E., Mitra, B., Craswell, N. & Caruana, R. (2016) ‘Improving document ranking with dual word embeddings’, WWW.

Noble, S. U. (2013). ‘Google search: Hyper-visibility as a means of rendering black women and girls invisible’, InVisible Culture, 19.

Oswald, M. & Grace, J. (2016). Norman Stanley Fletcher and the case of the proprietary algorithmic risk assessment. Policing Insight.

Pedreschi, D., Ruggieri, S., & Turini, F. (2008) ‘Discrimination-aware data mining’, Proceedings of KDD 2008.

Pennington, J., Socher, R., and Manning, C. D. (2014). ‘Glove: Global vectors for word representation’, EMNLP, 14, pp. 1532–1543.

Rothermund, K. & Wentura, D. (2004). ‘Underlying processes in the Implicit Association Test(IAT): Dissociating salience from associations’, Journal of Experimental Psychology: General. 133, pp. 139–165.

Sacks, M. (2015) ‘Competition Between Open Source and Proprietary Software: Strategies for Survival’, Journal of Management Information Systems, 32(3), pp. 268–295.

Supreme Court of Wisconsin (2016). State of Wisconsin, Plaintiff-Respondent, v. Eric L. Loomis, Defendant-Appellant.. [online] Wicourts.gov. Available at: https://www.wicourts.gov/sc/opinion/DisplayDocument.pdf?content=pdf&seqNo=171690 [Accessed 16 Mar. 2018].

Sweeney, L. (2013) ‘Discrimination in Online Ad Delivery’, ACM Queue, 11(3), p. 10.

Tosik, M., Hansen, C.L., Goossen, G. & Rotaru, M. (2015) ‘Word embeddings vs word types for sequence labeling: the curious case of cv parsing’, Proceedings of NAACL-HLT, pp. 123–128.

Wagner, P. & Walsh, A. (2016). States of incarceration: The global context. Prison Policy Initiative. Available at: http://www.prisonpolicy.org/global/2016.html [Accessed 1 Jan. 2018]

Women’s Engineering Society (2018). Useful Statistics | Women’s Engineering Society. [online] Wes.org.uk. Available at: http://www.wes.org.uk/content/wesstatistics [Accessed 10 Feb. 2018].

Zhu, K.X. & Zhou, Z. Z. (2011). ‘Lock-in Strategy in Software Competition: Open-Source Software vs. Proprietary Software’, Information Systems Research.
