Mini Literature Review: Racial and Other Biases in Machine Learning Praxis, and How to Combat Them

Machine learning is often considered a field still in its youth, despite roots in established sciences; if that is the case, ML fairness and ethics is still in its infancy, despite its own historical resonances. Yet the increasing use of ML in everyday computing makes it a salient and critical area to review for biases of race, gender, class, and other dimensions that affect human lives, sometimes in invisible ways.

BIAS PROBLEMS

In Industry

Scholars identify a multitude of industries that use ML and often struggle with fairness. Four are cited most commonly.

The first of these is criminal justice, where statistical risk assessments are used (Davies), recidivism is modeled (Gebru), and algorithms are used for targeting people or populations by police (Garcia). There is a profound question of fairness in ML in criminal justice (Benthall), and particular concern about race neutrality of sentencing models (Lee).

The second area is healthcare prediction (Schlesinger), also described as health diagnostics (Gebru, Mullainathan). This is another area attracting investment in statistical risk assessments (Davies), with particular potential to change healthcare (Mullainathan).

The third area is in banking (Davies), particularly for approving loans (Garcia). Scholars note concern about ML bias in credit scores (Davies) and credit reporting (Benthall).

Fourth, ML and AI are expanding into surveillance (Gebru) and the related area of facial recognition, where scholars point out potentially dangerous unregulated use in the US (Gebru) and even its spread to sectors like employment (Gebru).

There are obviously many more sectors affected or expected to be affected, many of which face headwinds in ML fairness; these include employment (Benthall), education (Benthall), advertising (Benthall), and child welfare services (Davies).

This high degree of consensus suggests that more scrutiny and resources might be well placed on ML bias in criminal justice, healthcare, banking, and facial recognition, while broader awareness should spread to related sectors.

Examples of ML Unfairness

There is an overwhelming list of examples of bias in ML, across a wide spectrum of problem areas.

Race

A widely cited example is the computer vision incident in which African Americans were automatically tagged as gorillas (Lee, Garcia, Gebru). Even where the damage is not as explicit, subtler forms of bias persist, such as high face recognition error rates for dark-skinned women (Gebru) and generally higher facial recognition error rates for non-Caucasians (Wang). A final example in computer vision is the automatic application of skin-lightening tones in photo editing software (Lee).

Another troubling area of racial bias ties back to criminal justice. African Americans in particular, who are already more likely to be stopped by law enforcement (Wang) and who face a discrepancy between committed and reported crime (Gebru), face further bias from automated decision-making in criminal sentencing (Benthall). ML tools for predicting recidivism are also biased against them (Gebru), with higher false-positive rates for black people than for white people (Benthall, Davies). With continued use, predictive policing for drug enforcement perpetuates differential enforcement (Kallus) and feeds a problematic feedback loop in crime models (Gebru).

In advertising, it has been demonstrated that targeting by racial distinctions through ethnic-affinity categories leads to housing discrimination (Benthall), meaning that even by proxy, advertising is segmented by race and produces bias (Lee).

Even in naming, it has been found that natural language processing associates African American names with negative sentiment (Zou), that black-sounding names are falsely associated with arrest records (Lee), and that discriminatory ads are shown to people with black-sounding names (Schlesinger).

In medicine, there are troubling biases in the treatment of female and black patients' pain, and in cancer treatment for minorities (Mullainathan), even as it is known that black people do not receive adequate treatment for pain in the US (Schlesinger).

Gender

Gender bias in ML is another area of concern, particularly in professional realms. Scholars have found that automated hiring tools are biased against women (Gebru), that high-income job ads are shown far more often to men than to women (Garcia), that searching for "CEO" shows far fewer women than is representative (Garcia), and that algorithms make wrongful associations between gender and occupation (Lee). Additionally, computerized applicant screenings reflect biases carried over from old processes (Garcia), making historical biases harder to overcome.

Further problems with gender bias include targeted advertising that is biased by gender (Gebru), virtual assistants being presented as female (Gebru), translation programs getting female pronouns wrong (Zou), and natural language processing of newspapers revealing biases against women (Gebru).

Mixed or General Bias

In the realm of language, an instructive example is the bot Tay, which went from tweeting positive messages to bigoted ones within 24 hours (Garcia, Schlesinger). Other bots, like Zo, were shown in testing to know many white cultural references but few black ones (Schlesinger), amplifying bias.

Other examples of language bias include a translation mistake that led to the arrest of an innocent person because the tool favors languages deemed important, which did not include the user's language (Gebru); word embedding training producing gender-biased analogies and race-biased associations (Gebru); and sentiment analysis classifying LGBTQ+ words as negative (Gebru).

Many more examples exist: biased associations between race and positivity (Lee), customers of rideshare and home-share companies being rejected based on race (Lee), software incorrectly interpreting Asian users as blinking (Zou), minorities on bank sites being steered toward credit cards with higher interest rates (Garcia), discriminatory redlining in the rollout of premium services to certain areas (Schlesinger), the inability of voice assistants to help women in emergency situations (Garcia), and partnerships between ICE and tech companies to monitor social media and determine whether people should be allowed to immigrate (Gebru).

Overall, the list of grievances is broad, and many more examples can be found beyond the most prominent ones. Though these examples may be dismaying at best and harrowing at worst, they highlight the need for solutions, which are covered in the following section. The missteps of some of the first AI pioneers help technicians, politicians, and the general public know what to look for and hold innovators accountable for fairness, which benefits society at large. They also call to mind the potential for invisible bias, which is all the more insidious for being difficult to detect. The hope is that solutions address both visible and invisible bias.

BIAS SOLUTIONS

Technical

Dataset

Within the realm of technical solutions, scholars point out that there are de-biasing opportunities in dataset creation, selection, and management alone, even before any modeling begins.

A key consideration in dataset creation is striving for better measurement (Mullainathan) to offset errors. The recorded information needs to be accurate to allow constraints to be placed (Zou) later in ML modeling.

With ready-made datasets, there is a recommendation that, rather than taking racial statistics at face value, analyzing them with rigor (Benthall) can reduce bias. Additionally, there is a recommendation to examine whether inputs and outputs are racialized explicitly through ascription, explicitly through self-identification, or not at all (Benthall), with the aim of better understanding the context of the dataset.

Overall, many scholars find that unbiased datasets simply are not available. There is a need for more diverse datasets (Zou): databases with a wide variety of race talk (Schlesinger), databases referencing a wide variety of cultures (Schlesinger), databases in many languages (Schlesinger), data work in dialects (Schlesinger), and work on images diverse in skin color (Schlesinger). In one case, scholars mitigated bias by building a dataset based on the population ratio of each ethnicity and balancing samples across ethnicities (Wang), as sketched below. These solutions center on acquiring and building up new datasets.
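
A minimal sketch of that kind of per-group balancing, assuming a generic tabular dataset; the group labels, target ratios, and the `balance_by_group` helper are illustrative stand-ins, not the pipeline Wang and Deng actually use (their work targets face image datasets).

```python
import numpy as np

rng = np.random.default_rng(0)

def balance_by_group(X, groups, target_ratios, n_total, rng=rng):
    """Resample a dataset so each group's share matches a target ratio.

    X             : (n, d) feature array
    groups        : (n,) array of group labels (e.g. self-reported ethnicity)
    target_ratios : dict mapping group label -> desired fraction of the output
    n_total       : desired size of the balanced dataset
    """
    picked = []
    for g, ratio in target_ratios.items():
        idx = np.flatnonzero(groups == g)
        n_g = int(round(ratio * n_total))
        # Sample with replacement only if the group is under-represented.
        chosen = rng.choice(idx, size=n_g, replace=len(idx) < n_g)
        picked.append(chosen)
    picked = np.concatenate(picked)
    rng.shuffle(picked)
    return X[picked], groups[picked]

# Toy example: a heavily skewed dataset rebalanced to target ratios.
X = rng.normal(size=(1000, 5))
groups = rng.choice(["A", "B", "C"], size=1000, p=[0.8, 0.15, 0.05])
X_bal, g_bal = balance_by_group(X, groups, {"A": 0.5, "B": 0.3, "C": 0.2}, n_total=600)
print({g: np.mean(g_bal == g).round(2) for g in ["A", "B", "C"]})
```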

However, there is another school of thought that working with the data that already exists is key (Schlesinger), potentially because it is often difficult to source data cheaply and easily, so in many cases there may be no alternative.

Another suggestion lies in reserving the best datasets for validation (Mullainathan), thereby training on second-quality data to save resources.
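
As a small sketch of that idea, assuming a hypothetical pool in which a small slice of the data is carefully measured ("gold standard") and the rest is noisily labeled, one might train on the noisy majority and evaluate only on the gold slice; the quality flag, noise level, and model here are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Hypothetical pool: a large noisily-labeled set plus a small gold-standard set.
n = 2000
X = rng.normal(size=(n, 8))
y_true = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)
is_gold = rng.random(n) < 0.1                                  # ~10% carefully measured labels
y_noisy = np.where(rng.random(n) < 0.2, 1 - y_true, y_true)    # 20% label noise elsewhere

# Train on the big noisy portion, validate only on the gold-standard portion.
model = LogisticRegression().fit(X[~is_gold], y_noisy[~is_gold])
val_auc = roc_auc_score(y_true[is_gold], model.predict_proba(X[is_gold])[:, 1])
print(f"validation AUC on gold-standard data: {val_auc:.3f}")
```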

Outside of the datasets themselves, one school of thought advocates annotating data so as not to reify race (Benthall), which can also give clues to the political motivations of designers and data providers (Benthall). This can be done through standardized metadata measures like datasheets (Zou).
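
As a toy illustration of the datasheet idea, a dataset might ship with plain metadata recording provenance and, per the point above, whether race was ascribed or self-identified; the fields and values below are a hypothetical, heavily abridged sketch rather than any standard schema.

```python
import json

# A hypothetical, heavily abridged datasheet stored as plain metadata alongside
# the data itself. Field names here are illustrative, not a standard template.
datasheet = {
    "name": "loan_applications_2015_2019",
    "motivation": "Built to study approval disparities; not intended for deployment.",
    "collection": {
        "source": "Self-reported application forms plus bureau records",
        "time_range": "2015-2019",
        "sampling": "All applications from three regional branches",
    },
    "race_annotation": {
        "how_racialized": "self-identification",   # vs. "ascription" or "not recorded"
        "categories": ["Asian", "Black", "Hispanic", "White", "Other", "Declined"],
        "known_gaps": "12% of records missing this field",
    },
    "recommended_uses": ["auditing approval rates"],
    "discouraged_uses": ["individual credit decisions"],
}

print(json.dumps(datasheet, indent=2))
```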

Overall, there’s a push to consider both input and output data (Benthall) as part of bias considerations.

Algorithmic

Moving into the modeling portion of ML, a variety of algorithmic solutions are proposed. Some scholars point to three approaches to fairness (Lee) or three mathematical definitions of fairness (Benthall).

These include anti-classification (Davies), classification parity (Davies), calibration (Davies), unbalanced training (Wang), attribute suppression (Wang), and domain adaptation (Wang).
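
To make the first three of these concrete, here is a minimal sketch assuming binary decisions, binary outcomes, and a group label: anti-classification is approximated by simply not letting the decision rule see the group, classification parity is checked by comparing false-positive rates across groups, and calibration by comparing outcome rates among people with similar scores. The data and function names are invented for illustration.

```python
import numpy as np

def classification_parity_gap(decisions, outcomes, groups):
    """Gap in false-positive rates between groups (one form of classification parity)."""
    fprs = {}
    for g in np.unique(groups):
        mask = (groups == g) & (outcomes == 0)   # true negatives in group g
        fprs[g] = decisions[mask].mean()
    return fprs, max(fprs.values()) - min(fprs.values())

def calibration_by_group(scores, outcomes, groups, bins=5):
    """Within each score bin, compare observed outcome rates across groups."""
    edges = np.quantile(scores, np.linspace(0, 1, bins + 1))
    table = {}
    for g in np.unique(groups):
        rates = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            in_bin = (groups == g) & (scores >= lo) & (scores <= hi)
            rates.append(outcomes[in_bin].mean() if in_bin.any() else np.nan)
        table[g] = np.round(rates, 2)
    return table

# Toy data: risk scores, thresholded decisions, true outcomes, and a group label.
rng = np.random.default_rng(2)
n = 5000
groups = rng.choice(["A", "B"], size=n)
scores = np.clip(rng.normal(0.5 + 0.05 * (groups == "A"), 0.2, size=n), 0, 1)
outcomes = (rng.random(n) < scores).astype(int)
decisions = (scores > 0.5).astype(int)   # anti-classification: the rule never sees `groups`

print(classification_parity_gap(decisions, outcomes, groups))
print(calibration_by_group(scores, outcomes, groups))
```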

Beyond established methods, scholars suggest bespoke approaches.

One idea involves unsupervised learning to detect patterns of segregation to mitigate root causes without further establishing categories of disadvantage (Benthall). In this case, categories reflecting past segregation can be inferred through unsupervised learning and used in fairness modifiers (Benthall).
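
A rough sketch of that idea: instead of taking an ascribed or self-reported race label as given, cluster variables that plausibly encode patterns of segregation and hand the inferred categories to whatever downstream fairness check the modeler uses. The features, cluster count, and downstream use below are all assumptions for illustration, not Benthall and Haynes' specific formulation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

# Hypothetical per-person features that reflect patterns of segregation
# (e.g. neighborhood composition, school resources, historical redlining).
n = 3000
segregation_features = np.column_stack([
    rng.normal(size=n),                        # stand-in: neighborhood income index
    rng.normal(size=n),                        # stand-in: school resource index
    rng.integers(0, 2, size=n).astype(float),  # stand-in: historically redlined area flag
])

# Infer categories from the data rather than taking a race label as given.
Z = StandardScaler().fit_transform(segregation_features)
inferred_group = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(Z)

# The inferred groups can then feed whatever fairness check or constraint
# the modeler would otherwise apply to an explicit racial category.
decisions = rng.random(n) < 0.3   # stand-in for some model's decisions
for g in np.unique(inferred_group):
    rate = decisions[inferred_group == g].mean()
    print(f"inferred group {g}: positive-decision rate = {rate:.2f}")
```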

Another set of ideas involves equitable performance across subpopulations (Zou), reducing dependence on sensitive attributes (Zou), and using ML to quantify bias in an AI audit (Zou).
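
One way to read "reducing dependence on sensitive attributes" together with "an AI audit" is to train a small probe that tries to predict the sensitive attribute from the model's own scores; if the probe does much better than chance, the scores still encode the attribute. This framing is an illustrative sketch, not a method taken from the cited papers.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

# Stand-ins: a model's output scores and a binary sensitive attribute.
n = 4000
sensitive = rng.integers(0, 2, size=n)
scores = rng.normal(loc=0.1 * sensitive, scale=1.0, size=n)  # mild leakage built in

# Audit: how well can the sensitive attribute be predicted from the scores alone?
probe = LogisticRegression()
probe_auc = cross_val_score(probe, scores.reshape(-1, 1), sensitive,
                            scoring="roc_auc", cv=5).mean()
print(f"probe ROC AUC (0.5 = no detectable leakage): {probe_auc:.3f}")
```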

Yet another involves establishing and managing threshold policy (Davies).
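
A minimal sketch of what such a threshold policy can look like, in the spirit of that discussion: one cutoff is applied to everyone's (ideally well-calibrated) risk score, so similarly risky people are treated the same, which also anticipates the same-treatment idea under the socio-technical solutions below. The scores and cutoff are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical calibrated risk scores for two groups.
n = 2000
group = rng.choice(["A", "B"], size=n)
risk = np.clip(rng.beta(2, 5, size=n) + 0.05 * (group == "B"), 0, 1)

# A single threshold applied to everyone: similarly risky people are treated the same.
THRESHOLD = 0.35
flagged = risk >= THRESHOLD

for g in ["A", "B"]:
    print(f"group {g}: share above threshold = {flagged[group == g].mean():.2f}")
# Note: a single threshold equalizes treatment at a given risk level, but the share
# flagged per group can still differ if the underlying risk distributions differ.
```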

Another deals with biased benchmarks, in which sample reweighting can adjust fairness estimates while accounting for censoring in the data (Kallus).
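
A toy illustration of reweighting under censoring: outcomes are only observed for people who passed a historical (possibly prejudiced) screen, so observed cases are reweighted by the inverse of their estimated probability of being observed before any rates are compared. This is a generic inverse-probability-weighting sketch in the spirit of Kallus and Zhou, not their exact estimator.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)

# Hypothetical data: a past screen decided whose outcome we get to observe (censoring).
n = 6000
group = rng.integers(0, 2, size=n)                                    # 0 / 1 sensitive group
x = rng.normal(size=n)
screened_in = rng.random(n) < 1 / (1 + np.exp(-(x - 0.8 * group)))    # prejudiced screen
outcome = (rng.random(n) < 1 / (1 + np.exp(-x))).astype(int)

# Step 1: model the probability of being observed from the covariates.
features = np.column_stack([x, group])
obs_model = LogisticRegression().fit(features, screened_in)
p_obs = obs_model.predict_proba(features)[:, 1]

# Step 2: reweight observed samples by inverse probability of observation.
w = 1.0 / np.clip(p_obs, 0.05, 1.0)

# Step 3: compare naive vs. reweighted outcome-rate estimates per group.
for g in (0, 1):
    m = screened_in & (group == g)
    naive = outcome[m].mean()
    reweighted = np.average(outcome[m], weights=w[m])
    print(f"group {g}: naive = {naive:.2f}, reweighted = {reweighted:.2f}")
```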

Finally, one idea proposes an adaptive margin, based on large-margin losses, for balanced performance across races (Wang), which mitigates bias and evens out performance (Wang). The claim is that combining balanced training with a debiased algorithm yields the fairest performance (Wang).
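
A bare-bones sketch gesturing at the adaptive-margin idea: a large-margin loss in which the margin depends on the group, so a chosen group can be pushed to be separated more confidently. Wang and Deng's actual method tunes margins with reinforcement learning inside a deep face-recognition model; the toy binary loss below only illustrates the margin mechanism.

```python
import numpy as np

def margin_logistic_loss(scores, labels, groups, margins):
    """Binary large-margin logistic loss where the margin depends on the group.

    scores  : raw model scores, shape (n,)
    labels  : +1 / -1 targets, shape (n,)
    groups  : group labels, shape (n,)
    margins : dict mapping group label -> margin; a larger margin demands that the
              model separate that group's samples more confidently.
    """
    m = np.array([margins[g] for g in groups])
    # Standard logistic loss applied to the margin-shifted quantity labels*scores - m.
    z = labels * scores - m
    return np.mean(np.log1p(np.exp(-z)))

# Toy check: identical scores incur a higher loss when one group's margin is enlarged.
rng = np.random.default_rng(7)
scores = rng.normal(loc=1.0, size=8)
labels = np.ones(8)
groups = np.array(["A"] * 4 + ["B"] * 4)
print(margin_logistic_loss(scores, labels, groups, {"A": 0.2, "B": 0.2}))
print(margin_logistic_loss(scores, labels, groups, {"A": 0.2, "B": 0.8}))
```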

Beyond classical ML, there is opportunity for more complex architectures to account for bias, for example by making neural nets more tunable (Schlesinger).

Across the technical-mathematical approaches, there is healthy skepticism of established models alongside real innovation, but still not much consensus, suggesting the field is ripe for growth.

Socio-technical

That brings us to the more socio-technical solutions. In particular, there is a well-supported school of thought around same-treatment practices. This involves grouping people by similar attributes such as credit score (Garcia) and, after computing risk scores for individuals, treating similarly risky people similarly (Davies). Scholars find this preferable (Davies) because disparate-impact concerns then do not apply (Benthall), because people would otherwise be segregated (Benthall), and because it aligns with policy objectives (Davies).

However, this opinion is not universally held. It is sometimes necessary for algorithms to consider protected characteristics; for example, women display lower recidivism rates independent of other traits (Davies).

Overall, socio-technical scholars also recommend considering how ML fairness encodes notions of compromise (Hutchinson), how to reconcile machine-internal and machine-external contexts (Schlesinger), how to better study problem-picking, outputs that merely look right, and the evaluation of ML's contributions (Schlesinger), and how to break the problem into chunks, for instance by using an ensemble of bots with specialized expertise and differing abilities to handle in situ difficulties, rather than a one-size-fits-all solution (Schlesinger).

Legal

Another realm of anti-bias lies in legal constraints.

Scholars point out several attempts to address fairness through law: a congressional advisory committee (Lee), New York City legislation to review algorithms (Lee), and counties like Allegheny County that have built their own algorithms and could tune them for non-bias (Lee).

More broadly, scholars highlight that general rules outside of AI fairness could have a positive impact on non-bias, such as EU laws allowing users to ask why decisions were made (Garcia), the GDPR barring certain kinds of profiling based on personal data (Garcia), and EU citizens' right to be forgotten (Garcia). Giving users more power generally helps protect them against biases that would hurt them.

Accordingly, scholars call for more robust legal protections: policymakers need to institute safeguards (Lee), with a particular call for regulation of law enforcement's use of facial analysis (Gebru), especially in the US, which has fewer protections than the EU (Garcia).

Though lawmakers are beginning to pay attention to ML fairness, much more work and oversight needs to be done on regulation.

Commercial

In the commercial sphere, scholars point to a need for increasing diversity in tech (Lee): tech companies should strive to be more diverse (Lee), hiring diverse people decreases the likelihood of bias (Garcia), and automated tools created by people with diverse backgrounds are less biased (Gebru).

Though there is broad consensus that diversity is an important goal, there are few guidelines about how to achieve it functionally. In addition, there are potentially problematic aftereffects: how to support diverse workers once hired, the possibility of tokenization and differential treatment of diverse workers, and the "diversity tax," in which individuals are expected to educate their peers and do outreach work on top of their stated jobs.

Social

Finally, scholars converge on the broadest area of anti-bias opportunity, social reforms.

Collaboration

There is a great deal of interest in more collaboration. Some attempts have been made through the establishment of fairness organizations (Gebru), such as the FAT/ML workshop, which has addressed bias in bail decisions and bias in journalism (Garcia).

The work is far from complete, though. Scholars call for forging collaborations across disciplines (Gebru), with ML researchers engaging other professionals in concert (Zou) to tackle the interdisciplinary concerns of fairness.

Frameworks/Guidelines

Similar to the groups of people who work on fairness, scholars advocate for written guidelines on fairness: for example, a framework that combines CS, law, and ethics (Garcia), guiding principles that companies can use to demonstrate non-bias (Garcia), and a guide to which systems to use in which cases (Gebru). Overall, there is an interest in holding ML to the same standard as other diagnostic and professional tools (Mullainathan).

Moderation

Additionally, there are some proposals to address fairness through auditing and stopping bias (Garcia) or policing decisions after the fact, such as through moderators (Garcia).

Culture

Lastly, scholars point to the need to change the culture around ML fairness: developing more culturally responsive and responsible programs (Schlesinger), better understanding the factors that disadvantage people subject to these tools (Gebru), elevating the voices of affected communities (Gebru), not reifying group memberships in ML but correcting for them (Benthall), staying with the trouble instead of turning away (Schlesinger), and, similarly, training AI to handle sensitive topics well rather than avoid them (Schlesinger).

Overall, the diverse spread of solutions speaks to the creativity being applied to the wicked problem of bias in ML and AI. There can be no clean, single solution, and it is incumbent upon technologists, policymakers, ethicists, and even users to be aware of and demand anti-bias protections. One big takeaway from the solution set is that the best anti-bias measures are likely to be multi-pronged, mixed methods that combine some form of mathematical intervention with broader social change.

Hopefully, the wealth of possibilities can inspire us to work towards fairness, instead of turning away from the problem in intimidation. Let’s look to AI as a child who needs parenting, not a threat that needs neutralizing. That way, we can make technological progress while still addressing the key human issues of bias and discrimination.

References

Benthall, S., & Haynes, B. D. (2019). Racial categories in machine learning. Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* '19). doi:10.1145/3287560.3287575

Corbett-Davies, S., & Goel, S. (2018). The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning. arXiv:1808.00023

Garcia, M. (2016). Racist in the Machine. World Policy Journal, 33(4), 111–117. https://doi.org/10.1215/07402775-3813015

Gebru, T. (2020). Race and Gender. The Oxford Handbook of Ethics of AI, 251–269. doi:10.1093/oxfordhb/9780190067397.013.16

Hutchinson, B., & Mitchell, M. (2019). 50 Years of Test (Un)fairness: Lessons for Machine Learning. Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* '19). doi:10.1145/3287560.3287600

Kallus, N., & Zhou, A. (2018). Residual Unfairness in Fair Machine Learning from Prejudiced Data. arXiv:1806.02887

Mullainathan, S., & Obermeyer, Z. (2017). Does Machine Learning Automate Moral Hazard and Error? American Economic Review, 107(5), 476–480. https://doi.org/10.1257/aer.p20171084

Schlesinger, A., O'Hara, K. P., & Taylor, A. S. (2018). Let's Talk About Race: Identity, Chatbots, and AI. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). doi:10.1145/3173574.3173889

Turner Lee, N. (2018). Detecting racial bias in algorithms and machine learning. Journal of Information, Communication and Ethics in Society, 16(3), 252–260. https://doi.org/10.1108/jices-06-2018-0056

Wang, M., & Deng, W. (2020). Mitigating Bias in Face Recognition Using Skewness-Aware Reinforcement Learning. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 9322–9331

Zou, J., & Schiebinger, L. (2018). AI can be sexist and racist — it’s time to make it fair. Nature, 559(7714), 324–326. https://doi.org/10.1038/d41586-018-05707-8
