3 Major Responsible AI Incidents that Data Scientists Can Learn From

Emily Hadley
RTI Center for Data Science and AI
Nov 21, 2022

Reflecting on mistakes of the past can help prevent them from occurring in the future. Harmful AI incidents can provide useful insight for data scientists and other stakeholders. This blog post covers a small sampling of major ethics and bias incidents that are well known in the responsible AI space.


#1 Bias against Black individuals in health care algorithm

This famous study, published in Science in 2019, uncovered a racially biased algorithm that was in active use by hospitals and insurers to manage care for more than 200 million people. The algorithm systematically discriminated against Black individuals with chronic conditions by referring them to programs intended to improve care less often than equally sick White patients. Researchers found that with the biased algorithm, 17.7% of patients assigned to receive extra care were Black; with an unbiased algorithm, this proportion increased to 46.5%.

Key Learning Moment: The algorithm's developers initially relied on a “fairness through unawareness” justification, arguing that the algorithm must be fair because it used no race attributes. In other words, the algorithm was “unaware” of race. However, the algorithm assigned risk scores based on total health care costs accrued in one year, and those costs were $1,800 less for a Black patient with complex chronic conditions than for a White patient with the same conditions.

Takeaway for Data Scientists: It is not enough to assume an algorithm is fair simply because there are no sensitive attributes like race in the model. Data scientists must deeply interrogate all attributes for possible correlations with sensitive attributes and investigate outcomes using fairness metrics.
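To make this concrete, here is a minimal sketch of the two checks described above, written in Python with pandas. The DataFrame and column names (race, annual_cost, chronic_conditions, referred) are hypothetical stand-ins, not the study's actual data or variables.

```python
# Minimal sketch: proxy and outcome checks on a hypothetical referral model.
# Assumes a pandas DataFrame with columns: race (sensitive attribute),
# chronic_conditions (rough measure of health need), annual_cost (candidate
# proxy feature), and referred (the model's 0/1 extra-care decision).
import pandas as pd

def proxy_check(df: pd.DataFrame) -> pd.DataFrame:
    # Compare the candidate proxy feature across groups at similar levels of
    # need; large gaps suggest the feature encodes the sensitive attribute.
    return (
        df.groupby(["chronic_conditions", "race"])["annual_cost"]
          .mean()
          .unstack("race")
    )

def selection_rates(df: pd.DataFrame) -> pd.Series:
    # Demographic-parity-style check: share of each group referred to the
    # extra-care program.
    return df.groupby("race")["referred"].mean()

# Illustrative toy data only.
df = pd.DataFrame({
    "race": ["Black", "White", "Black", "White"],
    "chronic_conditions": [3, 3, 1, 1],
    "annual_cost": [4200, 6000, 1500, 1800],
    "referred": [0, 1, 0, 0],
})
print(proxy_check(df))
print(selection_rates(df))
```

In a real project these checks would be run on held-out data and paired with formal fairness metrics, but even this simple disaggregation would have surfaced the cost gap described above.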


#2 Faulty facial recognition has contributed to multiple false arrests

In a detailed piece published in March 2022, Wired explored three false arrests prompted by faulty facial recognition technology. In one case, a man with substantial tattoos was identified by facial recognition and falsely arrested, even though the person in the image had no tattoos. Although these individuals were cleared of all charges, they reported emotional and economic distress for themselves and their families as a result of the arrests. At the time of publishing this post, law enforcement is generally not required to disclose when facial recognition is used in an arrest. A newly proposed national bill would limit the use of facial recognition as the sole basis for establishing probable cause and require annual testing and auditing of facial recognition systems.

Key Learning Moment: Various sources have demonstrated that facial recognition can be inaccurate for particular groups, especially people of color and individuals who do not conform to traditional gender norms. Major sources of these inaccuracies are unrepresentative training data and a lack of impact assessments.

Takeaway for Data Scientists: Faulty tools have real consequences for individual lives. Data scientists should seek representative datasets and participate in impact assessments to address discrepancies and potential flaws in AI technologies before they are deployed. Data scientists can also partner with regulators to create regulatory frameworks that build trust in their systems and provide opportunities for remediation and appeals for those harmed by AI technologies.
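As a concrete, heavily simplified illustration of what a disaggregated evaluation might look like, here is a short Python sketch. The match results, group labels, and column names are hypothetical and do not come from any real facial recognition system.

```python
# Minimal sketch: disaggregating a face-matching error metric by group.
# Assumes a pandas DataFrame of hypothetical match attempts with columns:
# group (demographic group), is_same_person (ground truth), and
# predicted_match (the system's decision).
import pandas as pd

def per_group_false_match_rate(results: pd.DataFrame) -> pd.Series:
    # False match rate: share of different-person pairs the system
    # incorrectly matched, reported per group rather than overall.
    different_person_pairs = results[~results["is_same_person"]]
    return different_person_pairs.groupby("group")["predicted_match"].mean()

# Illustrative toy data only.
results = pd.DataFrame({
    "group": ["A", "A", "B", "B", "B"],
    "is_same_person": [False, True, False, False, True],
    "predicted_match": [False, True, True, False, True],
})
print(per_group_false_match_rate(results))
```

A single overall accuracy number can hide exactly the group-level failures described above; reporting error rates per group is one small, practical part of an impact assessment.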


#3 Discriminatory advertisements in employment and housing

A 2017 article co-published by ProPublica and the New York Times uncovered that Facebook allowed dozens of companies to exclude older workers from job ads, which violates the Age Discrimination in Employment Act of 1967. An earlier ProPublica investigation in 2016 found that Facebook allowed housing advertisers to exclude people with an “affinity” for African-American, Hispanic, or Asian people from seeing their ads, a likely violation of the Fair Housing Act of 1968. Although Facebook (now Meta) settled with the Department of Justice in 2022 and has since changed its practices, these tools were still developed and deployed and may have caused affected individuals to unjustly miss out on opportunities for housing or employment.

Key Learning Moment: Numerous calls have been made for better regulation of tech companies and AI development in the US. New laws are being developed, but there are also existing laws related to certain protected classes and activities. These laws are intended to protect individuals from harm, and tech companies should demonstrate greater regard for them.

Takeaway for Data Scientists: Although data scientists are not lawyers and should not be expected to know all laws, it is still important for data scientists to familiarize themselves with constitutional and civil rights protections and feel empowered to speak up when they observe law-breaking behavior in development and deployment of AI technology.

Interested in More?

Check out the AI Incident Database for more examples of AI incidents and to contribute incidents you have heard about.

This blog post is part of a Deep Dive into Responsible Data Science and AI series.

Disclaimer: Support for this blog series was provided by RTI International. The opinions expressed by the author are their own and do not represent the position or belief of RTI International. Material in this blog post series may be used for educational purposes. All other uses, including reprinting, modifying, and publishing, require written consent.
