Artificial Intelligence Impact on Communities of Color

NHMC
Nov 5, 2018


By Marianna Elvira

Artificial Intelligence (AI) is having significant effects on communities of color, whether through voice recognition, facial recognition and surveillance, or predictive policing. Technology has been a predominantly white field, and bias can and does creep into the algorithms human beings create. To prevent such bias, people of color have to be brought into the room and be part of coding and creating these algorithms. There must also be greater transparency and oversight to ensure AI is not abused, as it easily can be.

AI is often trained on data drawn from people who resemble those doing the coding: white men. The lack of diversity in companies creating AI is one reason for this uniformity, and benchmark datasets reflect the same skew. For example, people with non-native English accents have difficulty being understood because the data used to train voice recognition systems is biased. Switchboard, a commonly used corpus of voice data distributed by the Linguistic Data Consortium in Philadelphia, draws heavily on white participants with Midwestern accents. Every new tool built on foundations like these can perpetuate inequality.
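One concrete way to surface this kind of skew is to audit who is actually represented in a corpus before building tools on top of it. The short Python sketch below shows the idea; the metadata fields are hypothetical stand-ins, not Switchboard's actual schema.

```python
from collections import Counter

def accent_breakdown(speaker_metadata):
    """Tally the share of a speech corpus contributed by each accent group.

    speaker_metadata: iterable of dicts with a hypothetical 'accent' field;
    real corpora document their speakers differently, so this is illustrative.
    """
    counts = Counter(s["accent"] for s in speaker_metadata)
    total = sum(counts.values())
    return {accent: n / total for accent, n in counts.items()}

# If one accent group dominates the training data, a recognizer tuned on it
# will usually perform worst on the groups it rarely heard.
```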

Joy Buolamwini, a PhD student at MIT’s Media Lab and the founder of the Algorithmic Justice League, documented in her Gender Shades project the “inadvertent negligence” that is exacerbating inequality. She tested how AI services from IBM, Microsoft, and Face++ classify faces of different skin tones and genders, using a benchmark of 1,270 images. All three companies were more accurate on men and on people with lighter skin tones. All three performed worst on darker-skinned women, misclassifying roughly one in three. The project found that “automated systems are not inherently neutral. They reflect the priorities, preferences, and prejudices — the coded gaze — of those who have the power to mold artificial intelligence.”
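The heart of that audit is straightforward to reproduce in spirit: score a model’s predictions separately for each demographic subgroup instead of reporting one overall accuracy. Here is a minimal sketch, assuming a labeled benchmark and a classify_gender function as hypothetical stand-ins; it is not the Gender Shades data or any vendor’s actual API.

```python
from collections import defaultdict

def disaggregated_accuracy(samples, classify_gender):
    """Accuracy per (skin_tone, gender) subgroup rather than one overall number.

    samples: iterable of dicts with 'image', 'skin_tone', and 'gender' keys
             (a hypothetical stand-in for a benchmark like Gender Shades).
    classify_gender: callable returning a predicted gender label
             (a stand-in for a commercial service such as IBM's or Face++'s).
    """
    correct, total = defaultdict(int), defaultdict(int)
    for s in samples:
        group = (s["skin_tone"], s["gender"])
        total[group] += 1
        if classify_gender(s["image"]) == s["gender"]:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# A single headline accuracy can hide a large gap between, say,
# lighter-skinned men and darker-skinned women.
```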

Most notably, AI is being used to predict crime and to estimate how much of a future risk people convicted of crimes pose. If left unregulated, AI is ripe for abuse by law enforcement. There is a long, documented history of racial discrimination, with some law enforcement agencies routinely and systematically violating human and constitutional rights. Body cameras are meant to help with transparency, but without proper oversight and regulation, they can instead be paired with facial recognition and used as surveillance tools in heavily policed communities.

AI is used in other facets of the criminal justice system, such as deciding whether to grant bail or release someone. When making that decision, a person’s risk of recidivism is a significant factor. This has often been a subjective judgment made by a judge, leaving room for bias. In principle, AI could flag someone as a low flight risk while removing some of a judge’s implicit bias and making such assessments faster. The unfortunate reality is that the AI will have absorbed the biases of the people who created it and of the data it was trained on.

ProPublica studied COMPAS, a risk assessment algorithm made by Northpointe and one of the most widely used in the country. Looking at more than 7,000 people arrested in Broward County, Florida, ProPublica found that only twenty percent of those predicted to commit violent crimes actually went on to do so. The algorithm falsely flagged black defendants as future criminals at nearly double the rate of white defendants, while white defendants were mislabeled as low risk more often than their black counterparts. When the researchers controlled for factors such as prior crimes, age, and gender, black defendants were still seventy-seven percent more likely to be flagged as at high risk of committing a future violent crime and forty-five percent more likely to be predicted to reoffend at all.
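ProPublica’s headline comparison is an error-rate calculation: among defendants who did not go on to reoffend, how often was each racial group labeled high risk? A hedged sketch of that false-positive-rate computation is below; the field names are illustrative, and this is not ProPublica’s actual analysis code or the COMPAS data.

```python
def false_positive_rate(defendants, group):
    """Share of `group` defendants who did NOT reoffend but were labeled high risk.

    defendants: list of dicts with hypothetical keys:
        'race', 'high_risk' (bool, the tool's prediction),
        'reoffended' (bool, the outcome observed over the follow-up period).
    """
    did_not_reoffend = [d for d in defendants
                        if d["race"] == group and not d["reoffended"]]
    if not did_not_reoffend:
        return 0.0
    flagged = sum(1 for d in did_not_reoffend if d["high_risk"])
    return flagged / len(did_not_reoffend)

# A tool can look similarly "accurate" overall for two groups while its
# mistakes fall very differently: compare, for example,
# false_positive_rate(defendants, "black") with false_positive_rate(defendants, "white").
```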

Predictive analytics can determine who is hired, who is granted a loan, or how long someone stays in prison. I have discussed the inaccuracy of and bias in facial recognition, and this technology will never be perfect. Such software could end up misidentifying innocent civilians as suspects, chilling free speech at protests, and priming officers to perceive certain people as more dangerous than they are, which can lead to officers using more force than a situation requires.

But AI can be used to do incredible things as well. The group Data for Black Lives is composed of “activists, organizers, and mathematicians committed to the mission of using data science to create concrete and measurable change in the lives of Black people.” Used this way, data and algorithms can give communities of color the information to fight bias, build movements, and promote civic engagement.

We must demand inclusion, transparency, and accountability to ensure these systems do not perpetuate historic inequalities and our own unconscious biases. Such technology can be abused by authoritarian governments, predatory companies, and the criminal justice system. Companies creating AI must bring people of color in to work on these systems and seek input from those most affected, such as people who have been arrested and people in heavily policed communities, to ensure proper oversight and regulation.

Google’s image labeling technology at one time classified Black people as gorillas, and the underlying problem has still not been fixed. Rather than training and testing its software with and by people of color, Google simply removed the gorilla label from its algorithm.


NHMC

Media advocacy/civil rights org. for the advancement of Latinos, working towards a media that’s fair & inclusive, & for universal/affordable/open communications