The Out of Rhythm Algorithm

Nikki Edmunds-Ekwueme
Jun 26, 2020


I’ve been musing the past couple of weeks on how to start this post, amidst racial turmoil and unrest that may be new to some but isn’t new to black people. Which tone should I take? How should I use my newly minted platform to speak about racism? Black people have been fighting their own revolutionary war for decades, across all industries and spaces, and it’s about time we gained our freedom, legitimately this time. I established this platform to self-educate, to develop my legal knowledge and analytical skills. But I also want to use it to share a part of my personal self when appropriate.

As a legal tech novice, I’ve already learned so much on this journey. Some good, some bad. This post will focus on the bad. On how tech perpetuates discriminatory practices. On how tech enables racism. On how there is a lack of diversity and parity in tech spaces. This is where my somewhat newfound passion for legal tech comes from: the urge to break those barriers and help spur much-needed representation in an industry dominated by white and Asian males. I may not be well-versed in data science, computing, coding, cryptology, and the like, but what I do know is that there aren’t enough people who look like me in those tech labs or in those big cushy office chairs sitting behind their CLO name plates.

We need to seek out ways to combat and prevent algorithmic biases. From the inside. Racism in technology may seem a fallacy to some, to those who don’t suffer its side effects, to those who benefit from it. But trust me when I say these biases definitely exist. With the incessant rise of artificial intelligence (AI), it’s up to the technicians behind the scenes to diversify their teams or educate themselves on how their technology lacks inclusivity. Machines are created by human beings, and encoded into them are the ignorance and implicit biases of their creators. Creators who have failed to implement racial oversight.

So How Is AI Constructed?

Products or software forged from AI gain their ability to discern from a process called “machine learning,” the method of training a computer to make independent decisions based on the knowledge it has been fed. This requires the software’s creator, a human being, to supply a bunch of data to the computer to help it make its own predictions and judgments based on any patterns it notices as it processes the data it’s fed. The issue is that machine learning creators tend to be homogeneous groups of white males, who fail to include representative, diverse data in their training sets. The data is usually unrepresentative, which yields the algorithmic biases we see today.
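To make the mechanics concrete, here’s a deliberately stripped-down sketch in Python (the groups, numbers, and data are all invented for illustration, not drawn from any real system). A simple classifier is trained on a data set where one group vastly outnumbers another, and it ends up working almost exclusively for the group it saw the most of:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, flip):
    # Synthetic stand-in for real-world data: the relationship between
    # the features and the label differs between the two groups (flip).
    X = rng.normal(size=(n, 2))
    y = (flip * X[:, 0] > 0).astype(int)
    return X, y

# Training data dominated by one group, exactly the skew described above.
X_a, y_a = sample(950, flip=+1)   # overrepresented group
X_b, y_b = sample(50, flip=-1)    # underrepresented group
model = LogisticRegression().fit(np.vstack([X_a, X_b]),
                                 np.concatenate([y_a, y_b]))

# Fresh test data for each group shows who the model actually works for.
for name, flip in [("overrepresented", +1), ("underrepresented", -1)]:
    X_test, y_test = sample(1000, flip)
    print(name, "accuracy:", round(model.score(X_test, y_test), 2))

# Roughly: the overrepresented group scores near 1.0, the underrepresented
# group near 0.0. The model simply learned the majority pattern and
# applied it to everyone.
```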

Essentially, machine learning models are built on data fed to them by human beings, and it’s undeniable that human beings are inherently biased. Generalization is a core part of how machines learn, and we all know how insidious generalizations can be. How do you think stereotypes of black people have been perpetuated through the years? It’s these generalizations that get us murdered unjustly.

How Does AI Affect People of Color?

General Face Recognition Use

Take facial recognition AI as the primary example. Women of darker complexion are often misidentified by facial recognition systems, while white men are almost always identified accurately. Joy Buolamwini, the founder of the Algorithmic Justice League and coiner of the phrase “coded gaze,” conducted an experiment with a facial analysis device in which she had to put on a white mask for the software to recognize her face. To construct facial recognition software or a device, one builds a machine learning data set from a collection of faces. And whose faces are usually included in those data sets? You guessed it: the faces of fair-skinned individuals. Which, I reiterate, is the issue; the scientists and researchers creating these systems are not diverse, and neither are their data sets.
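The gap Buolamwini documented is easy to miss if a vendor reports only one blended accuracy number, and easy to expose if you break the results out by subgroup. Here’s a minimal sketch of that kind of breakdown in Python (the benchmark results below are made up, loosely echoing the kind of disparity her research found, not her actual data):

```python
import pandas as pd

# Hypothetical benchmark results: one row per test image, recording the
# subject's subgroup and whether the system handled the face correctly.
results = pd.DataFrame({
    "subgroup": ["lighter-skinned men"] * 400 + ["lighter-skinned women"] * 300
              + ["darker-skinned men"] * 200 + ["darker-skinned women"] * 100,
    "correct":  [True] * 397 + [False] * 3      # lighter-skinned men
              + [True] * 279 + [False] * 21     # lighter-skinned women
              + [True] * 188 + [False] * 12     # darker-skinned men
              + [True] * 65 + [False] * 35,     # darker-skinned women
})

# One headline number hides the problem (about 7% error overall)...
print("overall error rate:", 1 - results["correct"].mean())

# ...while the per-subgroup breakdown exposes it immediately (under 1% error
# for lighter-skinned men versus 35% for darker-skinned women here).
print(1 - results.groupby("subgroup")["correct"].mean())
```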

To add insult to injury, another incident in 2015 revealed a mishap by Google Photos, whose software mislabeled an African-American man as a gorilla due to an error in its machine learning capabilities.

Predictive Policing

Policing is another example of AI’s biases. To illustrate, a number of U.S. cities have implemented PredPol, predictive policing software that identifies where and when crime is most likely to occur, promising police the ability to allocate their resources effectively and prevent crime. According to the company, its mission is to “help law enforcement keep communities safer by reducing victimization.” Right.

This is what they claim on their website: “The data we use for our predictions is very important. We make our predictions based only on victimization information, i.e. crimes that have been reported to police. This information is anonymized; no personally identifiable information is ever collected or used. We believe that protecting the privacy and civil rights of the residents of our communities is as important as protecting them from crime.”

In the past, PredPol has advocated for a method of policing called “broken windows” policing, which promotes heavily punishing petty crimes in an effort to discourage more widespread crime. A couple of issues result from this antiquated and unjust policy, in addition to the potential privacy concerns stemming from the data PredPol declines to release via public records. PredPol’s focus on areas where crimes have already been reported yields a high probability of partiality toward those specific areas. In fact, the bias is inevitable. Of course, directing policing toward areas with high reported crime helps consolidate policing efforts, but it also poses a huge threat, because the data will be biased by how policing has been carried out historically. As we know, heavy policing targets areas where police believe there are large amounts of crime, no matter how minor the violations are, and this usually occurs in communities of color. Consequently, the technology will redirect policing to areas that are already overpoliced, perpetuating the racial injustices we witness far too often. I no longer need to provide statistics comparing white and black victims of police shootings or police brutality. We all should know that black men and women, children and adults, are killed by police at a much higher rate than their white counterparts, and many studies show it.
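To see why that feedback loop doesn’t correct itself, here’s a deliberately simplified simulation in Python (a toy model for illustration, not PredPol’s actual algorithm). Two neighborhoods have the exact same underlying amount of crime, but one starts with more recorded crime simply because it was already policed more heavily. If each day’s patrol is sent wherever recorded crime is highest, and crime only gets recorded where a patrol happens to be, the gap never closes:

```python
import numpy as np

rng = np.random.default_rng(1)

TRUE_CRIME_PROB = 0.5   # the same chance of an incident in each neighborhood
recorded = [12, 6]      # neighborhood A starts with more *recorded* crime,
                        # a legacy of heavier historical policing, not more crime

for day in range(365):
    # "Predictive" allocation: the patrol goes wherever recorded crime is highest.
    target = int(np.argmax(recorded))

    # Incidents happen at the same rate in both neighborhoods, but only the
    # patrolled neighborhood gets its incident written into the data.
    if rng.random() < TRUE_CRIME_PROB:
        recorded[target] += 1

print("recorded crime after one year:", recorded)
# Something like [~190, 6]: neighborhood A absorbs every new report while B
# never catches up, so the model keeps "predicting" crime only in A.
```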

Regarding the privacy issue, PredPol preserves the data it acquires on servers managed by a third party, and this data isn’t disclosed to the public, which is an egregious lack of transparency. The company should release this data for accountability purposes. The police departments oversee the policing data in their own possession, but it’s unknown what control they have over the data under PredPol’s purview. So one must ask whether the police departments PredPol contracts with have complete authority over that data, or whether their role is limited to approving PredPol’s decisions to release it.

Further, the NYPD also employed similar video analytics software, created by the tech conglomerate IBM, to surveil New Yorkers and connect those surveilled to committed crimes based on their skin tone. IBM acquired footage from the NYPD’s surveillance cameras and created features that would allow police to search camera footage for images of people by hair color, facial hair, and skin tone, unbeknownst to civilians. The NYPD phased out the program in 2016. Read more about this operation here.

Employment

Job recruiters have also relied on AI to help them sift through hordes of resumes and recorded interviews of potential employees. Once again, this practice tends to hurt people of color, and also women, because of machine learning. HireVue, a popular video interviewing platform, uses AI to scan job applicants’ recorded video interviews. Where the data sets are based on the previous preferences of hiring managers, which skew toward white males, minorities and women have a lower chance of being chosen to move forward. The intention behind the technology’s creation is sensible in that it promotes recruitment efficiency. So the problem isn’t with the technology’s inception; it’s with the data sets used for machine learning, which propagate the biases we continue to see.
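For what it’s worth, this kind of skew is exactly what a routine audit can catch. Below is a hypothetical Python sketch of one common check, the EEOC’s “four-fifths” rule of thumb for adverse impact, applied to made-up pass-through numbers from a screening tool (HireVue’s internals aren’t public, so none of this reflects their actual system):

```python
# Hypothetical screening outcomes for 1,000 applicants (numbers are invented).
outcomes = {
    # group:       (advanced, total)
    "white men":   (320, 400),
    "white women": (210, 300),
    "black men":   (55, 150),
    "black women": (60, 150),
}

# Selection rate = the share of each group the tool advanced to the next round.
rates = {group: advanced / total for group, (advanced, total) in outcomes.items()}
best = max(rates.values())

print(f"highest selection rate: {best:.0%}")
for group, rate in rates.items():
    ratio = rate / best
    flag = "possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group:12s} rate={rate:.0%} ratio={ratio:.2f} {flag}")

# Under the four-fifths rule of thumb, any group selected at less than 80% of
# the top group's rate warrants a closer look. Here the black applicants'
# ratios (roughly 0.46 and 0.50) fall far below that threshold.
```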

Regulations Addressing Racial Discrepancies in AI

Private and public entities working with AI must take precautions to train on heterogeneous data, as well as regularly audit their systems for unintended biases and disparate impacts against certain groups. In 2019, a coalition of House and Senate Democrats proposed the Algorithmic Accountability Act of 2019, which would direct the Federal Trade Commission (FTC) to require entities that use, store, or share personal information to conduct automated decision system and data protection impact assessments. Basically, routine audits. It would apply to entities that are subject to the FTC’s jurisdiction and that make more than $50 million per year, possess or control personal information on at least one million people or devices, or primarily act as data brokers that buy and sell consumer data.

Section 374 of the Justice in Policing Act of 2020, introduced after George Floyd’s death, briefly addresses facial recognition technology as well. The police reform legislation, proposed last week by Senate Democrats, states that body cameras shall not be used to gather intelligence information, nor shall they employ facial recognition technology, presumably because of the technology’s inclination toward bias. There are also some exceptions mentioned in the bill.

Where My Generation of Lawyers Comes In

To state it plainly, we need more black people in the tech sector. More specifically, in C-suites, as in-house counsel, and obviously in the labs creating this software. AI and the greater tech industry clearly underrepresent minorities, serving as a poignant reminder that there will always be a swath of representation for white people. As new AI technologies are deployed, black people and other minorities need to be at the forefront and on the backend of that deployment. We need more transparency in training data for machine learning, which can be handled by internal privacy counsel working on disclosures and compliance policies and initiatives. We need researchers and technicians of color working on algorithms in all technology labs, not just AI labs. We need compliance executives and CLOs of color representing startups and tech companies releasing biased products. We really need all of these things and so much more. Granted, AI can’t be 100% free of bias, but we can work to ensure that we get as close to 100% as feasibly possible.

Check out this upcoming webinar on cultural intelligence and diversity in AI.

Read some interesting articles on this topic:

1. Read this for a general overview on how AI is inherently biased.

2. Watch this webinar about black venture capitalists’ experiences in the tech industry.

3. Read here to see how women are severely underrepresented in the tech industry.

4. Read here about how Simulmatics, a 1960s tech company, developed AI that helped Kennedy win the election while also creating biases by monitoring communities of color during the 1967 riots, which influenced the criminal justice information systems we have today.

  • Food for thought regarding #4: Could contact tracing (read my article on that here) lead to another method of tracking/policing people of color since it has been shown that COVID-19 disproportionately affects communities of color?

*Imported from thetippblog.com
