Racial Injustice + AI: Facial Scanning at Protests

Lauren Zaidel
Published in HackGuild
Sep 9, 2020 · 5 min read

By: Lauren Zaidel

I’d like to discuss something very important to the current state of our country. Our country has been shaken by calls for change, and today’s generation is demanding an end to racial injustice. As you may know, the Black Lives Matter movement has been ongoing for many years. But in recent times, following the killing of George Floyd and many other innocent Black people, protests have broken out across the country, signaling the deep disappointment our people feel toward the current criminal justice and policing system and their desire for change. There is the genocide of Muslims in China, antisemitism across the United States, and racism toward Asians normalized under the guise of the global pandemic originating in China. I could name countless examples. The question may be how racial injustice and Black Lives Matter relate to AI, and what technology has to do with social activism. Let’s delve deeper and discover the intersection of racial advocacy and artificial intelligence.

We’re going to discuss a few of the effects AI currently has on racial advocacy and its outcomes. The criminal justice system has recently been exposed as deeply flawed; in particular, the trial process and prison system are “coded” to put minorities at a disadvantage. AI has more pull on this sector than we may think, including identification, tracking, and policing automation. During this crisis, we’ve seen many protests arise across the country, mostly centered on BLM. What many people don’t realize, however, is that attending these protests puts an individual at greater risk of being falsely accused or imprisoned, all at the hands of the corrupt criminal justice system this country harbors. And finally, as a prisoner, an accused person, or a protester, the healthcare system is absolutely necessary to ensure the basic safety we should all be guaranteed when expressing our opinions and exercising our First Amendment rights. However, the healthcare system comes with its own data management systems, which, as you might have guessed, go hand-in-hand with every corruption issue we’ll see in the criminal justice system. Ready for 20 minutes of shock and confusion at how AI truly has a hold on our political opinions and human rights? Neither am I. Stay tuned for more…

Along with the rapidly growing fire beneath the Black Lives Matter movement come both violent and nonviolent protests, as well as public displays of dissent throughout our country. We’ve come to realize as a whole that trying to prevent these protests will only make these issues worse. The government needs to listen to what its citizens have to say, and our voices will be heard.

The Constitution guarantees our right to free speech. You’re allowed to assemble and protest in any lawful way, express your opinions, and say whatever it is you want to say. Yet we’ve seen numerous attempts on behalf of the Trump Administration to muffle our voices and make these protests as difficult as can be. So to adapt to these conditions, we’ve decided, as a group of individuals desiring change and equality for our Black counterparts, that we are going to make these protests as safe as possible for as long as they continue.

This has given birth to a new social media movement: the movement for protest safety. Along with simple infographics on ways to stay safe from tear gas, ways to protect others during a global pandemic, and how to avoid violent confrontations with the police comes one killer tip: how to avoid being caught on a security camera.

What protesters have come to realize is that anyone whose face can be picked up by facial recognition systems on public streets can be easily identified and named, leading to their arrest. Police officers are accusing protesters of unlawful actions, including thievery, public indecency, and many other minor and major crimes. If a face can be recognized, there’s a high likelihood that, in the protesting hotspots of our country, its owner will be arrested. Facial recognition is one of the most powerful surveillance tools ever invented, but a study by the federal government indicated that the technology is much less accurate at identifying women and minorities.

Let’s think about the logic here. If this technology inaccurately categorizes women and minorities, then at an event like a protest we could see supercharged policing and false indictments that disproportionately impact people of color. As a result, we’ve seen protesters covered in full-face masks, hoodies, beanies, and any sort of clothing or protection that hides the most recognizable features of their faces. AI essentially runs on a series of probability calculations, and this is most apparent in technologies like facial recognition. When an algorithm analyzes a face, it has a broad range of features and parameters it must measure in order to identify it accurately.
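To make that concrete, here is a minimal, hypothetical Python sketch of that threshold logic. The embedding vectors and the 0.8 cutoff are stand-in assumptions, not any vendor’s actual system; the point is structural: a “match” is just a similarity score compared against a threshold, and every misidentification starts with that score being wrong for someone.

```python
# A minimal, hypothetical sketch of face matching as a threshold decision.
# The embeddings here are random stand-ins; a real system would produce them
# with a trained neural network, and the 0.8 threshold is an assumption.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (closer to 1.0 = more alike)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_match(probe: np.ndarray, gallery: np.ndarray, threshold: float = 0.8) -> bool:
    """Declare a 'match' only if the similarity score clears the threshold."""
    return cosine_similarity(probe, gallery) >= threshold

rng = np.random.default_rng(seed=0)
protest_face = rng.normal(size=128)    # embedding of a face from protest footage
database_face = rng.normal(size=128)   # embedding of a face in a police database

score = cosine_similarity(protest_face, database_face)
print(f"similarity = {score:.3f}, match = {is_match(protest_face, database_face)}")
```

If the model producing those embeddings was trained mostly on white, male faces, the scores it produces for everyone else are noisier, and the very same threshold yields more false matches for them.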

As an ordinary person living a normal, unburdened life, accuracy may not seem like a big deal. For example, say I’m at home, sitting on my newest, comfiest couch. I’m scrolling on social media and see a viral celebrity look-alike app. I download it to have some fun, only to realize that I’m a young female of color, and now I look like Justin Timberlake. But it’s a completely different matter when the question is whether someone should be arrested. Black and brown people are more likely to be inaccurately identified and unfairly targeted. So now, when the BIPOC community of America decides to take a stand for the innocent lives their community has lost, at protests for their own rights and equality, they’re being disproportionately arrested and falsely accused.

According to The Washington Post, citing studies from MIT and the US National Institute of Standards and Technology, Asian and African American people were up to 100 times more likely to be misidentified than white men, depending on the particular algorithm and the specificity of the search. This once again brings us back to the question of who creates artificial intelligence algorithms. The bias in these data management systems traces back to the creator of the algorithm. If the person who coded the algorithm carries some inherent racism, or simply lacks information about the facial features of non-Eurocentric individuals, we see an unintentionally perpetuated cycle of inaccurate categorization and a failure to accept and accommodate other cultures in America. The faces of African American women in particular were falsely identified most often in the kinds of searches used by police investigators.
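As a rough illustration of what auditing for that kind of disparity looks like, here is a hypothetical Python sketch that compares false-match rates across demographic groups. The trial records below are invented purely for the example and do not come from the MIT or NIST studies.

```python
# A hypothetical audit sketch: compare false-match rates across demographic
# groups. The trial records below are invented placeholders, not data from
# the MIT or NIST studies.
from collections import defaultdict

# Each record: (group, system_declared_match, actually_same_person)
trials = [
    ("white men", False, False), ("white men", False, False), ("white men", True, False),
    ("Black women", True, False), ("Black women", True, False), ("Black women", False, False),
]

false_matches = defaultdict(int)            # wrong "match" verdicts per group
different_person_pairs = defaultdict(int)   # comparisons that should NOT match

for group, declared_match, same_person in trials:
    if not same_person:
        different_person_pairs[group] += 1
        if declared_match:
            false_matches[group] += 1

for group, total in different_person_pairs.items():
    print(f"{group}: false-match rate = {false_matches[group] / total:.2f}")
```

Run at realistic scale, a gap like the one those studies describe translates directly into who gets flagged, stopped, or arrested most often.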

Sen. Ron Wyden (D-Ore.) said the findings showed how “algorithms often carry all the biases and failures of human employees, but with even less judgment.” In a statement, he added, “Any company or government that deploys new technology has a responsibility to scrutinize their product for bias and discrimination at least as thoroughly as they’d look for bugs in the software.”

So, what exactly happens when these facial recognition systems track the wrong individual? When the police link the “correct” face to someone who was simply at the protest, in the wrong place at the wrong time? They throw a tear gas grenade or ram innocent bystanders with their batons. Hospitals are already overloaded with COVID-19 patients due to worsening flare-ups of the virus in the US. Now they’re being flooded with protest victims with burns, injuries, and more. The next social media movement is about what to do and where to go if you’ve been injured at a BLM protest. Yet there’s more to be reckoned with, and more hurdles to clear, in the AI algorithms used for patient identification and care in hospitals.

Check out HackGuild on all our platforms @hackguild to learn more about AI and its intersection with medicine.
