Coding Bias

Indigo Dewdrop
Emergent Concepts in New Media Art 2019
Dec 21, 2019

Since January 2015, Snapchatters have used lenses to transform themselves into puppies, babies, and flower princesses. These filters, artistic media in their own right, have a reach that many other artists cannot even begin to hope for. According to Statista, Snapchat has 210 million daily active users (“Snapchat Daily” 2019). Yet if we ask ourselves who designed the original Snapchat lenses, chances are we have no idea. Even if we turn to Google, the “artist” is difficult to identify.

While the creators may be anonymous, their works still reveal their biases and prejudices.

The company has received backlash for racist character filters. The Bob Marley lens, for example, released on April 20 (the world’s unofficial cannabis celebration day), was criticized for promoting modern-day blackface and caricaturing the cultural icon.

The “Bob Marley” Snapchat Lens

Among the many original face-swap, puking-rainbow, and dancing-hot-dog filters, there are also “beautification” filters. These filters typically clarify and lighten the user’s skin, lighten the eyes, add freckles, and shrink the nose. Such changes emerge from and reinforce Western beauty standards. Because most users of color do not naturally bear these features, they may experience the app differently from their white counterparts: through a lens of Otherness.
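To make the design-choice point concrete, here is a minimal, hypothetical sketch of what a naive “brightening” pass could look like, assuming OpenCV; the file names and adjustment values are invented and this is not Snapchat’s actual implementation.

```python
# A hypothetical sketch of a naive "beautification" pass, assuming OpenCV.
# This is NOT Snapchat's code; it only illustrates how a beauty standard
# can be written directly into the arithmetic.
import cv2

frame = cv2.imread("selfie.jpg")  # hypothetical input image

# Brighten every pixel: new_pixel = 1.1 * old_pixel + 30 (clipped to 0-255).
# Equating "beautified" with "lighter" is a choice the coder makes here.
beautified = cv2.convertScaleAbs(frame, alpha=1.1, beta=30)

cv2.imwrite("selfie_beautified.jpg", beautified)
```

Nothing about that arithmetic is neutral; the values encode someone’s idea of what an “improved” face looks like.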

These design choices are clouded in code, making it difficult to critique the “artists” behind the filters. In Race After Technology: Abolitionist Tools for the New Jim Code, Ruha Benjamin addresses this anonymity, writing that “racism thus becomes doubled — magnified and buried under layers of digital denial” (Benjamin 6). The coders themselves may not have discriminatory intent, but Benjamin notes that this denial of accountability misrepresents the coding process “as morally superior” to human bias.

Now, Snapchat has democratized the filter-making process, allowing users to create and submit their own lenses to its community.

This not only demystifies authorship but also diversifies access to the inner workings of the app.

However, Snapchat lenses are merely one symptom of a larger problem in the tech world. Algorithms for facial analysis and automated job-application screening face the same anonymity problem as the original Snapchat lenses did. Technological advances built on code and machine learning are often portrayed as superhuman and incapable of bias, even though the means by which they are produced are certainly not.

To understand how algorithmic bias works, it is important to know exactly what an algorithm is. As an amateur coder myself, I could not explain the concept better than this episode of Netflix’s Explained:

The video explains that machine learning algorithms process multiple sets of data and apply what is “learned” from that data to new cases that call for similar logic. The more expansive and diverse the data sets, the wider-ranging the output (“Explained: Coding” 2019).
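As a rough illustration of that loop, here is a minimal sketch, assuming the scikit-learn library; the feature values and labels are invented, and the example stands in for far larger real-world data sets.

```python
# A minimal sketch of the "learn from data, apply to new cases" loop,
# assuming scikit-learn is installed. All values here are invented.
from sklearn.linear_model import LogisticRegression

# Training data: each row is a set of numeric features, each label is the
# category the algorithm should learn to associate with those features.
X_train = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.9], [0.1, 0.8]]
y_train = ["group_a", "group_a", "group_b", "group_b"]

model = LogisticRegression()
model.fit(X_train, y_train)  # the "learning" step

# The model now applies that learned logic to cases it has never seen.
print(model.predict([[0.85, 0.15], [0.15, 0.85]]))
```

If the training rows only ever describe one kind of example, the “logic” the model learns is just as narrow, which is exactly why the breadth and diversity of the data set matter.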

According to Joy Buolamwini’s research with the MIT Media Lab, facial recognition technology works significantly less well for women and people of color. Her studies show “that darker-skinned females are the most misclassified group (with error rates of up to 34.4%). The maximum error rate for lighter-skinned males is 0.8%” (“Gender Shades” 1). The most misclassified groups are also the least represented in the tech world; that is no coincidence.
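The disparity itself is straightforward to measure once predictions are grouped by subgroup. Here is a minimal sketch, with invented records standing in for the benchmark’s labeled images:

```python
# A minimal sketch of an intersectional error-rate audit. The records are
# invented for illustration; the real Gender Shades benchmark used over a
# thousand labeled images across skin-type and gender subgroups.
records = [
    {"group": "darker_female", "true": "female", "predicted": "male"},
    {"group": "darker_female", "true": "female", "predicted": "female"},
    {"group": "lighter_male",  "true": "male",   "predicted": "male"},
    {"group": "lighter_male",  "true": "male",   "predicted": "male"},
]

errors, totals = {}, {}
for r in records:
    totals[r["group"]] = totals.get(r["group"], 0) + 1
    if r["true"] != r["predicted"]:
        errors[r["group"]] = errors.get(r["group"], 0) + 1

# Report each subgroup separately -- an overall accuracy number would hide
# exactly the gap the audit is looking for.
for group, total in totals.items():
    print(f"{group}: {errors.get(group, 0) / total:.1%} error rate")
```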

Buolamwini has committed herself to identifying and rectifying algorithmic bias. Her research began when she developed the Aspire Mirror, a project that used generic facial recognition software to let users see their faces reflected as something that inspires them, such as a lion.

Buolamwini quickly realized that the software could not recognize her face; she had to use a lighter-skinned peer’s face just to test her own creation.
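To see how invisibly that failure happens, consider a minimal sketch of an off-the-shelf detection pipeline, assuming OpenCV’s stock Haar-cascade detector; this is not necessarily the software Buolamwini used, and the file name is hypothetical.

```python
# A minimal sketch of off-the-shelf face detection, assuming OpenCV.
# Not necessarily the library behind the Aspire Mirror.
import cv2

# Load the frontal-face Haar cascade that ships with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

frame = cv2.imread("mirror_frame.jpg")  # hypothetical camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

if len(faces) == 0:
    # For Buolamwini, this is the branch that fired on her own face:
    # the software simply behaved as if no one were there.
    print("No face detected.")
else:
    print(f"Detected {len(faces)} face(s).")
```

There is no error message and no acknowledgment of bias, only an empty result, which is part of why these failures are so easy to overlook.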

I could not help but think of bell hooks’ discussion of black female spectatorship in “The Oppositional Gaze.” hooks writes: “black female spectators have had to develop looking relations within a cinematic context that constructs our presence as absence, that denies the ‘body’ of the black female” (hooks 118).

The distinction between spectatorship and user-ship, especially in the cases of Snapchat lenses and the Aspire Mirror, is the interactive, immersive, personal nature of the digital space. You are looking at yourself through Otherness. This complicates the oppositional gaze, obfuscating the latent power structures.

Joy Buolamwini’s “Coded Gaze”

In her 2017 TED talk, Buolamwini coins the term “the coded gaze” to refer to the power structures that are embedded into code, and thus into our relationship with technology (“How I’m fighting” 2017). The term implicitly suggests that algorithmic bias necessitates a distinct black female user-ship. Buolamwini’s discussion of the coded gaze not only helps to reconstruct an oppositional gaze toward algorithmic bias, but also opens up a discussion of how to combat that bias on the back end.

Adam Harvey’s CV Dazzle

Buolamwini’s work is in conversation with another emergent art form, CV Dazzle. Developed by Adam Harvey, CV Dazzle is a fashion “toolkit” that protects its wearers from several high-end facial detection systems. The Anti Face is unrecognizable, just like Buolamwini’s. Through negation or misidentification, faces of color have already been deemed Anti Faces.

CV Dazzle rejects not only unwitting surveillance but also the extraneous digital identities that facial recognition systems impose on users. It is a rejection of the multiplicity that the digital space necessitates, posing that necessity as a form of oppression. Alondra Nelson astutely notes that “despite the easy proliferation of selves in the digital age, the flux of identity…has long been the experience of African diasporic people” (Nelson 3). She explains how Du Bois’s double consciousness had already grappled with this fragmentation of selfhood. Thus, CV Dazzle may cater to a privileged audience and overlook a central aspect of the black American experience.

This is not just a theoretical discussion. Algorithmic bias has corporeal consequences. A September 2019 NBC article states: “The Detroit police department won key support Thursday for its use of facial recognition technology amid vocal concerns about privacy violations and false identifications” (Einhorn 2019). If the coded gaze is not addressed seriously, it could have grave and tangible implications, especially for communities of color.

Luckily, algorithmic bias can be mitigated. Coders like Buolamwini are working hard to emphasize the importance, indeed the necessity, of diversity in the tech industry.

Works Cited

Benjamin, Ruha. “The New Jim Code.” Race After Technology: Abolitionist Tools for the New Jim Code, Polity, 2019.

Buolamwini, Joy, and Timnit Gebru. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of the 1st Conference on Fairness, Accountability and Transparency, PMLR 81, 2018, pp. 77–91.

Buolamwini, Joy. “How I’m fighting bias in algorithms.” TED: Ideas Worth Spreading, Mar 2017. https://www.youtube.com/watch?v=UG_X_7g63rY

Einhorn, Erin. “Detroit police can keep using facial recognition — with limits.” NBC News, Sep 2019, https://www.nbcnews.com/news/us-news/detroit-police-can-keep-using-facial-recognition-limits-n1056706.

“Explained: Coding.” Season 2, episode 6, Netflix, 2019.

hooks, bell. “The Oppositional Gaze: Black Female Spectators.” Reading Images, 2001.

Nelson, Alondra. “Future Texts.” Social Text, no. 71, Summer 2002, pp. 1–15.

“Snapchat Daily Active Users 2019.” Statista, https://www.statista.com/statistics/545967/snapchat-app-dau/.
