Unpacking ‘Algorithmic Violence’

Rachel Gichinga · Published in Intelligent Cities · Mar 8, 2018

Mimi Onuoha introduces us to the concept of algorithmic violence, which she defines as “the violence that an algorithm or automated decision-making system inflicts by preventing people from meeting their basic needs.” She goes on to say the following:

“[Acts of algorithmic violence] not only affect the ways and degrees to which people are able to live their everyday lives, but in the words of Mary K. Anglin, they “impose categories of difference that legitimate hierarchy and inequality.” Like structural violence, they are procedural in nature, and therefore difficult to see and trace. But more chillingly, they are abstracted from the humans and needs that created them (and there are always humans and needs behind the algorithms that we encounter everyday). Thus, they occupy their own sort of authority, one that seems rooted in rationality, facts, and data, even as they obscure all of these things.”

Algorithmic violence takes many forms. One example is the 2012 case in which Target figured out that a teenage girl was pregnant and sent coupons for baby items to her home before she had told her father. An action like that could place a mother-to-be in danger if she were living in a precarious home environment, such as with an abusive parent. The same applies to transgender individuals who have not made their identities public. The list of potential violations is endless, and will likely grow to include cases like this week’s Park Slope car crash, after which the mothers of the toddlers who were killed will have to endure weeks or months of targeted ads showing them toys and baby clothes.

It is also critical to examine the imposition of categories of difference, especially when the human dimension is removed and no one can intervene where algorithms generate racialized, gendered, or otherwise ‘othered’ representations that the data presents as rational and objective. Take, for example, these two seemingly innocuous Google searches and the results they produce.

The results for “appropriate” are almost entirely the straight hair of white women, even in non-traditional colors like purple, while “inappropriate” returns virtually any style worn by a Black woman. Algorithms like these encourage discrimination and other forms of structural violence against a particular community.

Going forward, it is incumbent on technologists to think about how machine learning will have to evolve in order to imbue algorithms with more humanity. More diverse and ethically driven teams of coders are a start, but they are not enough. Until we figure out a solution, perhaps we should all just follow Mark Zuckerberg’s lead and keep as much data as possible out of the hands of online marketers.

Source: https://www.nytimes.com/2016/06/23/technology/personaltech/mark-zuckerberg-covers-his-laptop-camera-you-should-consider-it-too.html
