Is Artificial Intelligence safe for Black women?

Artificial intelligence (AI) is part of our day-to-day lives. Think about the chatbots you interact with when getting support or information, the images you see on your Netflix homepage, Alexa, Siri and other smart assistants, smart cars, and more. But AI isn't always harmless, particularly when it intersects with gender-based violence, and especially for Black women and other marginalised communities.

Online abuse isn't just the verbal abuse we often think of, like hate speech or harassment. It spans a spectrum of ways someone can experience harm online, including images, videos and speech. AI can help filter some of these forms of abuse, but the technologies through which abuse is facilitated often develop as fast as, or faster than, AI solutions, so abuse can go undetected.

There are lots of different examples of how AI can lead to harm. AI systems can be biased based on who builds them, how they’re developed, and how they’re ultimately used. Exploring these themes in relation to how Black women experience AI harm can help us better understand how these systems are harmful to everyone.

For example, algorithms within AI systems choose what information to present to users based on a range of indicators, and this can be really harmful. This can include algorithms pushing content from racist extremists, as well as mis- and disinformation about sexuality. Another example is what Safiya Umoja Noble calls "Algorithms of Oppression": her research highlights the way search engines like Google reinforce and codify racism, exploring how Black women and girls are, or are not, included in particular search results. For example, searching "good hair" was found to surface photos of white women, whereas "bad hair" brought up photos of afro hair.

Another way AI can lead to harm is when it makes decisions without transparency about why and how those decisions are made. Uber uses AI to automatically assign jobs to its drivers, and Amazon is known to use AI monitoring systems to track staff in its warehouses. Either use of AI could lead to a worker losing their job, or being prevented from accessing work, with no visibility as to why: this information is not public, and is often not, and sometimes cannot be, shared by the companies.

Finally, a number of AI predictive and recognition systems are being developed despite clear evidence and well-argued warnings about their dangers. One example is facial recognition. It is already used by the police without informed consent, including vans driving through UK cities with facial recognition devices mounted on them, scanning migrants' faces at the border, and scanning crowds of protestors.

Even when used in other ways, the work of Joy Buolamwini found that facial recognition technologies tend to be most accurate on white men and least accurate on Black women. This is because a predominantly white, male tech workforce has developed this technology on predominantly white, male data. This isn't new: tech platforms and teams are now all too aware of the risks of a lack of diversity and of unrepresentative or biased data, or they should be.

Luckily there are amazing activists and organisations out there, working to keep on top of what’s happening in AI and how we can better protect our communities. For starters, they’ve successfully campaigned to get facial recognition in public places banned in the EU AI Act (as it currently stands).

There are also some really useful AI solutions being developed to mitigate harm and to build 'AI for good'. For example, bots which provide safe and inclusive sexual health advice to women and girls in situations where they cannot access this advice another way. And there are inspiring organisations working with tech companies and legislators to make the development of AI safer, more inclusive and accountable to fundamental human rights.

At Glitch we’re starting our journey to understand what work we can do in AI. We want to explore where AI can help us do our work better and where we need to activate our advocacy to prevent and mitigate harm.

Step one has been to explore — where do Black women fit in? Quite quickly we found that very few technologies have been developed with Black women in mind and there’s a scarcity of research that explores AI from an intersectional perspective. However, there are a number of amazing Black women at the forefront of AI ethics work so we hope to build on their work and want to hear from you if you’ve got views about AI you want to share with us!
