Decolonising AI: Notes on Racialization

rahul bhattacharya
ETHIX
5 min read · Jul 3, 2023

The Context

Interrogating Whiteness in AI requires understanding how Eurocentric cultural values and ideologies have shaped technology. Race and technology have been deeply intertwined since at least early modernity: claims of technological superiority were used to justify colonialism and imperialism, providing the rationale for white dominance over other races. As AI grows increasingly impactful in domains such as healthcare, education, employment, and public services, it is essential to explore the complex relationship between race, racialization, and racial identities within these technologies.

The study of AI and race is still a nascent field, but it is a crucial one: if biases of race, gender, caste, and sexuality are not accounted for, these technologies risk harming marginalized groups by perpetuating discrimination. The patriarchal racialization of AI is embedded in the semantic imagination of the technology itself. AI is often depicted as White, reflecting and reinforcing the hierarchies that have justified imperialism. This overrepresentation of Whiteness in robotics, bots, and robot toys mirrors broader societal ideologies and perpetuates the norm of White superiority.

We must recognize that AI systems are racialized and often imbued with biases that reflect a White, Eurocentric cultural frame. The frequent portrayal of AI as Caucasian in media and popular culture is evidence of these prejudices. In one experiment, researchers asked participants to identify the race of robots from several options, including “does not apply.” Only a minority of participants chose “does not apply”; the majority assigned each robot a race based on its colouration. The researchers concluded that “participants were able to easily and confidently identify the race of robots according to their racialization.”

This normalization of Whiteness in AI stems from the Eurocentric values that have shaped technology and society, and it sustains the myth that AI can be neutral or unbiased. Colour blindness is prevalent in Silicon Valley and the surrounding tech culture, where it serves to inhibit serious interrogation of racial framing. This myth of colour blindness is also known as colour-blind racism: a form of racism that denies the existence of racism and its impact on people of colour.

The myth of colour blindness is problematic because it erases people of colour from this white utopian imagery, influencing both people aspiring to enter the field of artificial intelligence and managers making hiring decisions. Its repercussions are serious: like a cancer, it must be exposed even in its slightest form and treated with heavy doses of education and painful dialogue.

All research is situated and shaped by the researcher’s assumptions, perspectives, and biases. The idea of a purely objective, value-neutral science is an illusion. Researchers should acknowledge and reflect on how their own backgrounds and identities shape their work.

In “The Ethics of Computational Social Science,” David Leslie discusses the risks of assuming a scientistic “view from nowhere” and of studying the objects of research solely through quantitative and computational lenses. Feminist and postcolonial scholars have critiqued the “view from nowhere” for more than twenty years, arguing that researchers should consider alternative perspectives and approaches that take social and ethical implications into account. Postcolonial, feminist, and indigenous theories provide frameworks for understanding how dominant groups universalize and normalize particular worldviews that benefit their interests. They recognize that knowledge is always partial, plural, and shaped by the contexts and power dynamics in which it is produced.

No researcher can actually achieve the “view from nowhere” that conventional philosophies of science have demanded. Much of Western philosophy and Enlightenment thought is predicated on ideals of objectivity, universality, and individualism that erase diverse cultural logics and experiences. Indigenous knowledge systems, by contrast, often emphasize relationality, spirituality, and reciprocity with the natural world; they represent “other ways of knowing” that should not be subjugated under the guise of progress or modernity. Feminist theorists have argued that the lived experiences of marginalized groups can provide privileged insights into systems of power that dominant groups are often blind to. Black feminist scholars like Patricia Hill Collins have developed frameworks such as the matrix of domination to demonstrate how intersecting systems of oppression shape knowledge in society.

Interrogating the racialization of AI underscores how race and power operate within technology, urging us to work towards AI systems that serve the needs of all communities. This vision requires acknowledging and mitigating bias, promoting inclusive development, and using AI to challenge existing inequalities — recognizing that the project of racial justice is an ongoing and collaborative effort.

Dismantling Whiteness in AI necessitates collective action to demand accountability and inclusive practices. AI technologies reflect the cultures in which they are built; with more diversity, they can promote equity and empower marginalized groups. Examining data and algorithms for biases is also critical. Only by acknowledging these prejudices and their societal effects can we harness AI to challenge existing power structures and build equitable systems.
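To make “examining data and algorithms for biases” concrete, here is a minimal sketch of one common audit, a demographic parity check, written in Python. The dataset, column names, and tolerance threshold are all illustrative assumptions rather than anything prescribed in this essay, and no single metric can substitute for the social analysis argued for above.

```python
# A minimal sketch of one concrete bias check: demographic parity.
# Assumes a tabular dataset with a hypothetical protected-attribute
# column ("group") and a model's binary decisions ("approved").
# All names and the threshold are illustrative, not from the article.

import pandas as pd

def demographic_parity_gap(df: pd.DataFrame,
                           group_col: str = "group",
                           outcome_col: str = "approved") -> float:
    """Return the gap between the highest and lowest rate of
    positive outcomes across groups. 0.0 means equal rates."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Toy data: two groups receive favourable decisions at different
# rates (80% vs 50%), which the audit surfaces as a 0.30 gap.
data = pd.DataFrame({
    "group":    ["A"] * 10 + ["B"] * 10,
    "approved": [1] * 8 + [0] * 2 + [1] * 5 + [0] * 5,
})

gap = demographic_parity_gap(data)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.1:  # illustrative tolerance, not a normative standard
    print("Flag for review: outcome rates differ across groups.")
```

A check like this can only flag a disparity; deciding whether that disparity is unjust, and what to do about it, remains the kind of contextual, collective judgment this essay calls for.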
