‘Why are all the Twitterbots white?’ intra-action and structural racism on the web.

The sad reality is that the internet is plagued by structural racism. It is ‘whiteness’, not voice, that affords us recognition in the digital world; a space where an estimated half of all content is in English.
As Nakayama describes, “Whiteness is not static, but changes to secure its position of domination and it is important for scholars to pay attention to the role of social media in reproducing whiteness”.
But how do we ‘pay attention’ when the challenges of ‘Big Data’ and ‘sampling bias’ lead to inaccurate findings?
Whilst researchers often make the mistake of “applying machine learning and data mining algorithms without an understanding of the Twitter user population”, we can acknowledge that there is never really an unbiased way of interpreting a social phenomenon.
One might choose to ‘know’ Twitter by acknowledging the reality that the majority of the company’s employees are white men. This ‘knowledge’ becomes troublesome if we consider that the app’s audience does not fall along the same demographic lines as its creators (or at least this is the case in America, where Pew Research reports that a higher proportion of African-American teens use Twitter than white teens). Who, then, is this platform being created for, and what does this allow us to ‘know’ about it?
This question was partially answered in the Twitter Bias project we started in ‘Social Research in the Digital World’, where we were instructed by Dr. Jenna Condie to create a gender-neutral persona that we could apply to two separate gendered profiles.
With the requirement to create a persona that could generate the most impact through likes, follows and retweets, we assigned our persona the following values: feminism, vegetarianism, inner-western Sydneyism and, unconsciously, ‘whiteness’.
Whilst we tried to be objective and separate ourselves from this persona, it was almost impossible because, as Katelyn notes, when we come to the internet, even as researchers, we bring with us our own predispositions; predispositions that can tell us a great deal about the way we, as digital natives, unconsciously reproduce social inequalities.
For example, why is it that we don’t have Twitterbots named Ahmed and Fatima, Hyun and Joo, or Kosta and Athena?
Is it because of the situatedness of our class, or is this unconscious ‘white-wash’ just another way that we can interpret, and ‘know’ the digital?
We didn’t stop to consider that we might be unconsciously reproducing a form of cultural imperialism by saturating a space with ‘whiteness’ and thereby eroding contradictory world-views that were fighting for a voice on the internet.
Perhaps we need to consider intentionally writing race OUT of our Twitterbots. Could we use unmarked caricatures instead?
Or perhaps we could have a ‘white’ male and a ‘non-white’ female to test the relative impact of intersectionality?
Jenna’s instruction that our international students should publish their blogs in both their native and second language pointed to one way we could be more than ‘white’ on the web.
To completely ignore the way that our own fallibilities as researchers illustrate the reproduction of systemic racism on the web would be a wasted opportunity to address our own subjectivity and to ‘know’ something about the way we ‘intra-act’ with it.

