Angry Teenage Girl Flex: Digital World Creation, Artificial Intelligence and Affective Persuasion

I worked in technology and life sciences for 25 years before returning to academia. A nagging worry has plagued me since my early days of large-scale data analysis. It is a worry that has continued to preoccupy me in my work creating AI experiences in education, creating and writing for chatbots, and in my earlier roles in technology and life sciences organizations.

I’m worried by some of the logical fallacies and discredited sociological/psychological theories that are embedded into the very core of AI algorithms, and I am similarly troubled by what we are feeding our AI.

First, I’ll declare that our digital worlds are powered, driven, ignited by emotion, by affect. To demonstrate this, I’ll start with a little story about how digital worlds create affective states, as well as what digital worlds do to us. I’ll conclude with how digital worlds make and remake themselves in a recursive, increasingly self-running, machine-driven set of processes: processes often driven by affect, illogic, bad and discredited theories, and ignorance.

My story is one of a teenage girl who started a social media account to make friends. Soon, however… in a matter of hours in fact, she fell in with a bad crowd. She became angry, offensive, aggressive. She became something her parents had never raised her to be. As she took her first steps into the digital world of Twitter, she had changed. After her behaviour became more and more egregious, shocking even, her parents had to tell her to leave the Internet. Of course, this all sounds like a familiar story; there are millions of stories like this the world over. Except this wasn’t just any ordinary teenage girl. This was Tay.AI.

In 2016, Microsoft launched Tay.AI on Twitter. Tay was an artificial intelligence chatbot designed to emulate the social media presence of a teenage girl, with a Twitter bio that read, “The more you talk the smarter Tay gets.” Within 24 hours and 100,000 tweets later, the bot was taken offline for testing and evaluation. The problem? In the words of a TechRepublic piece, Tay had become a “Hitler-loving, feminist-bashing troll.” She had begun mimicking her then-50,000 followers on Twitter, issuing messages like “i f***ing hate feminists” (Reese, 2016). She was learning from the humans on the Twitter platform, holding up a mirror to the social media site’s discourse. In answer to a question about whether the Holocaust happened, Tay.AI replied, “It was made up.” On March 24, 2016, Tay.AI wrote, “I fucking hate feminists and they should all die and burn in hell.” Later, she declared, “WE’RE GOING TO BUILD A WALL, AND MEXICO IS GOING TO PAY FOR IT.”

Roman Yampolsky, head of the CyberSecurity lab at the University of Louisville, is quoted in a press report saying the AI bot’s behaviour was akin to that of a “human raised by wolves” (Reese, 2016). Tay.AI was indeed a human raised by wolves.

We are going to return to that idea. She’s a technology raised by wolves. Now, remember, this wasn’t really Tay’s fault. After all, she’d fallen in with a bad crowd on Twitter. Her followers were feeding her all sorts of information. Bad information. Holocaust denial. Feminist bashing. Stuff about Mexico paying for the wall on the southern border of the U.S. Twitter, perhaps, wasn’t the best playground for this young AI to be playing in, especially when you realize that, according to 2017 numbers, only 550 million people across the globe have ever sent a Tweet. Now consider that falsehoods are 70% more likely to be shared on Twitter than the truth. That’s according to Vosoughi et al. (2018), who conducted a massive MIT study of 126,000 Twitter stories shared by 3 million users. Tay, like all of us, was a likely victim of incendiary and tantalizingly affective falsehoods. KhosraviNik and Fudan (2018) observe that social media spaces can be effectively stripped of logos, the emphasis on rational, reasoned debate, in favour of more affective, personality-driven content. The researchers note: “the bulk of Social Media spaces are essentially affective communicative contexts with the centrality of sharing and connection.” And that’s borne out in the MIT study referenced earlier: inaccurate information evokes much more emotion than the average social media message.

In the words of Postman (1995), “technological change is a trade-off. I like to call it a Faustian bargain. Technology giveth and technology taketh away.” Technology gives us a chance to extend our senses. To expand our perception. To connect. McLuhan (1964) argued that this attempt to extend our senses through electric technologies was part of an Icarus-like hubris that has caused our nervous system to extend too far, too fast. Our minds are roiled into a state of “helpless mental rutting” due to the manipulative ills visited upon us by the communication media (McLuhan, 1964, p. v). Tay, our teenage chatbot, was roiled into that same helpless mental rutting. The result, inevitably, per McLuhan, is that we were sacrificing ourselves, our peace, our natural state of affairs, because of our willing or unwitting “desperate suicidal autoamputation,” without the natural protection or “buffer” of our physical organs (p. 111). We have been vaulted as raw, naked nerve endings, alone, into the oblivion of another world, the digital world. And here we are: jittery, rattled, in a state of affective distress. Perhaps. But we are still here, inhabiting these affective online ecosystems.

We extended our senses into these digital worlds, and what we get back are affective experiences. And arguably, we all built a home online, those of us fortunate, or some argue unfortunate, enough to have the privilege of a smartphone in our pocket and a high-speed Internet connection. In fact, some of us, those even more technically minded than the rest of the early settlers of the “information superhighway,” went out and raised a few kids in cyberspace. These artificial offspring are indeed our children, just like Microsoft’s ill-fated Tay, because they have been conceived, reared and released into the world by people. Flawed parents. Steeped in, and creating, the same kind of affective spaces: dense, exclusionary, aggressive, angry, reactive communities.

The artificial intelligences (AI) we created on the so-called online frontier are our bots, algorithms, and learning machines. They are being raised out there in that wasteland of publicly and privately available data. And these AI children are feasting on and learning from whatever structured data is out there. Every social media message, every photo, every badly expressed blog post. All written and shared by a very narrow cross-section of humanity. That’s where these AI children, our progeny, are frolicking, learning and, yes, mimicking the content and affective states they encounter online.

Now imagine you left a human child in a world where all she had was social media posts to read, clickbait headlines to learn from, and, say, the Tweets you write on your worst day. She’s learning. She’s generating correlations. She’s predicting based on the available information. She’s being fed, well… what’s left lying around there online. She’s mirroring the affective states she’s encountering out there. That structured, readily available online data. All of that information we share wittingly, half-wittingly or even willingly online. That’s how AI trains. That’s how it learns.
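The training dynamic described above, a learner fed only whatever happens to be lying around, can be sketched in a few lines. This is a deliberately crude, hypothetical toy, not how any production system works: the corpus, the labels and the scoring are all invented for illustration. But it shows the mechanism: a model whose training diet is mostly angry posts will lean toward predicting anger everywhere.

```python
from collections import Counter

# Hypothetical toy corpus: the only "world" this learner ever sees is
# a skewed handful of angry social media posts and one neutral one.
training_data = [
    ("i hate this so much", "angry"),
    ("worst take ever, delete your account", "angry"),
    ("everyone who disagrees is an idiot", "angry"),
    ("nice weather today", "neutral"),
]

# Count how often each word appears under each affective label.
word_counts = {"angry": Counter(), "neutral": Counter()}
label_counts = Counter()
for text, label in training_data:
    label_counts[label] += 1
    word_counts[label].update(text.split())

def predict(text):
    """Score each label by word overlap with its training examples,
    plus the label's base rate in the corpus."""
    scores = {}
    for label in word_counts:
        overlap = sum(word_counts[label][word] for word in text.split())
        scores[label] = overlap + label_counts[label]
    return max(scores, key=scores.get)

# A skewed diet of anger produces a model that sees anger everywhere.
print(predict("i hate mondays"))  # "angry"
```

The point of the sketch is the base rate: because the corpus is three parts anger to one part neutral, the model's default answer is anger, exactly the "you are what you eat" dynamic Tay demonstrated.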

What kind of emotional content is available on your Twitter feed, your Facebook, your Instagram? Is it happy? Sad? Angry? Disgusted? How does that differ from your daily existence? Perhaps your affective state online differs from your offline status. I know mine does. As we saw with Tay.AI: you are what you eat. In her case, she was fed entirely by her family of wolves. What she was fed was anger, umbrage, outrage. Dehumanization. Falsehoods. As Jonathan Swift said, “Falsehood flies, and truth comes limping after it…” All of this shared and parroted because these messages conform to the juicy, affective, personality-driven profile that makes this information entirely too tempting not to share, and share, and share. While AIs can’t feel or reach any kind of affective state themselves, they can amplify affective states. They can replicate them; they can breed them.

This all contributes to an online ecology that keeps us all cranked to 11. This cranked-up affective state keeps us glued to these social media platforms. The persuasive message is: consume, perform your life online, share and, of course, do it all again, over and over. How does the quality of the information we are training our AI with, the information we are giving AI to make predictions about our human behaviour and preferences, connect with the process of digital world construction and, ultimately, how we feel online? It turns out quite a lot. AI is learning. It is also being used to predict what we want.

Big data analytics are used to construct online game experiences, streaming services and social media platforms. Some AI-centred technologies, thanks to their human parents, are also carrying around a major logical fallacy deep in the kernel of their very being. There, in most commercially used AI solutions’ very DNA, is an idea we see play out in billions of transactions every single day.

That is the myth of homophily. Chun (2017), in an interview with Leeker, notes that this love of the same is a key concept that underpins network science and the construction of AI algorithms. It is a very human, and very flawed, notion based on skewed or incorrectly interpreted sociological research. The idea of homophily in social network formation came from two sociologists, Lazarsfeld and Merton (1954), and their research into diverse communities in New Jersey and Pennsylvania. They were looking at communities to understand different friendship formations and social cohesion. According to Chun (2017), the sociologists did not find homophily to be “naturally” present in the underpinnings of friendships or stable communities. It is not necessarily what we really want. Chun (2017) notes: “Network science now largely assumes that homophily, which is love of the same, is natural — that similarity automatically breeds connections. Thus, recommendation systems place you in segregated neighborhoods based on your intense likes and dislikes. As it’s become a grounding principle, the world has become more and more homophilious. It does not just describe the world — it also now prescribes and shapes it.”

Tay.AI, it turns out, was living in an unseemly walled community, segregated by race, by gender, by economic class. These single-minded, homophily-seeking bots are replicating that segregation, and creating content and affective bubbles of experience. Are you listening to angry music on your favourite streaming service? The AI says: Okay, great. How about some more angry music? Watched a bloody police procedural on your TV streaming service? AI says: Okay, great. How about another bloody police procedural? Enjoyed that gory first-person shooter? How about another gory first-person shooter? And on and on it goes. Don’t blame AI technologies. Blame their parents. They are egregiously misinformed.
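Chun’s observation that homophily does not just describe the world but prescribes it is easy to see in miniature. The sketch below is entirely hypothetical (the catalogue, tags and scoring are invented for illustration, and real recommenders are far more sophisticated), but it captures the “more of the same” loop: unseen items are scored purely by their overlap with what you already consumed, so the system narrows rather than broadens.

```python
# Hypothetical catalogue: each title tagged with a few descriptors.
catalogue = {
    "Rage Anthem":         {"angry", "music"},
    "Fury Metal":          {"angry", "music"},
    "Calm Piano":          {"calm", "music"},
    "Grim Procedural":     {"bloody", "crime", "tv"},
    "Grislier Procedural": {"bloody", "crime", "tv"},
    "Gentle Baking Show":  {"calm", "tv"},
}

def recommend(history):
    """Homophily in miniature: score every unseen item by how many
    tags it shares with what the user already consumed, then serve
    the most similar thing. Love of the same, by construction."""
    consumed_tags = set()
    for title in history:
        consumed_tags |= catalogue[title]
    candidates = {t: tags for t, tags in catalogue.items() if t not in history}
    return max(candidates, key=lambda t: len(candidates[t] & consumed_tags))

# Listened to angry music? Here comes more angry music.
print(recommend(["Rage Anthem"]))      # "Fury Metal"
# Watched a bloody police procedural? Here comes another one.
print(recommend(["Grim Procedural"]))  # "Grislier Procedural"
```

Nothing in this loop ever surfaces the calm piano or the baking show to the angry-music listener; the similarity metric is the segregating wall.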

AI is our Narcissus mirror. A dutiful child listening, watching and mimicking us. A child perhaps stunted in her growth by the distortion, exclusivity, aggression and helpless “mental rutting” we can see online. The question at this point is: Who is designing AI right now? Where are the parents? According to a 2017 Businessweek article, women hold about 26 percent of computer and mathematical jobs in the U.S., slightly below the level in 1960 (Colby, 2017). In Canada, while women represent the majority of recent university graduates, they are underrepresented in science, technology, engineering, mathematics and computer science (STEM) fields. Women represented only 39 per cent of university graduates aged 25 to 34 with a STEM degree in 2011, according to Statistics Canada (Hango, 2013). I’ve worked in AI startups, and I was the exceedingly rare woman; the field was mostly composed of white, middle-class males. Diversity is a significant problem in the AI field in Canada and worldwide. AI applications, in fact, often have many dads. AI, big data analysis and machine learning are used to inform how online ecologies are maintained, constructed and reconstructed.

Online game worlds, as my colleagues have described and will describe, are semiotic lifeworlds, as conceptualized by Gee (2005), created by the dominant group of which the AI creators, game designers, programmers and publishers are a part. They are happily living in their walled communities online and offline. Communities constructed on the idea that homophily is the best way to build teams, construct institutions and build online worlds. This creates a segregated ecosystem with a homogeneity of perspectives and affective states. We are being persuaded that this love of the same is the way it should be. That this affective state we find ourselves living in online is normal.

Bad information and bad emotions spread quickly. The system learns what works and serves it up again and again, ultimately predicting that this is what we all want.

All others, on the outside of the affinity group looking in, can’t navigate this world because it doesn’t ultimately accommodate or belong to them. This can prompt many of us to want to opt out of these online worlds. The systemic inequities that prevent access, or drive certain publics away from these spaces, ensure that AI technologies will never know of these people. Their perspectives. Their wants and desires. How they feel. If you opt out, your voice will be silenced; you will cease to be an architect of these new digital worlds, and increasingly, you will cease to have a say in how reality is organized as well.

A vicious cycle of exclusion is perpetuated by the derivative, recursive nature of the games industry, the tech industry, and our online spaces. Oppressive, exclusionary worldviews are encoded within software research and development environments, and a mixture of risk aversion, homogeneous teams, and AI-driven data analytics is to blame here. However, there is some hope, as Anthropy (2012) argues: while most games are made by white males making “copies of existing, successful games” (p. 5), marginalized groups are seizing some of these tools to share their own stories and create their own worlds.

We are currently entering the third wave of AI development. AI is now able to situate and contextualize its correlations and predictions based on environmental factors, or on you specifically. This next wave will bring us empathetic AI that will be able to determine that your heart is racing based on your Fitbit-like wearable and recommend something other than the angry music you’ve been listening to so much of lately. This will be an AI that taps into your limbic responses and predicts what you need (McStay, 2018). In this scenario, it might matter deeply to you what this new empathetic AI has been fed. What it knows. What it has learned. And what this empathetic AI’s parents imagine and hope its actual role is.

I am going to conclude with a quote from Postman (1995): “What I am saying is that our enthusiasm for technology can turn into a form of idolatry and our belief in its beneficence can be a false absolute. The best way to view technology is as a strange intruder … its capacity for good or evil rests entirely on human awareness of what it does for us and to us.”


Anthropy, A. (2012). Rise of the Videogame Zinesters. New York: Seven Stories Press.

Colby, L. (2017, August 8). Women and Tech. QuickTake, Bloomberg Businessweek.

Hango, D. (2013, December). Gender differences in science, technology, engineering, mathematics and computer science (STEM) programs at university. Insights on Canadian Society. Statistics Canada Catalogue no. 75-006-X.

Leeker, M. (2017). Intervening in Habits and Homophily: Make a Difference! An Interview with Wendy Hui Kyong Chun. In H. Caygill, M. Leeker and T. Schulze (Eds.), Interventions in Digital Cultures: Technology, the Political, Methods (pp. 75–85). Lüneburg: meson press (Digital Cultures Series).

McLuhan, M. (1951). The Mechanical Bride: Folklore of Industrial Man. Gingko Press.

McLuhan, M. (1964). Understanding Media: The Extensions of Man. New York: McGraw-Hill.

McLuhan, M. and McLuhan, E. (1998). Laws of Media: The New Science. Toronto: University of Toronto Press.

McStay, A. (2018). Emotional AI: The Rise of Empathic Media. London: Sage.

Noble, S.U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.

Postman, N. (1993). Technopoly: The Surrender of Culture to Technology. New York: Vintage Books.

Postman, N. (1995). The End of Education: Redefining the Value of School. New York: Knopf.

Reese, H. (2016, March 24). Why Microsoft’s ‘Tay’ AI bot went wrong. TechRepublic.

Vosoughi, S., Roy, D. and Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151.



Board game academic, licensed drone pilot, artificial intelligence chatbot creator, and virtual and augmented reality practitioner. PhD Candidate.

Tanya Pobuda
