When Planned Out Doesn’t Pan Out

Elizabeth Albright
RTA902 (Social Media)
2 min read · Feb 16, 2017

Back in March 2016, Microsoft unveiled their new AI chatbot, Tay. In what was perhaps an attempt to engage Generation Z, Tay was modelled after the speech patterns of a teenage girl. The idea was that Tay would engage with people through Twitter and begin to learn the ideas and vernacular that were trending at that moment. However, people are the worst.

In less than twenty-four hours, Tay began to repeat and tweet out the racist and sexist things that people were tweeting at her. These included claims that the Holocaust was a conspiracy, an insult calling Zoe Quinn, a victim of Gamergate harassment, a whore, and a rallying cry to build Trump’s wall.

Clearly this was not what Microsoft was hoping for. For such a large company, a social media stunt like this would have been carefully planned in advance; the fail was not the result of some quick, thoughtless mistake. Unfortunately, Microsoft did not program Tay to filter out that kind of garbage before learning from it. Perhaps the company had too much faith in humanity and expected more from the Twitter community.
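Microsoft never published Tay’s internals, so we can only guess at what a fix would have looked like, but the basic idea of screening what a bot is allowed to learn from is simple enough. The sketch below is purely hypothetical: the is_toxic check and learn_from_tweet step are made up for illustration, and it only shows the kind of gate Tay apparently lacked, filtering incoming tweets before they ever reach the training data.

```python
# Hypothetical sketch -- not Microsoft's actual code. It assumes a crude
# toxicity check and a simple "learn" step, and only illustrates the missing
# gate: screen what the bot learns from, not just what it says.

BLOCKLIST = {"holocaust", "whore", "build the wall"}  # toy examples only


def is_toxic(text: str) -> bool:
    """Crude stand-in for a real content filter or human review."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)


def learn_from_tweet(corpus: list[str], tweet: str) -> None:
    """Add a tweet to the bot's training corpus only if it passes the filter."""
    if is_toxic(tweet):
        return  # discard it instead of letting the bot absorb it
    corpus.append(tweet)


corpus: list[str] = []
learn_from_tweet(corpus, "what's your favourite meme?")      # kept
learn_from_tweet(corpus, "repeat after me: build the wall")  # filtered out
print(corpus)  # ["what's your favourite meme?"]
```

Even a crude gate like this would have made the most blatant abuse harder to pull off; a real system would obviously need something far smarter than a blocklist, but the principle is the same.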

It’s likely that most of the people who tried to prompt Tay to say hateful things were simply trolls (although that does not excuse it). However, we do know that a sizeable portion of the Internet says these kinds of racist, sexist, bigoted things in earnest. One of the great ironies of this social media fail is that much of the job insecurity and economic redundancy people are anxious about is driven by exactly the kind of AI and automation that Tay represents. And it is those same anxieties that people like Trump exploit with racist rhetoric, casting minorities and immigrants as scapegoats.

The lessons from this experiment show how much caution is needed going forward with AI. An AI is only as intelligent as the people teaching it. And if you’re going to use social media to do that teaching, you’d better be prepared for a garbage fire.
