‘A Strange Intruder’: The Truth About Artificial Intelligence Chatbots

Dr. Tanya Pobuda
11 min read · Dec 7, 2021

--

This story starts with disillusionment, and a heaping helping of despair about some very bad jobs I’ve had. To work through this, I had been writing a PhD dissertation focused on the merger of artificial intelligence (AI) and virtual reality (VR) technologies in training.

In particular, I was looking at these technologies as a means to train people to have empathy in the workplace, to reduce bullying and harassment behaviours.

As part of this research, I set about getting certified on IBM Watson, specifically in the creation of natural language processing (NLP) chatbots. Leaning into the research and exploring this world still further, I wrote an interactive educational game about being a project manager in an organization undergoing massive structural changes (read: layoffs). And that game was based on a painful true story of my past career as a project manager and communications professional before returning to school (that is a story for another time).

I was incredibly curious about all of the processes and technologies that underpin chatbots. What are the limits of AI natural language processing tools? These technologies drive the AI companions that touch our lives, like Apple’s Siri, Amazon’s Alexa and Google’s Assistant. I wanted to know how it all worked. So what’s really happening in the AI black boxes that drive these helpful chatbots? This discussion, informed by my professional and educational experience toiling in the AI chatbot mines, will pull back the curtain on what’s really going on during the care and feeding of our increasingly pervasive AI companions.

An image of a person at a window with the text, “A Strange Intruder: the truth about AI chatbots.”
Photo by Noah Silliman on Unsplash

Today, some affluent publics living in North America encounter artificial intelligence (AI) natural language processing (NLP) chatbots on a weekly or daily basis. Encounters with these tools are becoming commonplace as an increasing number of consumers integrate AI chatbots into their homes and businesses launch chatbot-based services. The NLP market is estimated to be worth $28.6-billion (U.S.) by 2026 (Research and Markets, 2019). As of 2020, 6.7 million Canadians, or 17.9 per cent of Canada’s population, owned a smart speaker (eMarketer, 2019).

In the case of AI natural language processing, we’ll discuss how exploitative work practices and legions of faceless, often poorly paid, gig-economy human workers are, in fact, behind the ‘magic’ of this emerging technology. In this, I rely on Social Shaping of Technology (SST) theory as articulated by Mackenzie and Wajcman (1999) and Baym (2015). This theory examines how oppressive, exclusionary, hegemonic worldviews are encoded within emerging technologies.

Tech critic and theorist Neil Postman (1995) wrote a warning about technology that informs the title of this talk: “What I am saying is that our enthusiasm for technology can turn into a form of idolatry and our belief in its beneficence can be a false absolute. The best way to view technology is as a strange intruder … its capacity for good or evil rests entirely on human awareness of what it does for us and to us.”

Recalling Postman’s (1995) admonishment to view emerging technology as a “strange intruder,” I will examine the “idolatry” that pervades many discussions of AI technologies today. There is “puffery and propaganda,” per Taylor (2018), surrounding contemporary discussion of AI, and we are going to peek behind the curtain.

Now let’s look at the technologies in action. For example, I created ‘Simon’. I built him one evening as an experiment for a course on fundraising and grant writing. Simon’s an artificial client you can interview to learn some of the basics of working with a client and doing project discovery.

Here’s what chatbots can do — and really all they can do. They can greet you. They can tell you about their organization, in Simon’s case, a bit about his donor audience and fundraising objectives. And that is it. Well, they can also tell you some cringey dad jokes about fundraising. How does Simon do all of that, you might wonder? Well, I wrote every single word of it.

It took me sitting down and thinking about what Simon needed to achieve for professional communication students and how they might interact with him. IBM Watson helps with this process somewhat, offering up a few suggestions based on probabilities, statistical information about common misspellings of the words you enter, recommendations for synonyms, and so on. But mostly, when you are building a chatbot as a programmer, all you have is an understanding of what you are trying to achieve, your anticipated customer/student/user questions, and your own creativity. It is a very human and very flawed process.

I also had to define the parameters of the “game” very strictly with students. I had to tell them what Simon can and can’t do. Because Simon is limited. So very limited. And, importantly, so are other chatbots. They can only spit out what they’ve been given, nothing more and nothing less. AI is looking for matches; that’s what it is good at. I’ll tell the system to recognize the words Hello, Aloha, Shalom, Ahoy-hoy, Hey, Hi and Sup (I always write those in my greetings section), and, having recognized one of those words, it will reward the user with a suitable greeting.

Creating these bots is all about creating ‘intents’ during the conversation, by thinking about what the users might ask or say, and then thinking about how to respond. Good copywriters will think of the many, many ways people might chat with chatbots. Then they’ll give their bots a personality with a number of varied, short, clear and even entertaining responses.
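The intent-matching mechanic described above can be sketched in a few lines of code. This is a minimal illustration of the pattern, not IBM Watson’s actual implementation; the intent names, example utterances and responses here are invented for the purpose of the sketch.

```python
import random

# A toy intent table: each intent pairs hand-written example utterances
# with hand-written responses. Every word a real chatbot "knows" was
# authored this way by a person anticipating what users might say.
INTENTS = {
    "greeting": {
        "examples": {"hello", "aloha", "shalom", "ahoy-hoy", "hey", "hi", "sup"},
        "responses": [
            "Hello! I'm Simon, your artificial client.",
            "Hi there! Ask me about my fundraising goals.",
        ],
    },
    "donors": {
        "examples": {"who are your donors", "tell me about your donor audience"},
        "responses": ["Our donors are mostly alumni and local businesses."],
    },
}

# The fallback fires whenever no intent matches -- the gap a human
# copywriter must later log in and fill.
FALLBACK = "I'm sorry, I don't understand. Could you rephrase that?"


def respond(utterance: str) -> str:
    """Return a canned response for the first intent that matches."""
    text = utterance.strip().lower().rstrip("!?.")
    for intent in INTENTS.values():
        if text in intent["examples"]:
            return random.choice(intent["responses"])
    return FALLBACK
```

A production system matches on statistical similarity rather than exact strings, but the authoring burden is the same: any utterance outside the hand-written examples falls straight through to the fallback.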

Now here’s a confession. When I ran a trial of Simon, it took a class full of very clever students a mere 10 minutes to ask Simon a question he couldn’t answer. I had to log into the back end of the chatbot system, find the question and write a variety of responses to fill that gap. And that’s the really big secret of AI natural language processing, that’s what is happening on a massive scale in the often overseas offices of Amazon and Google.

As you can see, the key is writing, writing and more writing. Astra Taylor (2018) coined the term “fauxtomation” to describe this process. There is a constant need to recalibrate these tools because of the highly variable ways in which real people in the real world communicate. The use of vast legions of often poorly paid, gig-based human labour is an absolute imperative to make AI technology seem like magic. Taylor writes, “Though automation is presented as a neutral process… one needn’t look that closely to see that is hardly the case.” Taylor argues that, with the rise of these magical tools, there is indeed more (human) work to do than ever.

There’s a wild west quality to the AI industry, and fauxtomation is something these companies absolutely don’t want you to know about.

Let me say this: you will never need to write as much, or as cleverly, as you will to build an effective chatbot. Even then, there’s no assurance users won’t break it in the first few minutes. That’s where human beings come in. Chances are that the next time you correspond with a purported chatbot, you just might be talking to a person, and quite likely a minimum-wage or very low-paid person. As Solon (2018) pointed out in a piece in the Guardian, it is often cheaper and easier to get humans to behave like robots.

Here’s an evergreen tweet from Gregory Koberger (2016), founder of ReadMe:

How to start an AI startup

1. Hire a bunch of minimum wage humans to pretend to be AI pretending to be human

2. Wait for AI to be invented

There have been some high-profile examples of fauxtomation at work. You’ve likely heard of the controversies over Amazon Alexa devices being hacked by third parties, families reporting strange voices talking to them, or these tools recording your conversations for Amazon researchers to use to “improve Alexa.” Edison Software, a San Jose, CA company, had its AI engineers look through thousands of unsuspecting users’ personal emails in an effort to better construct the software’s predictive responses. It isn’t ‘magic’ when you know everything about me and recommend AI ‘predictive text’ you know I’d like. Calendar services like X.ai and Clara had workers toiling 12 hours a day pretending to be bots to provide services for customers.

Make no mistake, everything you type or speak aloud into most of these systems is viewable by AI engineers. The more you type and say to these systems, the more they can calibrate their responses. They become richer and more sophisticated through thousands of person-hours of writing, writing and more writing. Some commentators have called AI chatbots a ‘Wizard of Oz’ technology: pay no attention to the man behind the curtain. Because make no mistake, there is a person behind that curtain. Indeed, many, many people.

As social shaping of technology (SST) theory illuminates, technologies are created by people, people with motivations to privilege themselves and exclude others. Oppressive, exclusionary worldviews are encoded within software research and development environments. There is a model of exploitation at the core of these technologies, and exploitation in the labour practices of these tech organizations. The faux robots are helping buoy up AI startups as they sell these ideas to the public and investors.

Bluntly, there’s money to be made in the fog of confusion and hype that swirls around AI currently. If they can convince us all to engage with, work with, use and invest in these technologies, they make money. They gain fame and power. The marketing of this technology contains within it these oppressive worldviews, and the ideology of technology as magic. The reality is that these technologies absolutely rely on the cheap labour of gig-economy employees, and the free labour of their users. The more you say, the more you interact, the more the people behind the curtain can improve upon the magic. You, as consumers, are part of an exploited class that is critical to the rise of automation.

I’m going to conclude with a story from 1770 and the invention of a robot that could play, and win at, the game of chess. It was called the Mechanical Turk, or Automaton Chess Player, created by Wolfgang von Kempelen, a Hungarian author and inventor. He made the “Turk” because he wanted, rather desperately, to impress Empress Maria Theresa of Austria.

The Turk played a mean game of chess. The Empress and others in power were extremely impressed. Fun fact: the Turk even played Benjamin Franklin.

An image of the Mechanical Turk, with a little man hiding inside the “computer”.

It wasn’t until decades later that it was discovered there was a very human chessmaster hiding inside the Turk, literally pulling the strings.

This chess-playing android, invented to curry favour with a monied class, won its inventor money, fame and a place in high society. The willing dupes of the public and the media of the day played along. The mysterious robot was welcomed by awe-struck crowds. Meanwhile, there was a human being inside, sweating, toiling and bringing the magic to life.

Indeed, Amazon signals some self-awareness of this quirky bit of technical trickery from our distant past: it has a service called Mechanical Turk, or MTurk for short, which lets corporations and researchers outsource work to gig workers. For example, I can test my chatbot with MTurk workers who are paid a penny for each interaction. And that’s typically the going rate: a U.S. penny. And if you aren’t American, you get paid in Amazon gift cards.

The Mechanical Turk is an apt analogy for our AI companions of today. The AI companions you invite into your homes, these strange intruders, assuredly, have real human beings inside, pulling the strings.

About the author

Tanya Pobuda is an interdisciplinary scholar, serious games designer, licensed drone pilot, AI chatbot specialist and VR/AR practitioner. Tanya is studying communication, representation in games, and simulation in higher education, training and communication at the X and York University Communication and Culture Doctoral Program. She holds a Master of Professional Communication (MPC) from Ryerson University and a Bachelor of Journalism, High Honours, from Carleton University.

Tanya has a certification in artificial intelligence chatbot design on the IBM Watson Natural Language Processing platform and is fully trained on Amazon Web Services’ Amazon Lex, Polly and Sumerian. She is a member of, and consultant for, the Ryerson Library Collaboratory, and is available to advise on natural language processing, augmented reality, game design, interactive storytelling, project management, Web design and communication projects.

Tanya has had a 27-year professional career in marketing and communications, Web design, and project management, beginning her career as a Toronto-based technology journalist and news editor. Tanya is a Project Management Institute (PMI) certified Project Management Professional (PMP), with a specialization in large-scale national and international change management, research and development, and community social innovation projects.


References:

Cakebread, C. (Jan 7, 2019). Who Will Win the Smart Speaker War in Canada?, eMarketer, Retrieved from https://www.emarketer.com/content/who-will-win-the-smart-speaker-war-in-canada

Cheepen, C., & Monaghan, J. (1997). Designing naturalness in automated dialogues — some problems and solutions. In Proceedings First International Workshop on Human- Computer Conversation, Bellagio, Italy.

Colby, L. (Aug. 8, 2017). Women and Tech, QuickTake, Bloomberg Businessweek, Retrieved from https://www.bloomberg.com/quicktake/women-are-underrepresented-in-the-high-tech-industry-globally

Hango, D. (Dec. 2013). “Gender differences in science, technology, engineering, mathematics and computer science (STEM) programs at university,” Insights on Canadian Society, Statistics Canada Catalogue no. 75-006-X.

Leeker, M. (2017): Intervening in Habits and Homophily: Make a Difference! An Interview with Wendy Hui Kyong Chun. In: Howard Caygill, Martina Leeker und Tobias Schulze (Hg.): Interventions in digital cultures. Technology, the political, methods. Lüneburg: meson press (Digital cultures series), 75–85. DOI: http://dx.doi.org/10.25969/mediarep/2092.

Mackenzie, D. & Wajcman, J. (1999). The Social Shaping of Technology. 2nd ed. Open University Press, ISBN 9780335199136.

McLuhan, M. (1951). The Mechanical Bride: Folklore of Industrial Man. Gingko Press.

McLuhan, M. and McLuhan, E. (1998), Laws of Media: The New Science. Toronto: University of Toronto Press.

Noble, S.U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism, New York University Press. ISBN: 9781479837243.

Postman, N. (1993). Technopoly: The Surrender of Culture to Technology. New York: Vintage Books. ISBN 978-0-679-74540-2.

Postman, N. (1995). The End of Education: Redefining the Value of School, New York: Knopf.

Research and Markets (2019). Global Natural Language Processing (NLP) Market Size & Share, Application Analysis, Regional Outlook, Growth Trends, Key Players, Competitive Strategies and Forecasts: 2018 to 2026, Retrieved from https://www.globenewswire.com/news-release/2019/03/29/1788623/0/en/Global-Natural-Language-Processing-NLP-Market-Size-Share-Application-Analysis-Regional-Outlook-Growth-Trends-Key-Players-Competitive-Strategies-and-Forecasts-2018-to-2026.html

Radeska, T. (Oct. 22, 2016). The Mechanical Turk: a chess-playing “android” from 1770, Vintage News, Retrieved from https://www.thevintagenews.com/2016/10/22/the-mechanical-turk-a-chess-playing-android-from-1770/

Solon, O. (July 6, 2018). The rise of ‘pseudo-AI’: how tech firms quietly use humans to do bots’ work, The Guardian, Retrieved from https://www.theguardian.com/technology/2018/jul/06/artificial-intelligence-ai-humans-bots-tech-companies

Taylor, A. (2018). The Automation Charade, Logic Magazine, Retrieved from https://logicmag.io/failure/the-automation-charade
