What are some ethical considerations that chatbots raise, and how should they be addressed?

Joe Amditis
Published in NJ Mobile News Lab
3 min read · Mar 22, 2017

Chatbots are an exciting new frontier in machine learning, user engagement, and content distribution. As with any new technology, however, there are ethical challenges and implications that must be considered to ensure the technology is implemented in a responsible and impactful way.

This is especially true for technologies that involve a “set it and forget it” approach, as most chatbots do. Advances in machine learning, specifically as they relate to bots, can complicate matters further and lead to even greater consequences if the companies that employ them are not careful.

Tay ruins Microsoft’s day

One example of the dangers of free-form machine learning is Microsoft’s chatbot, nicknamed “Tay.” Tay was a Twitter bot designed to interact with users and learn from those interactions. Although the initial conversations were positive and friendly, trolls quickly descended on Tay and attempted to manipulate her algorithm, bombarding her with racism, misogyny, and other offensive content to see if she would imitate them.

She did.

It’s also important to consider the effects of delivering information to the public via a bot, which necessarily has a limited range of responses. Those responses are also pre-programmed by individuals who have biases and tendencies of their own, which could lead to additional concerns about impartiality, fairness, and manipulation if the call and response databases are not closely monitored.

Finally, Amir Shevat of TechCrunch reminds us that it’s important to ask the question, “Does this bot serve me, or the service provider?” For example, Shevat continues, “will the food-ordering bot recommend the pricey/low-quality items or the best-priced and quality food?” Does the limited nature of the bot’s responses lead to a reduction in the nuance and sensitivity contained in each response? It’s also important to consider where the bot sources its information and how it ensures those sources are free of their own undue bias or corruption.

These questions, and many more, are perfect examples of why it is important to maintain diverse human oversight and supervision of bots and their library of inputs and outputs.

Privacy, identity, and other ethical concerns

Privacy is another area of concern when it comes to the use of bots. It’s important to maintain the security of the bot’s input and output databases in order to avoid the loss of sensitive corporate information or private user information. Users need to know that the questions they ask and the interactions they have with your bots will remain private and secure.
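One common way to honor that promise (a sketch under assumptions, not a prescription for any particular bot platform) is to never store raw user identifiers alongside chat logs, keeping only a salted hash so a user's interactions can be grouped together without the log itself revealing who they are.

```python
import hashlib
import secrets

# Hypothetical sketch: pseudonymize user IDs before logging chatbot
# interactions, so the stored log alone cannot identify a user.

SALT = secrets.token_bytes(16)  # kept secret and rotated periodically


def pseudonymize(user_id: str) -> str:
    """Return a salted hash that stands in for the real user ID."""
    return hashlib.sha256(SALT + user_id.encode("utf-8")).hexdigest()


log = []


def log_interaction(user_id: str, message: str) -> None:
    # Store the hash, never the raw identifier, next to the message.
    log.append({"user": pseudonymize(user_id), "message": message})


log_interaction("alice@example.com", "What are your hours?")
log_interaction("alice@example.com", "Where are you located?")
```

Because the same salt is used for both entries, the two messages can still be linked to one (anonymous) user, but the email address itself never touches the database.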

Chatbot responses, and all other communications, should also include some level of empathy and sensitivity when it comes to interacting with users. Shevat even questions whether or not humans should be allowed to abuse bots, as well as whether or not bots should be able to abuse humans.

Gender and identity are two additional and important concerns for chatbot owners and operators. Should your bot be male, female, gender neutral, or perhaps entirely genderless? Genderless bots might be easier to imagine for English-speaking bots, but what about bots that speak Spanish?

What about your bot’s race? Should your bot have an identifiable race, ethnicity, or nationality? Is it possible to create a bot that is devoid of national, ethnic, or racial identity without inevitably reinforcing the dominant narratives about race and ethnicity that are already at play in the country or area where your users live?

These are important questions that companies need to answer before incorporating chatbots into their day-to-day operations and user interactions.

Joe Amditis is the associate director of the Center for Cooperative Media. Contact him at amditisj@mail.montclair.edu.

About the Center for Cooperative Media: The Center is a grant-funded program of the School of Communication and Media at Montclair State University. The Center is supported with funding from the John S. and James L. Knight Foundation, the Geraldine R. Dodge Foundation and Democracy Fund. Its mission is to grow and strengthen local journalism, and in doing so serve New Jersey residents. For more information, visit CenterforCooperativeMedia.org.

Associate director of operations, Center for Cooperative Media; host + producer, WTF Just Happened Today podcast.