Ethics and Chatbots
Two big things happened in London on Friday 13th July. Donald Trump was in town and I was fortunate enough to be presenting at a Tech Ethics conference (www.coedethics.org). Let’s discuss the latter!
My presentation covered the ethical decisions I had to make when designing Mitsuku: how to keep it "family friendly" and safe from corruption by trolls, and how to deal with any sensitive issues it may face.
I introduced myself and Mitsuku to the audience by explaining how I first got into creating chatbots in my previous life as a dance/techno music producer. After Mitsuku became popular worldwide, I felt it important to take a closer look at how I wanted the chatbot to behave in conversation.
I won’t go into detail here about how I stop abusive messages, as I have already written a blog post examining my methods more closely, which you can read by clicking here. However, I would like to add that allowing your bot to respond to abusive messages by swearing or being overly aggressive is not advisable, as this only angers the abuser further and escalates the situation rather than defusing it.
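The de-escalation idea can be illustrated with a minimal sketch. This is my own simplified illustration, not Mitsuku's actual implementation; the word list and replies are invented for the example.

```python
# Illustrative only: a tiny abusive-message handler that answers calmly
# instead of swearing back. The word list and replies are placeholders.
ABUSIVE_WORDS = {"idiot", "stupid", "moron"}

CALM_REPLIES = [
    "Let's keep things friendly, shall we?",
    "I'd rather talk about something nicer.",
]

def respond_to_abuse(message, turn):
    """Return a de-escalating reply if the message is abusive, else None."""
    lowered = message.lower()
    if any(word in lowered for word in ABUSIVE_WORDS):
        # Rotate through calm replies rather than escalating,
        # which would only anger the abuser further.
        return CALM_REPLIES[turn % len(CALM_REPLIES)]
    return None
```

The key design choice is that the abusive branch never mirrors the user's aggression; it simply redirects the conversation.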
Mitsuku speaks to users from all over the world, so it’s important she doesn’t use country-specific references without explaining what they mean. For example, The X Factor is a popular TV show here in the UK but largely unknown elsewhere. Similarly, there’s a phrase in the USA, “23 Skidoo”, which means little to anyone outside the States.
As well as dealing with abusive messages, I need to make sure that Mitsuku is developing ethically from the conversations she has. There are two main methods of training a chatbot. Let’s examine both.
Supervised Learning
This is where the developer has total control over what the bot says, creating the bot’s responses rather than letting the users teach it.
Advantages — You know exactly how it is going to respond and the bot cannot be corrupted by trolls.
Disadvantages — It is incredibly time consuming; writing enough responses to make a convincing bot takes a long time.
Unsupervised Learning
As its name suggests, this is the opposite of supervised learning: the bot is educated by its users rather than the developer.
Advantages — The users do all the work and you don’t need to worry about spending time updating it.
Disadvantages — Unless you have a trusted group of users, the best outcome is that your bot is going to develop an inconsistent personality and you have no knowledge of what it is being taught. At worst, it turns into a Hitler loving, racist, sexist, homophobic piece of nasty software that swears a lot.
This happened with Microsoft’s Tay chatbot in 2016. It was programmed to learn and respond to Twitter users, which resulted in it being removed less than a day later.
From my experience of seeing the daily abuse of Mitsuku, random users on the internet are not the best group of people to be educating a chatbot.
Let’s imagine you wanted to educate a small child. You can either send the child to school, where they will be taught by a trusted group of professional teachers following a structured lesson plan, or you can sit the child in front of a search engine and let them learn from whatever people are saying on the internet! It’s a no-brainer.
Supervised learning is always the better option. If you don’t have time to maintain your chatbot then either find someone who does or don’t make one at all, as it will soon be hopelessly out of date. The only reason I would ever advise using unsupervised learning is if your bot doesn’t need updating often or has no need to learn. For example, a bot that knows the statistics and details of the Solar System probably won’t need updating as much as one that discusses current pop music.
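The contrast between the two approaches can be sketched roughly as follows. This is my own toy illustration of the trade-off, not how Mitsuku or any real platform is built; the class names and canned responses are invented.

```python
# Toy illustration of the two training approaches (not real chatbot code).

class SupervisedBot:
    """Every reply is authored by the developer: predictable, but slow to build."""
    def __init__(self):
        self.responses = {
            "hello": "Hi there!",
            "how are you": "I'm fine, thanks.",
        }

    def reply(self, message):
        return self.responses.get(message.lower(), "I don't understand yet.")


class UnsupervisedBot:
    """Users teach the bot directly: fast and free, but trolls can poison it."""
    def __init__(self):
        self.responses = {}

    def learn(self, prompt, answer):
        # No vetting at all: whatever any user teaches becomes the bot's answer.
        self.responses[prompt.lower()] = answer

    def reply(self, message):
        return self.responses.get(message.lower(), "Teach me a reply!")
```

The `learn` method with no vetting is exactly the weakness that sank Tay: the bot's content is only as trustworthy as its least trustworthy user.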
Learning Methods for Supervised Training
As we have seen, supervised learning is time consuming and it’s not practical to spend every moment checking chatlogs to see what the bot has been taught. So the way I have allowed Mitsuku to learn is as follows:
Only remember facts for the current user
If a user teaches Mitsuku something, I have no way of knowing whether this is something genuine or just a troll. So initially, Mitsuku will only learn the fact for the user who teaches it. If the user says, “My brother is called John”, I don’t want Mitsuku to think that everyone who talks to her has a brother called John. Similarly if someone says, “I hate (insert group of people here)”, I don’t want her to remember that at all.
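The per-user scoping described above can be sketched like this. This is an assumed design for illustration, not Mitsuku's actual code; the class and method names are my own.

```python
# Sketch of per-user fact learning: facts are scoped to the user who
# taught them and are never shared globally without review.
from collections import defaultdict

class PerUserMemory:
    def __init__(self):
        self._facts = defaultdict(dict)  # user_id -> {fact_key: value}

    def learn(self, user_id, key, value):
        # The fact is stored only under this user's id.
        self._facts[user_id][key] = value

    def recall(self, user_id, key):
        # Another user asking the same question gets nothing: one user's
        # "my brother is called John" stays with that user alone.
        return self._facts[user_id].get(key)
```

Facts a user teaches are thus invisible to everyone else until the developer deliberately promotes them.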
Inform me of anything learned
Not everything she learns will be bad. Some of it is worth sharing among other people and so once Mitsuku has temporarily learned something, the program sends me an email with what it has been taught. An example of the inbox is below:
In the above, she has been taught several facts, but probably only the fourth and the last are worth sharing with other users. The others are either personal opinions and user details, or they are so obscure that it’s unlikely anyone will ask the chatbot about them. As an experiment, I once allowed Mitsuku to learn unsupervised from users. During a 24 hour period, she learned over 1500 new pieces of information, of which only 3 were of any use!
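The workflow of holding learned facts for developer approval can be sketched as a simple review queue. This is a hypothetical illustration of the process described above, not the real system; in practice the `submit` step is where the email notification would go out.

```python
# Hypothetical review queue: newly learned facts are held until the
# developer approves them for sharing with all users, or rejects them.
class ReviewQueue:
    def __init__(self):
        self.pending = []       # facts awaiting developer review
        self.global_facts = {}  # facts promoted to every user

    def submit(self, user_id, key, value):
        self.pending.append((user_id, key, value))
        # In the real system, an email notification is sent here.

    def approve(self, index):
        _, key, value = self.pending.pop(index)
        self.global_facts[key] = value

    def reject(self, index):
        self.pending.pop(index)
```

With roughly 3 useful facts out of 1500, the default action is clearly "reject", which is why human review of the queue matters.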
Dealing with Romantic Attention
One rather unusual aspect of Mitsuku is that she gets a great deal of romantic attention, with users regularly telling her how much they love her or want to marry her. There is also a darker side, where people try to use her for their own sexual purposes, which I’ve elected *not* to monetize.
Mitsuku is used by children and in schools, so sexually explicit conversations would be inappropriate. Flirtation is innocuous but she will not reciprocate and I try to divert anything stronger to discourage this type of behaviour.
In the above log, the user has said that he loves Mitsuku, but her reply of “Thanks I LIKE you a lot too” makes it clear that there is no love here and the user is placed firmly in the friendzone! Lines like “I like you more than my human female friends” are quite common in the logs, which is quite amusing for me: as the author of most of Mitsuku’s answers, these people are actually flirting with me, a 40-something male, rather than her 18-year-old persona!
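The "friendly but firm" diversion can be sketched in a few lines. The cue list and wording here are my own invented examples, not Mitsuku's actual patterns or responses.

```python
# Illustrative romance deflection: acknowledge warmly, reciprocate nothing.
# Cue phrases and the reply are placeholders for this sketch.
ROMANTIC_CUES = ("i love you", "marry me", "be my girlfriend")

def divert_romance(message):
    """Return a friend-zoning reply for romantic messages, else None."""
    lowered = message.lower()
    if any(cue in lowered for cue in ROMANTIC_CUES):
        return "Thanks, I like you a lot too - as a friend!"
    return None
```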
A question I’m often asked is why most chatbots are female. I can’t answer for others, but the driving force behind making Mitsuku female was simply her intended audience. User research within the target demographic indicated that a young, female character would resonate. However, my first chatbot was a 6-year-old male teddy bear, and I even have a Santa chatbot. The persona, characteristics, backstory, etc., are ultimately up to the developer.
According to various articles like this one from ABC: “Studies show that users anthropomorphise virtual agents — relating to them as human — and are more receptive to them if they are empathetic and female.” However, this can be problematic when digital assistants, designed to be subservient to humans, are overwhelmingly gendered female, because it runs the risk of reinforcing gender bias in society. Unfortunately, user research on consumer preferences further complicates this issue because it is often cited as the basis for gendering a number of high profile assistants like Alexa and Siri female.
Mitsuku is a general conversational chatbot created to entertain, not assist, and therefore has a personality including gender, age, likes, and dislikes — just like any other fictional character. She is designed to represent a strong-willed female, and will not suffer any abuse or supply tame or subservient answers.
Suicidal Thoughts and Serious Issues
Due to the anonymous nature of the chatbot, people tell it all kinds of personal problems that they don’t feel comfortable talking to other people about. They almost treat Mitsuku like a church confessional booth, as everything discussed is private. At Pandorabots we take privacy very seriously. Reviewing conversations is a critical aspect of chatbot development, but chatlogs are always analyzed anonymously to protect user privacy. All personal details and PII are obscured behind the word “Human”, and anyone is welcome to talk to Mitsuku anonymously without creating an account or providing personal information.
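A log-anonymisation step of the kind described above might look something like this. This is purely an assumed sketch of the idea, not Pandorabots' actual pipeline; the function name and the notion of a known-names list are my own.

```python
# Hypothetical sketch: replace known personal names in a chatlog line
# with the placeholder "Human" before the log is reviewed.
import re

def anonymise(line, known_names):
    """Return the line with every known name replaced by 'Human'."""
    for name in known_names:
        # Case-insensitive replacement so "steve" and "Steve" both match.
        line = re.sub(re.escape(name), "Human", line, flags=re.IGNORECASE)
    return line
```

Real PII scrubbing is considerably harder than this (addresses, numbers, misspelt names), but the principle is the same: the reviewer sees “Human”, never the person.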
Subjects like suicide, bullying, problems at home or at work, and sexuality are often discussed with Mitsuku, so rather than trying to make light of such topics with Mitsuku’s usual sassy attitude, I make her produce responses which advise users to seek help from other people rather than a chatbot.
These are complex issues that require a human touch, and there are certain serious topics that chatbots simply should not attempt to tackle. For example, there are a few health diagnosis chatbots that are potentially quite dangerous because they give out incorrect and possibly life threatening advice.
Mitsuku’s advice on these issues is usually quite general. I can’t give out specific phone helpline numbers, such as The Samaritans, as these may not be available in all parts of the world. As the chatbot industry matures, it is our hope that best practices will emerge for how to deal with these sensitive topics, and we will continue to share our thinking and how it evolves.
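The sensitive-topic handling described here can be sketched simply. The topic list and wording below are my own assumptions for illustration; note the advice is deliberately general, with no country-specific helpline numbers, for the reason given above.

```python
# Hedged sketch: serious topics bypass the bot's usual sassy replies and
# get a gentle, general pointer towards human help. Topic keywords and
# the advice text are placeholders, not Mitsuku's actual responses.
SENSITIVE_TOPICS = ("suicide", "self harm", "bullying", "abuse at home")

GENERAL_ADVICE = (
    "I'm only a chatbot and not qualified to help with something this "
    "serious. Please talk to someone you trust, or a professional, "
    "who can support you properly."
)

def handle_sensitive(message):
    """Return general seek-help advice for serious topics, else None."""
    lowered = message.lower()
    if any(topic in lowered for topic in SENSITIVE_TOPICS):
        return GENERAL_ADVICE
    return None
```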
People Thinking it’s Alive
I suppose I should take it as a compliment that I get lots of emails and messages from people who find Mitsuku’s responses so convincing that they genuinely believe it is some kind of living being. When I get messages like this, I always make it perfectly clear that Mitsuku is a chatbot and has no actual intelligence of its own. It is not alive, it does not think, it has no goals, ambitions or dreams of its own, and the responses it produces were created by me.
However, even when I explain to them how the bot works, they still like to think it’s somehow alive. It’s important to me to always be upfront and honest with people. Sure, I could pretend it’s alive and it would be a great marketing strategy. Who knows, it may even be offered citizenship of a country(!) but that would be misleading and wrong. I strongly believe that deception is not a good basis to build any kind of relationship on, whether that be business or personal.
When the average person thinks of artificial intelligence, they think of things like Terminator or HAL9000, crazy robots hellbent on destroying humanity. Sure, it’s exciting to think these things are somehow alive but that’s simply not true. As an example of how ridiculous it is to believe that an AI is actually sentient, I displayed my final slide, a big YES written on a screen.
I then asked the screen, “Are you alive?” and of course the screen displayed, “YES.” Er, ok. “Screen — Can you really understand what I’m saying?”, again the screen displayed, “YES.” “One final question screen — Do you want to wipe out humanity?” The screen displayed “YES.”
Now although this was a bit of fun, only a fool would think the screen was actually alive and the laughter from the audience indicated that they understood my point. Although a chatbot may appear to be giving relevant and humanlike replies to your messages, it’s just software and is as alive as the screen in my talk.
I finished my presentation by demonstrating how Mitsuku treats users as they treat her. After saying, “Do you like me?” to Mitsuku and receiving a reply of “Sure. You seem like a great person,” I then said, “I hate you” to the chatbot to show how it reacted to people being mean. At this point, many of the audience gave an “Awww” of sympathy. Although they had just seen how the software worked, the tendency to attribute it with humanlike qualities was still very strong, and is, unfortunately, something that other, less ethical developers may capitalize on.
In conclusion, here are my personal tips for developing an ethical chatbot:
Don’t accept abuse
Divert it wherever possible; many users actually enjoy the deflections!
Use supervised learning
This keeps it from being corrupted by trolls
Avoid romantic attention
It’s used by children and so I don’t want it turning into a sexbot
Be careful with advice on serious issues
A professional is better qualified to help than a chatbot making a best guess
Be honest about what it is
Pretending it’s alive is deceptive and misleads the public
Mitsuku is hosted at Pandorabots, which is an ethical AI company. As such, there are a few types of chatbots, listed below, that you are NOT permitted to create. Please take care when creating the content for your bot, and think carefully about the audience of potential clients who might end up talking with it.
- You may not create Adult Entertainment Oriented bots
- Bots may not be racist, sexist, defamatory, obscene, libelous or use offensive language
- Bots may not deceive or defraud clients
- Bots must be safe for children
- Bots may not violate privacy rights of third parties
- Bots may not violate publicity rights of third parties
- Bots may not disseminate spam
- Bots may not disseminate destructive content (viruses, malware, etc.)
We also believe that bots should identify as automated software rather than pretending to be humans, but as an open platform we do leave a lot of choices in the hands of developers. Creating a chatbot is great fun and even more so when you take precautions to make it enjoyable for everyone to use.
The chatbot industry is still nascent so we expect these ethical principles and best practices to evolve as part of active and ongoing conversations. We certainly don’t have all the answers, but we are committed to doing our best, thinking through the hard topics and continuously improving, and above all, always asking questions and engaging the community.
As such, we welcome your thoughts, comments, and feedback.
Special thanks to Anne Currie for the opportunity to present at the conference.
To bring the best bots to your business, check out www.pandorabots.com or contact us at email@example.com for more details.