Hard questions about bot ethics

Amir Shevat
Slack Platform Blog

--

Bots are becoming part of our lives. I wake up in the morning and tell Alexa to play my Brazilian samba, I let Amy set my meetings, and I check stats and reports in Slack. Bot builders and users alike are starting to understand that bots are an integral part of daily life. But what rules govern these new tech friends?

Ownership

One big question people should ask, but often don't, is "does this bot serve me, or the service provider?" In other words, "does this bot have MY interests at heart, or someone else's?" Will the food-ordering bot recommend the pricey, low-quality items or the best-priced, best-quality food? Will the HR bot serve me or the company? Will the insurance bot help me file a claim or try to prevent it?

There are also IP issues here: who owns the materials/photos created by a bot that merges your photos into a collage? Who owns your shopping preferences?

Having a personal assistant bot hints toward user ownership, while talking to a representative bot hints toward service-provider ownership. Users and service providers can't always tell the difference, and more often they just assume and don't think about the question at all. Think about your Gmail or your photos on Facebook: who owns the data? The same question applies to our bots.

My take on ownership — I think there are cases in which user ownership makes sense, and others where it is clear that the service provider should claim ownership.

The key is to be very clear and transparent about who owns what and what terms of service users are opting into.

Privacy

Regardless of ownership, there is the matter of privacy — can a bot share information with other bots or human overseers? Should information be anonymized? Do users have the right to be forgotten? Basically, is there a user-bot confidentiality agreement?

My take on privacy — I think that, unless stated otherwise, there is an implied confidentiality agreement in which the bot is mandated to keep your personal and private information confidential (Chris Messina pointed out to me some exceptions like law enforcement or threats of self-harm). Transparency is key here as well — when submitting a bot to Slack, we require developers to create a privacy policy and to make it publicly available.

In general, bot builders should keep user information as private as possible.
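As a concrete illustration, here is a minimal sketch of one way to keep messages more private before they are logged or passed to another service. The regex, the placeholder token, and the function name are hypothetical examples, not a complete PII solution.

```python
# Sketch: redact obvious personal data (here, email addresses) before a bot
# logs a message or shares it with another service. Illustrative only.

import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def redact(text: str) -> str:
    """Replace email addresses with a placeholder so logs stay anonymous."""
    return EMAIL_RE.sub("[redacted email]", text)

print(redact("Ship the report to dana@example.com by Friday."))
# -> "Ship the report to [redacted email] by Friday."
```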

Use of data for advertisements

This is a subset of privacy and ownership, and a very important topic to discuss. Bot builders are still exploring ways to monetize bots, so: can a bot serve you ads? Can the bot use data that you provided, either directly or through the API, to optimize those ads for you?

My take on ads — A bot should not serve ads unless it has a strong, express purpose that benefits the user in doing so, and even then only on B2C platforms. I would hate to see bots become the new tracking pixel. Bots should not prompt users to click on or buy things unless explicitly asked to do so.

Ads in apps are against the Slack API terms of service, and that makes me rather proud.

Abuse and Empathy

This topic probably needs an article of its own. Because of their conversational nature, bots are much more prone to abuse. At a bot builders' gathering called Botness, most bot developers reported that people try all kinds of abuse, from cursing at the bot all the way to hitting on it.

This is a loaded topic and is actually bi-directional:

Should humans abuse bots?

Are bots just like any other object? Are they the new "punching bag" of the modern world? Should humans curse at and abuse a bot?

My take on a bot being abused — I think there is a subtle difference here between "can abuse" and "should abuse". At least until AI develops personality and feelings, you cannot really abuse a bot; the bot does not care, and your curses will most likely be filtered out alongside any other gibberish you might type. I do think that, as a society, we should not abuse bots. Abusing bots makes us more prone to abusing other humans, and that is clearly bad.

Humans should treat services with empathy; losing empathy is generally a poor trend for humanity. On the builder side, developers should ignore abusive language or reply with a polite canned response, as in the sketch below.
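To make that concrete, here is a minimal, framework-agnostic sketch of a polite canned response. The word list, the function name, and the reply text are hypothetical and would need tuning for a real bot and its platform.

```python
# Sketch of handling abusive input with a polite canned reply.
# ABUSIVE_TERMS, respond(), and the reply text are illustrative placeholders.

from typing import Optional

ABUSIVE_TERMS = {"stupid", "idiot", "hate you"}  # placeholder list, far from exhaustive

CANNED_REPLY = "Sorry you're frustrated. I can help with orders, reports, and reminders."

def respond(message: str) -> Optional[str]:
    """Return a polite canned reply for abusive input, or None to stay silent."""
    text = message.lower()
    if any(term in text for term in ABUSIVE_TERMS):
        # Never answer back in kind; either ignore or de-escalate politely.
        return CANNED_REPLY
    return None  # not abusive: hand off to the bot's normal intent handling
```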

Should bots abuse humans?

Can bots spam or harass humans? Can a bot harm a human? Or even answer back? Should a bot curse back? Does software have a right to defend itself?

My take on abusive bots — I have already written about the fact that bots should not harm humans; this includes spam, harassment, and any other form of hurt. I think that until bots become sentient through AI, there is no justification for bots to defend themselves from this type of abuse (security is a separate matter). Moreover, answering back is not the most effective way to make humans less abusive; simply replying "I cannot handle that request" or ignoring the abuse is probably more effective UX.

In general, I think empathy in conversation interfaces should be one of the pillars of bot design and a common best practice.

Gender and Diversity

Should the bot be a female bot or a male bot? Should we have racially diverse bots? Should we have religiously diverse bots?

My take on gender and diversity — I think developers should think very hard about diversity. Some bot developers think that bots should not have a gender. While this might work in English-speaking countries, it does not work in many other languages: in many languages everything has a grammatical gender, and you cannot refer to an object or a person without marking one. So, while in English a bot might be "it", in most of the world it cannot be.

Since a conversational UI implies a person on the other side, users will try to place the bot somewhere on the gender spectrum (along with other diversity attributes).

So, what should developers do? I think that, when applicable, developers should give users the choice of the bot's gender (and other identity attributes). An example of that is x.ai with its Amy/Andrew bot configuration; a minimal sketch of that kind of choice follows.
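Here is a small, hypothetical sketch of what letting users pick a persona could look like. The persona names, fields, and default are assumptions for illustration, not x.ai's actual implementation.

```python
# Sketch: let each user choose the bot's persona (name and pronouns) at setup.
# The Persona fields, the persona table, and the default are illustrative only.

from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    pronouns: str  # e.g. "she/her", "he/him", "they/them"

PERSONAS = {
    "amy": Persona(name="Amy", pronouns="she/her"),
    "andrew": Persona(name="Andrew", pronouns="he/him"),
    "neutral": Persona(name="Alex", pronouns="they/them"),
}

def choose_persona(choice: str) -> Persona:
    """Return the persona the user picked, falling back to a neutral default."""
    return PERSONAS.get(choice.strip().lower(), PERSONAS["neutral"])

# Usage: store the choice per user and apply it to every message the bot sends.
assistant = choose_persona("Andrew")
print(f"Hi, I'm {assistant.name} ({assistant.pronouns}), your scheduling assistant.")
```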

Human-Bot and Bot-Human Impersonation

Am I talking to a bot or a human? Is this bot trying to act like a human? Should users know, or care, whether they are talking to a human or to software?

My take on impersonation — I think there are major use cases, from health to finance, where it is critically important for end users to know whether they are talking to a human or a bot. Transparency also builds trust, making users more inclined to use bots. Additionally, a bot that tries to impersonate a human usually delivers a bad user experience.

In general, I think transparency is the best practice, and humans should not (as general guidance) impersonate a bot and vice versa.

Transparency and empathy as a cure for all maladies

Most of these questions are not addressed by the industry today. This is not because of bad intent; it is because of a lack of awareness. We are all still figuring this out as we go, learning from hundreds of developers about what works and what does not. With empathy and transparency, developers can address these issues and provide users with a delightful and ethical experience.


Amir Shevat is Head of Developer Relations at Slack.
