Does Conversational AI Need a Hippocratic Oath?

Vivek Nallur
Inscripta AI
Aug 13, 2019

By: Team Inscripta

The word ‘AI’ conjures up a vision of a robot with a super-brain, faster and more capable than any human being. In reality, however, the closest most people come to AI-based technology on a daily basis is smart assistants (Siri, Alexa, Cortana, Google Search, etc.) and chatbots. The one thing these have in common is the ability to hold a conversation. Computers find this exceptionally hard to master (hence the ‘AI’ moniker attached to such systems). In contrast, most children can articulate a new thought, follow conversational threads, and respond to emotions fluently by the age of five. Because conversation comes so naturally to human beings, conversational interfaces are touted as the next big step in AI.

Many websites have added a “talk-to-us” option where customers (potential or actual) can chat with an agent. Sometimes it is obvious that the agent at the other end is a bot; at other times the bot takes on a human personality, introducing itself with a human name. These bots-with-personality are hugely in demand, with the potential market for customer-service chatbots estimated to be worth billions of dollars. From simpler tasks such as answering queries and responding to complaints, to more sophisticated ones such as suggesting products and services, chatbots are quickly becoming the primary interface between a company and its customers.

Chatbots and conversational agents are now ubiquitous

Ethical Problems

The currently accepted practice among developers who create a conversational agent (or chatbot) is to give it a human name and some personality. The reasoning goes like this: humans don’t like talking to dull people, so bots pretending to be human must have personality too. There are plenty of guides on how, and how much, personality to give these chatbots. However, some people are uneasy about this appropriation of human names and personalities, and suggest that the AI should announce itself upfront as a machine, i.e., behave honestly. They argue that adopting a name and a personality is the equivalent of lying by omission. While this seems like an easy problem to fix, the ethical problems (as always) are a bit more nuanced.

A bot may sound human, but it’s ultimately just a bot

The first big problem is the problem of anonymity. When humans talk to machines, they expect their conversations to be anonymous (since the bot is not a real person, surely it cannot personally identify them). This expectation is completely unfounded. A human being may understand that certain conversations are private and should be disregarded; a bot, however, will always keep a record of its conversations. If you have an argument with your spouse, and you have Alexa in the house, you can be sure that the argument has been recorded, even if you explicitly ask for it to be deleted. All these conversations create an extremely personalized profile of the user. For instance, if you use a personal finance application, a fitness-tracking app, or a mindfulness app that encourages you to share your stress, the amount of personal information available to the bot is quite detailed. Your implicit expectation of anonymity is therefore misplaced; the company owning the bot could sell this information to other companies, which may then use it to approve or deny loan applications, or to sell you new products. While companies may insist that they need to record conversations in order to design a better product, recent developments have shown that even the biggest companies, which claim to take privacy seriously, cannot be trusted to delete private data.

A related problem arises when system designers simply don’t care about users. Technical decisions such as storing unencrypted credit-card numbers along with associated name and address information, or making personally identifying information freely accessible to all staff, are well known to be bad practice. A chatbot may assure you that it is keeping your data safe and anonymized. But how many lay users have the ability to verify that the chatbot they are interacting with actually keeps their data safe?
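To make that design decision concrete, here is a minimal sketch of the kind of data-handling step a responsibly built chatbot might perform before a transcript ever reaches storage. It is purely illustrative: the regex patterns, the salt, and the field names are our own assumptions, not a description of any particular product.

```python
import hashlib
import hmac
import re

# Hypothetical illustration: minimise what a chatbot transcript reveals
# before it is written to storage. Patterns and salt are examples only.

CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")        # crude credit-card pattern
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")  # crude e-mail pattern

def redact(text: str) -> str:
    """Replace obvious PII with placeholders before the message is logged."""
    text = CARD_RE.sub("[CARD REDACTED]", text)
    return EMAIL_RE.sub("[EMAIL REDACTED]", text)

def pseudonymise(user_id: str, salt: bytes) -> str:
    """Store a keyed hash of the user id instead of the id itself."""
    return hmac.new(salt, user_id.encode(), hashlib.sha256).hexdigest()

if __name__ == "__main__":
    salt = b"rotate-me-and-keep-me-secret"   # in practice, a managed secret
    message = "My card is 4111 1111 1111 1111, reach me at jane@example.com"
    record = {
        "user": pseudonymise("jane.doe", salt),
        "text": redact(message),
        # The stored record should additionally be encrypted at rest and
        # access-controlled; that part is omitted here for brevity.
    }
    print(record)
```

None of this is hard to build; the ethical question is whether the company chooses to build it, and whether a lay user has any way of knowing.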

The second big problem is the problem of trust in optimality. Some bots may, either explicitly or implicitly, convey that they will help you make the optimal choice. This is particularly easy with elderly users, or when the number of choices is overwhelming. For example, if you want to buy a camera mainly for outdoor photography but with some indoor use, and low-light photography is not important, what aperture should you get: f/16, f/8, f/2.8, f/2, or f/1.4? What if the moisture-resistant feature is only available at f/2 and f/1.4, but an ISO speed of 6400 is only available at f/2.8? Depending on the order in which the bot asks you questions, you could easily be led to favour one option over another, as the sketch below shows. How can you be sure that the outcome is optimal for you?
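Here is a toy illustration of that ordering effect (the cameras, attributes, and prices are invented): applying the same elimination-style questions in a different order surfaces a different “recommended” product, even though nothing about the user’s needs has changed.

```python
# Hypothetical sketch: the order in which a bot applies its filtering
# questions determines which constraint dominates, and therefore which
# product survives. The catalogue below is invented for illustration.

CAMERAS = [
    {"name": "A", "aperture": "f/2.8", "weather_sealed": False, "max_iso": 6400, "price": 700},
    {"name": "B", "aperture": "f/2.0", "weather_sealed": True,  "max_iso": 3200, "price": 900},
    {"name": "C", "aperture": "f/1.4", "weather_sealed": True,  "max_iso": 3200, "price": 1400},
]

def ask_sealing_first(cameras):
    # Q1: "You shoot outdoors, right?" -> keep only weather-sealed bodies.
    shortlist = [c for c in cameras if c["weather_sealed"]]
    # Q2: "Shall we go with the cheapest of these?"
    return min(shortlist, key=lambda c: c["price"])

def ask_iso_first(cameras):
    # Q1: "You want the best available ISO, right?" -> keep top-ISO bodies.
    best_iso = max(c["max_iso"] for c in cameras)
    shortlist = [c for c in cameras if c["max_iso"] == best_iso]
    # Q2: "Shall we go with the cheapest of these?"
    return min(shortlist, key=lambda c: c["price"])

print(ask_sealing_first(CAMERAS)["name"])  # -> B
print(ask_iso_first(CAMERAS)["name"])      # -> A
```

Each path quietly discards options the user might actually have preferred, and the user has no way of seeing what was eliminated or why.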

A third problem is the ethical limit of up-selling. From a legal perspective, and from a sustainable-business point of view, a company is obliged to maximize value for its shareholders. Given this, a chatbot may be explicitly designed to nudge the customer towards a higher-margin product; this is called up-selling. Up-selling is not illegal, but depending on the pressure exerted by the seller, the behaviour involved may or may not be acceptable. A persuasive salesman is quite reasonably doing his job. However, if the same salesman uses intimate knowledge of your or your family’s health to nudge you emotionally, that would be unethical. Recall that a chatbot can be designed to gather historical and social-network data and to use your personal profile to guide its behaviour. The same behavioural tactic can thus move from ethical to unethical, depending on the circumstances.
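How easily that line is crossed can be seen in a deliberately simplified sketch (the products, scores, and weights below are invented): a single weight in the ranking function determines whether the bot is optimizing for the customer’s needs or for the seller’s margin.

```python
# Hypothetical sketch: one tunable weight shifts a recommender from
# "best fit for the user" to "best margin for the seller".
# Scores and margins are invented for illustration.

PRODUCTS = [
    {"name": "basic plan",   "fit_for_user": 0.9, "margin": 0.1},
    {"name": "premium plan", "fit_for_user": 0.6, "margin": 0.8},
]

def recommend(products, margin_weight):
    """Rank by a blend of user fit and seller margin (margin_weight in [0, 1])."""
    def score(p):
        return (1 - margin_weight) * p["fit_for_user"] + margin_weight * p["margin"]
    return max(products, key=score)["name"]

print(recommend(PRODUCTS, margin_weight=0.0))  # -> 'basic plan'   (pure user fit)
print(recommend(PRODUCTS, margin_weight=0.7))  # -> 'premium plan' (margin dominates)
```

The customer sees only the final suggestion; the weight that produced it is invisible to them.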

Yet another problem is the deliberate use of pretend personae. As mentioned earlier, chatbots nowadays tend to introduce themselves with a human name and a certain persona. It is well known that human beings respond, consciously or unconsciously, to race, religion, gender, and so on. A chatbot that uses a particular persona to make a user feel safe and welcome might be acceptable, but chatbots might also deploy personae to deliberately encourage or discourage certain segments of users based on their racial, religious, gender, or political identification. This, too, may cross the line of acceptable behaviour.

There are many such problems, and it is not always easy to figure out what the right thing to do is. When human beings themselves find it difficult to decide on the ethical course of action, it is not surprising that a chatbot finds it difficult, as well. A chatbot is ultimately a tool developed by a company. However, it is also pretending to be your friend. We believe that this friendship has to be earned, and you should be able to trust that the chatbot will do the right thing.

At Inscripta, we liaise with academics who investigate implementable machine ethics, to try to establish what the best (and technologically feasible) thing to do is. For starters, we are working on a statement of principles that we try to rigorously adhere to. Our bots will never be designed to knowingly abuse a customer’s trust. Inscripta is also investigating how training methodology affects behaviour with regard to ethics. Any client we consult with is able to tap into our experience in building trustworthy chatbots. What do you think an AI company should do to show that it is behaving ethically? Leave a comment on what you think are realistic (or feasible) steps.
