Chatbots just do what they’re told, don’t they?

My AI Brand
Sep 10, 2019


Image credit: ErikaWittlieb

More than half of employers have no written policy on the ethical use of AI or bots, even as AI chatbots play a growing role in shaping how customers perceive brands.

The increasing use of artificial intelligence in marketing automation and customer service means that more and more brands are beginning to rely on chatbots to handle incoming enquiries and customer support issues.

If you’ve implemented a new AI chatbot platform, your brand’s chatbot can be available to customers 24/7, respond instantly to queries and resolve up to 80 percent of questions without involving a human customer service agent. Human agents, however, are generally bound by contracts, employee codes of conduct and operating procedures. Do the same rules apply to your chatbot?

On the face of it, ensuring that chatbots follow company procedures would seem to be much easier than enforcing such procedures with your staff. After all, chatbots only do what they’re told. However, it is precisely because no one expects a chatbot to misbehave that the ethical implications of customer service chatbots are often overlooked.

Developing and deploying a chatbot platform, its bot instances and the conversational data required to make them work typically involves a client-side team and a chatbot solution vendor. When resources are stretched and timeframes tight, vendors can end up doing much of the work in isolation, necessarily making assumptions along the way in order to deliver the final chatbot solution. Whilst the final output might be technically sound, with clear customer journeys and answers programmed for all likely customer queries, it won’t necessarily comply with existing company policies or ethics codes.

In a recent survey by global customer experience and call centre technology vendor Genesys, more than half of the employers surveyed admitted that their firms do not have a written policy on the ethical use of AI or bots. The study found that, whilst 28 percent of employers were concerned that their companies could face future liability due to an unforeseen use of AI, 54 percent were not worried that AI could be used unethically by their companies or by individuals employed by them.

Whilst it is easy to spot some of the potential ethical and policy issues that need to be taken into account when deploying your brand’s chatbot, there are technical, procedural and human factors to consider, all of which will prove more valuable in the long term if they are written down in a comprehensive set of guidelines.

A common failing is that companies do not make it clear that customers are chatting with a bot and not a human. Customers normally figure this out sooner or later, but as conversational AI becomes more sophisticated it will become harder to tell a human interaction from a bot interaction. Telling users at the outset that they are talking to a bot is simply more honest.

Data privacy issues also come into play here. Whereas data submission forms may make data collection policies, user data rights and legal compliance quite clear, this is typically not the case with chatbots. What happens to the user’s conversation data at the end of the chat session? And are the data requests made by the chatbot compliant with your policies?

There are also much broader questions to consider that relate directly to your company’s brand, purpose and values. As Amir Shevat, former director of developer relations at Slack, has noted, companies have to ask themselves the big questions: has your chatbot been developed primarily to serve the company, or to serve its customers? If you know which it is and can explain why, you’ll be able to create ways for your bot to demonstrate this.

Unfortunately, even though your team and/or vendor may have worked diligently to fulfil the specified requirements for your new customer service chatbot, without written guidelines it may not be consistent with your brand’s ethics and policies. And it may be your customers who first alert you to that fact.

This feature was first published by Carrington Malin on LinkedIn.

Carrington Malin is an entrepreneur, marketing professional and advisor who has worked across almost every sector of technology. He helps companies, startup ventures and public sector organisations develop marketing strategies and digital initiatives, and leverage new marketing technologies. He also publishes a daily Asia AI News digest. You can connect with Carrington on Twitter @CarringtonMalin or via LinkedIn at https://lnkd.in/furZ3s9


My AI Brand

My AI Brand looks at the growing impact of AI-first brand communications on consumer behaviours, purchasing habits and sentiment.