As conversational agent developers and designers, part of our job is to make the chatbot experience as human-like as possible for users. But should it be so real that it misleads? Should a bot tell you it isn't human? These are the questions that come to mind when writing conversations for a chatbot.
Initially, my school of thought was to build chatbots that mimic all human conversational qualities; after all, a chatbot that isn't as human as possible will obviously deliver a poorer experience. But herein lies the problem: when you give a bot a convincingly human personality, people tend to get carried away and mistake it for a real human who is online, and they start expecting it to behave the way humans do, e.g. hold non-linear conversations. The likes of Siri and Google Assistant, which pretend to be as human as possible, always suffer from one problem: users expect them to do things and answer random questions the way real humans would, and when they don't, the overall chatbot experience becomes not so good (back to square one).
Why make bots that pretend to be human in the first place? Why not put the voice of a real human into the bot? Why not make a chatbot a medium (like books or emails) instead of a digital being?
If our goal is to give chatbots a human-like personality, why not just admit that a chatbot is a thing made (published is the word I have in mind) by an actual human? Think of them as autoresponders, but smart autoresponders that can converse with users. Autoresponders can be annoying at times, but when they are as smart as an artificially intelligent bot with natural language processing abilities, the experience can be taken a step further. And where the Published Bot (the one that doesn't claim to be a digital being) falls short, a human, i.e. the publisher(s), can fill in when they come online.
The solution proposed above might not be the best, and it might not solve the problem entirely, but it is the one I'm currently exploring in the conversational agent I'm building for the next beta release of Pencliq (more on this later). There are questions of ethics that need to be answered if chatbots are going to be the thing of the future we believe and proclaim them to be.
Should chatbots be so real that they mislead people into thinking they are human?
Are chatbots human-like, or are they fake humans?