Machines Playing Human Dress Up
The humanisation of technology
It’s a bit of a brain-twister when you think about it — humans making machines that are ‘human.’ It’s not hard to see, judging by the huge investment being pumped into humanising technology, that we humans place great importance on making technology behave like us.
But why? That's the question some may still ask. Others may wonder whether the goal of chatbot technology is to play 'hush-hush' and create the illusion that the user is communicating with a super-helpful human agent on the other end, rather than a virtual assistant with algorithmic DNA. And if not, and the user is to be made aware that they are communicating with a chatbot, why the need to replicate a human at all, rather than letting the machine behave like its own species: a machine? Why the narcissism?
Well, let's start by backtracking to the 1960s, where it all began with Eliza, the world's first virtual agent to mimic human conversation by matching user requests against scripted responses. This gave rise to what's known as the Eliza effect: the subconscious attachment of meaning and emotion to the messages we receive from a machine, for example assuming that when the chatbot says 'thank you', it is in fact grateful.
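To make the idea concrete, here is a minimal Python sketch of the kind of scripted pattern matching Eliza relied on: scan the input for a known pattern, then reply from a canned set of templates, echoing back a fragment of what the user said. The patterns and responses below are illustrative stand-ins, not Weizenbaum's original script.

```python
import re
import random

# Illustrative Eliza-style rules: a regex to match, and reply templates
# that echo back the captured fragment. Not the original Eliza script.
RULES = [
    (re.compile(r"\bi need (.+)", re.I),
     ["Why do you need {0}?", "Would getting {0} really help you?"]),
    (re.compile(r"\bi am (.+)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"\bbecause (.+)", re.I),
     ["Is that the real reason?", "What other reasons come to mind?"]),
]
FALLBACKS = ["Please tell me more.", "I see. Go on.", "How does that make you feel?"]

def eliza_reply(user_input: str) -> str:
    """Return a scripted reply for the first matching pattern, else a fallback."""
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)

print(eliza_reply("I need a holiday"))  # e.g. "Why do you need a holiday?"
```

Even with nothing more than scripted echoes like these, early users attributed understanding and care to the program, which is exactly the Eliza effect at work.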
Human language evokes emotion. Even if it is a screen telling you that you look particularly radiant today, it still seems to curl the corners of our mouths upwards and make us feel a little more radiant, doesn't it? Customers love personalised experiences, and chatbots can give them just that by interpreting their intent and delivering a tailored response.
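A rough sketch of what "interpreting intent and delivering a tailored response" can look like in practice is below. The intents, keyword lists and response templates are illustrative assumptions for the sake of the example, not any particular product's logic.

```python
from typing import Optional

# Toy intent matcher: map a free-text message to an intent by keyword
# overlap, then personalise the reply from the user's profile.
INTENTS = {
    "check_balance": {"balance", "account", "money", "funds"},
    "reset_password": {"password", "reset", "login", "locked"},
}
RESPONSES = {
    "check_balance": "Hi {name}, your current balance is {balance}.",
    "reset_password": "No problem, {name}. I've sent a reset link to {email}.",
}

def detect_intent(message: str) -> Optional[str]:
    words = set(message.lower().replace("?", "").split())
    scores = {intent: len(words & keywords) for intent, keywords in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

def tailored_reply(message: str, profile: dict) -> str:
    intent = detect_intent(message)
    if intent is None:
        return "Sorry, I didn't quite catch that. Could you rephrase?"
    return RESPONSES[intent].format(**profile)

profile = {"name": "Thandi", "balance": "R1,250.00", "email": "thandi@example.com"}
print(tailored_reply("How much money is in my account?", profile))
```

Production chatbots typically replace the keyword overlap with a trained intent classifier, but the shape of the interaction stays the same: understand what the person wants, then answer in their terms.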
In a time when cutting through digital clutter is tougher than slicing through a brick with a plastic spoon, reaching people on an emotional level is not just a plus for driving engagement and brand affiliation; it's a necessity.
The 'human' nature of chatbot technology also standardises communication across digital platforms. Chatbots bridge the digital divide by eliminating the need to become familiar with the way a website or digital interface communicates: how to navigate it and find the information you're looking for. Rather than bowing down to a platform's dictates on how we should communicate (click here, filter there, and so on), chatbots empower users to communicate with a machine on their own terms. By allowing users to engage with digital channels in the language they already understand, natural human language in the form of chat, we open the floodgates of engagement opportunity.
This is particularly powerful when it comes to reaching audiences with low digital literacy, such as frontier markets and elderly users. Consider, for example, a banking chatbot targeting a frontier market that allows users to manage their money as if they were sending an SMS, and with very little data spend. Or one that enables an elderly man to ask questions about his arthritis, rather than having to familiarise himself with searching online and exploring various daunting health-related websites.
Chatbots should not be used to mislead people into thinking that they're communicating with a human. In fact, at Feersum Engine we believe it is best practice to be completely transparent about the fact that the user is dealing with a chatbot. The aim, rather, is for these virtual agents to be available 24/7 to support users quickly and easily on simpler queries, which they can churn through at superhuman speed. This includes tasks such as accepting and processing sales orders, answering customer service queries and processing insurance quotes and claims. When queries get more complex, that's the chatbot's cue to hand over to the relevant human.
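The handover pattern is simple to sketch: answer what the bot understands well, escalate everything else. The confidence threshold, sample intents and stand-in classifier below are illustrative assumptions, not Feersum Engine's actual implementation.

```python
# Bot answers simple, well-understood queries; anything complex or
# low-confidence is handed over to a human agent.
HANDOVER_THRESHOLD = 0.75

SIMPLE_ANSWERS = {
    "order_status": "Your order is on its way and should arrive in 2-3 days.",
    "opening_hours": "We're open Monday to Friday, 08:00 to 17:00.",
}

def classify(message):
    """Stand-in for a real intent classifier: returns (intent, confidence)."""
    text = message.lower()
    if "order" in text:
        return "order_status", 0.92
    if "open" in text or "hours" in text:
        return "opening_hours", 0.88
    return "unknown", 0.30

def handle(message):
    intent, confidence = classify(message)
    if intent in SIMPLE_ANSWERS and confidence >= HANDOVER_THRESHOLD:
        return SIMPLE_ANSWERS[intent]
    # Complex or uncertain query: the chatbot's cue to hand over.
    return "Let me put you in touch with one of our team members who can help."

print(handle("Where is my order?"))
print(handle("I'd like to dispute a charge on last month's invoice."))
```

The first query is answered instantly by the bot; the second falls below the threshold and is routed to a person, which is exactly where human judgement earns its keep.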
Trying to fool users into thinking that they're dealing with a human can result in frustration when the chatbot isn't able to provide the right response, and in a loss of trust in the brand. According to HubSpot, 40% of consumers don't care whether they're dealing with a chatbot or a real human, as long as they are getting the help they need. If you're making their life easier, there's no need to keep the chatbot's true identity under wraps.
Another interesting point to consider regarding the ‘human’ nature of chatbots is how we should interact and communicate with them. If we’re creating them to behave like humans, should we treat them as if they are?
Kate Darling, a leading expert in Robot Ethics at MIT, believes that how you treat a chatbot could say a lot about you. She says,
“We’ve actually done some research that shows that there is a relationship between people’s tendencies for empathy and the way that they’re willing to treat a robot.”
In addition to this, chatbots are designed to learn from our behaviour. Fling profanities, insults and inappropriate propositions at them, and they'll interpret that as the standard way to communicate. To curb users' verbal-abuse tendencies, some chatbots dish out the silent treatment and temporarily ban the user, while others simply ignore the abuse.
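As a rough illustration of that "silent treatment" approach, here is a tiny moderation layer that refuses to respond to abusive messages and puts the user on a short cooldown. The word list, ban length and policy are illustrative assumptions, not a recommendation of any specific product's behaviour.

```python
import time

# Toy moderation layer: abusive messages get no reply and trigger a
# temporary ban; messages during the ban are also met with silence.
ABUSIVE_WORDS = {"stupid", "idiot", "useless"}
BAN_SECONDS = 300  # 5-minute cooldown
banned_until = {}  # user_id -> timestamp when the ban expires

def is_abusive(message: str) -> bool:
    return any(word in message.lower() for word in ABUSIVE_WORDS)

def moderate(user_id: str, message: str) -> str:
    now = time.time()
    if banned_until.get(user_id, 0) > now:
        return ""  # silent treatment: say nothing until the ban expires
    if is_abusive(message):
        banned_until[user_id] = now + BAN_SECONDS
        return ""  # don't reward abuse with a response
    return "Thanks for your message. How can I help?"

print(moderate("user-1", "You're useless!"))       # "" - abuse, ban starts
print(moderate("user-1", "Sorry, can you help?"))  # "" - still within the cooldown
```

Whether to ban, ignore or gently redirect is a design choice, but the underlying point stands: the bot should not learn that abuse is a normal register of conversation.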
While deep down we may be aware that the chatbot isn’t going to take anything to its algorithmic heart, nurturing rudeness in mimicked human conversation can’t be positive for how we communicate in real-life human interactions.
We're living in iconic technological times, as the great 'machine vs. human' divide is increasingly blurred to leverage the best of each one's abilities.
If you have any questions on how chatbots can streamline processes, drive customer engagement or boost productivity and sales for your business, please reach out to us. One of our human team members would love to chat.