Blurred Lines: Human Technology

While technology has long been criticised for its potential to dehumanise us, recent years have seen a shift of focus to our desire and ability to humanise it.

This isn’t just our tendency to fling profanities at our laptops or name our cars, but rather, the development of technology that acts, sounds and feels like us — human. We don’t just learn about technology, we’ve taught it to learn us. Sounds a little narcissistic, doesn’t it?

We’ve unleashed technology’s ability to study the intricacies of our thought, language and even emotional patterns through our interactions with it.

And it’s only getting better and quicker at it.

In the early 2000s, Amazon blew consumers away by pioneering personalised product recommendations, giving us a sense that it ‘understood’ what we like and need. Just a few years later, we’re not far from mistaking an algorithm for a human, as in the case of chatbots. In fact, Ray Kurzweil, a Director of Engineering at Google, has stated, “By 2029, computers will have human-level intelligence.”
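
To get a feel for the idea behind those early recommendations, here’s a minimal ‘customers who bought this also bought’ sketch. It’s a toy co-occurrence counter in Python, not anything resembling Amazon’s actual system, and the shoppers and products are made up for illustration:

```python
from collections import defaultdict
from itertools import combinations

# Toy purchase histories; shoppers and products are invented for illustration.
ORDERS = {
    "alice": {"kettle", "teapot", "mugs"},
    "bob": {"kettle", "teapot", "coffee grinder"},
    "carol": {"teapot", "mugs", "biscuits"},
}

def build_cooccurrence(orders):
    """Count how often each pair of products appears in the same basket."""
    counts = defaultdict(lambda: defaultdict(int))
    for basket in orders.values():
        for a, b in combinations(sorted(basket), 2):
            counts[a][b] += 1
            counts[b][a] += 1
    return counts

def recommend(product, counts, top_n=3):
    """'Customers who bought X also bought...', ranked by co-purchase count."""
    related = counts.get(product, {})
    return sorted(related, key=related.get, reverse=True)[:top_n]

counts = build_cooccurrence(ORDERS)
print(recommend("kettle", counts))  # e.g. ['teapot', 'mugs', 'coffee grinder']
```

Real recommenders layer far more signal on top of this, but the core intuition, learning from what people do rather than what they say, is what made those suggestions feel so personal.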

Every year, the Loebner Prize recognises the most human-like conversational AIs. Judges take part in text-based conversations with both a chatbot and a human in order to figure out which of the two has a heartbeat.

While it may seem like an easy task, bots that mimic typically human tendencies, such as repeating themselves, making typos, ignoring comments and displaying an odd sense of humour, have given their human counterparts some seriously stiff competition when it comes to convincing judges of their humanity.

One of the founders of the Loebner Prize found himself fooled for four months, interacting with a computer program that he was convinced was a real woman he met on an online dating site.

Give last year’s winner, Mitsuku, a try and see what you think.

Humans have been confiding in technology as far back as the ’60s, when computers were exclusive to places like NASA and were anything but personal. Eliza, the first chatbot, was created at MIT by Joseph Weizenbaum to demonstrate the superficiality of communication between humans and machines. Contrary to his expectations, many were convinced of her emotional intelligence. This would become the origin of what we know today as the Eliza effect: our tendency to unconsciously assume computer behaviours are analogous to human behaviours (Wikipedia).
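
To see just how superficial that communication really was, here’s a tiny ELIZA-style sketch, a hypothetical Python toy built on keyword rules and pronoun reflection rather than Weizenbaum’s original code:

```python
import random
import re

# Pronoun "reflection" so the bot can mirror statements back at the user.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

# Keyword-triggered response templates; {0} is the reflected remainder of the input.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i feel (.*)", ["Tell me more about feeling {0}.", "Why do you feel {0}?"]),
    (r"because (.*)", ["Is that the real reason?"]),
    (r"(.*)", ["Please tell me more.", "How does that make you feel?"]),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words, e.g. 'my job' -> 'your job'."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(user_input: str) -> str:
    """Return the first matching template, filled with the reflected match."""
    text = user_input.lower().strip().rstrip(".!?")
    for pattern, templates in RULES:
        match = re.match(pattern, text)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return "Please go on."

if __name__ == "__main__":
    print(respond("I need a holiday"))        # e.g. "Why do you need a holiday?"
    print(respond("I feel ignored at work"))  # e.g. "Why do you feel ignored at work?"
```

There’s no understanding anywhere in that script, just keyword matching and mirrored pronouns, yet conversations built on exactly this trick were enough to convince people that Eliza cared.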

With the rise of humanised technology come questions about the ethics associated with it. There’s no doubt that chatbots can be indistinguishable from a human; the question is whether they should be.

The general consensus seems to be in favour of being upfront about the fact that consumers are interacting with a bot. A lack of transparency leads to a lack of trust, which can have huge negative implications for any brand.

There are also many situations where people would be more comfortable talking to a bot, rather than an actual person — asking about a sensitive financial or health-related issue for example. Divulging information to a chatbot means not having to feel embarrassed or judged.

It’s interesting that both the fact that a chatbot isn’t human and the fact that it’s so ‘human’ in its ways can be so comforting. Deep, right?

To some, this may seem somewhat paradoxical: why create such human-like chatbots if we’re going to make it so clear that they aren’t human?

While bots use human language and deep learning to make our interactions more natural, they are still meant to be different from humans. Bots can scan through vast amounts of information in no time, respond in seconds and don’t need shuteye.

Ultimately, it’s not about tricking people; it’s about improving their user experience by delivering the benefits of technology in a form we quickly and easily understand: human language.

Humanised technology has been dubbed ‘the most important technology trend of the next few years,’ so don’t expect the bots to stop their chatter anytime soon. And if you haven’t already, perhaps it’s time you introduced yourself.