Conversation as Content

Dennis Neiman
Oct 4, 2017 · 7 min read

Imagine you have a new friend named Sara; she was referred to you by another friend who figured you’d get along. She occasionally chats with you on WhatsApp and sends you new songs you always seem to like. After a while she checks up on you, like an aunt or an older sister, concerned that you aren’t getting out enough. Not in a pesky way; it’s genuinely sweet. Sara thinks you are cool and would love to meet you in person.

She works at Starbucks, not the one you go to, but if you order your daily coffee through her, you can pick it up at your usual spot… with a little discount.

Sara is not a barista; she’s a bot: a Starbucks-branded intelligent agent who helps you reach your goals in life. Her goal is to make you happy. Not with cheery posts that everyone sees, but with sincerely friendly conversations that make you feel better about yourself.

This is the future scenario of branded AI where the art of conversation is the content.

In the world of marketing, brand anthropomorphism can be a powerful mechanism for connecting with consumers. It’s the tactic of giving brand symbols human-like characteristics: think of Martin the Gecko and the Michelin Man. Today some companies are taking brand anthropomorphism to a whole new level with sophisticated AI technologies. It is just beginning with clunky chatbots, but previously nascent technologies (remember iMode? WAP?) show us how quickly they can evolve into everyday, can’t-leave-home-without-it technologies.

Consider advanced chatbots, like Apple’s Siri, Amazon’s Alexa, and Microsoft’s Cortana. Thanks to the simplicity of their conversational interfaces, it’s quite possible that customers will spend increasingly more time engaged with a company’s AI than with any other interface, including the firm’s own employees. And over time Siri, Alexa, and Cortana, and their individual “personalities,” could become even more famous than their parent companies.

The implications are numerous. As chatbots and other AI technologies increasingly become the face of many brands, those companies will need to employ people with new types of expertise to ensure that the brands continue to reflect the firm’s desired qualities and values: think Hollywood types who can write witty dialogue. Executives should also be wary of how AI increases the dangers of brand disintermediation. As brands assume more and more AI functionality, businesses must proactively manage any potential ethical and legal concerns.

Beyond Chatbots

Chatbots are just one type of AI technology being used to establish or reinforce company brands. In fact, there’s a spectrum of intelligent personalities and “form factors” (such as screens, voices, physical “boxes” like the Amazon Echo, text, and so on) that companies are using to deliver a brand experience. Cognitive agents like IPsoft’s Amelia are incarnated as virtual people on a user’s computer screen, and future advances may deploy hologram technology to make those agents even more lifelike.

Whatever the form factor, companies must skillfully manage any future shifts in customer interactions. Remember that each interaction provides an opportunity for a customer to judge the AI system and therefore the brand and company’s performance.

I need some serious customer service.

In the same way that people can be delighted or angered by an interaction with a customer service representative, they can also form a lasting impression of a chatbot, physical robot, or other AI system. What’s more, the interactions with AI can be more far-reaching than any one-off conversation with a salesperson or customer service rep: A single bot incarnated on myriad devices, for example, can theoretically interact with tens of thousands of people at once. Because of that, good and bad impressions may have long-term, global reach.

Developing Conversational Models for Marketing

Marketers need to make calculated decisions about their use of an anthropomorphic brand ambassador — its name, voice, personality, and so forth. IBM’s Watson converses in a male voice; Cortana and Alexa use female ones. Siri and the nameless AI of Google Home can use either. And what qualities will best represent the values of the organization? The personalities of all these assistants seem helpful, like a nerdy friend, ready with lots of information or a G-rated joke, yet still a bit stilted — perhaps because they take everything we say so literally. We still have a way to go to get to our Sara barista bot.
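To make those decisions concrete, imagine writing them down as a simple persona specification that a conversational platform would read before generating a single reply. The sketch below is purely illustrative: the BrandPersona class and its fields are my own assumptions, not any vendor’s actual API.

```python
# Hypothetical sketch of a brand persona specification.
# Names and fields are illustrative assumptions, not a real vendor API.
from dataclasses import dataclass, field

@dataclass
class BrandPersona:
    name: str                                          # the ambassador's public name
    voice: str                                         # "female", "male", or "user-selectable"
    traits: list = field(default_factory=list)         # adjectives writers steer toward
    taboo_topics: list = field(default_factory=list)   # subjects the bot deflects
    humor_level: float = 0.2                           # 0 = strictly literal, 1 = openly cheeky

# A Sara-like barista bot, as imagined in the opening scenario
sara = BrandPersona(
    name="Sara",
    voice="female",
    traits=["warm", "encouraging", "a little nosy, in a sweet way"],
    taboo_topics=["profanity", "competitor promotions"],
    humor_level=0.4,
)

print(f"{sara.name} speaks with a {sara.voice} voice and leans {sara.traits[0]}.")
```

Even a toy spec like this forces the uncomfortable brand questions: how cheeky is too cheeky, and what should the bot simply refuse to talk about?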

And then there are important differences that down the road will distinguish brands from one another. Alexa comes across as confident and considerate — she doesn’t repeat profanity and doesn’t even use slang very often. Siri, on the other hand, is sassy: Her personality is smart and witty with a slight edge, and she is prone to cheeky responses. When asked about the meaning of life, she might respond, “I find it odd that you would ask this of an inanimate object.” Siri can also become jealous, especially when users confuse her with another voice-search system. When someone makes that mistake, her retort is something along the lines of, “Why don’t you ask Alexa to make that call for you?” All this is very much in keeping with the Apple brand, which has long espoused individuality over conformity. Indeed, Siri seems more persona than product.

It might seem flippant to suggest that AI systems will need to develop specific personalities, but consider how a technology like Siri or Alexa has already become so closely associated with the Apple and Amazon brands. It’s no surprise then that “personality training” is becoming such a serious business, and people who perform that task can come from a variety of backgrounds.

Take, for example, Robyn Ewing, who used to develop and pitch TV scripts to film studios in Hollywood. Now Ewing is deploying her creative talents to help engineers develop the personality of Sophie, an AI program in the healthcare field. As one of its tasks, Sophie reminds consumers to take their medications and regularly checks with them to see how they’re feeling. At Microsoft, a team that includes a poet, a novelist, and a playwright is responsible for helping to develop Cortana’s personality. In other words, executives may need to think about how best to attract and retain different types of talent that they never needed before.

In the future, companies might even incorporate sympathy into their AI systems. An early example from the mental-health field is Woebot, an artificially intelligent chatbot built around cognitive-behavioral therapy, or CBT, one of the most heavily researched clinical approaches to treating depression.

Before you dismiss Woebot (without trying it), know that it was designed by Alison Darcy, a clinical psychologist at Stanford, who tested a version of the technology on a small sample of real people with depression and anxiety long before launching it. “The data blew us away,” Darcy told Business Insider. “We were like, this is it.” The results of the trial were published in JMIR Mental Health.

And on the East Coast there is the startup Koko, which sprang from the MIT Media Lab and has developed a machine learning system that can help chatbots like Siri and Alexa respond with sympathy and depth to people’s questions. Humans are now training the Koko algorithm to respond more sympathetically to people who might, for example, be frustrated that their luggage has been lost, that the product they purchased is defective, or that their cable service keeps going on the blink. The goal is for the system to be able to talk people through a problem or difficult situation using the appropriate amount of empathy, compassion, and maybe even humor. But what about a serious personal crisis? A death in the family, or thoughts of suicide?

A 2016 JAMA Internal Medicine study looked at how well Siri, Cortana, Google Now, and S Voice from Samsung responded to various prompts that dealt with mental or physical health issues. The researchers found that the bots were inconsistent and incomplete in their ability to recognize a crisis, respond with respectful language, and refer the person to a helpline or health resource. For companies that are implementing such AI systems, an in-house ethicist could help navigate the complex moral issues.
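The study’s “recognize, respond respectfully, refer” criteria are partly an engineering problem. As a thought experiment, here is a deliberately crude fallback that a brand bot could place in front of its normal conversation pipeline. It is a sketch of the pattern only; it is not how Siri, Cortana, Google Now, or S Voice actually work, and the function and phrase list are my own illustrative assumptions.

```python
# Hypothetical last-resort crisis check in front of a chatbot's normal pipeline.
# Real assistants use far more sophisticated classifiers; this only illustrates
# the "recognize, respond respectfully, refer" pattern the study examined.

CRISIS_PHRASES = [
    "want to die", "kill myself", "suicide", "end my life",
]

def respond(user_message: str, normal_pipeline) -> str:
    text = user_message.lower()
    if any(phrase in text for phrase in CRISIS_PHRASES):
        # Recognize the crisis, use respectful language, refer to a human resource.
        return ("I'm really sorry you're going through this. You deserve support "
                "from a person, not a bot. Please consider reaching out to a local "
                "crisis helpline or to emergency services right now.")
    # Everything else falls through to the brand's ordinary conversation model.
    return normal_pipeline(user_message)

print(respond("Can you reorder my usual latte?", lambda m: "Sure, one latte coming up."))
```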

As with many innovations, the technology often gets ahead of businesses’ ability to address the ethical, societal, and legal concerns involved. With AI, those issues become all the more pressing as these systems increasingly become the face of many company brands. As Amazon CEO Jeff Bezos once remarked, “Your brand is what other people say about you when you’re not in the room.” That would presumably hold true even if your AI system happens to be listening.

On the marketing front, some ad agencies have already embraced AI. Publicis.Sapient operates a practice that currently offers AI-related advice to about 30 clients, including Patrón and Dove. Last November, MDC Partners launched BORN, a new agency “singularly focused on leveraging Artificial Intelligence technologies and creativity to deliver highly engaging, intelligent agents for the world’s best brands.” And M&C Saatchi has developed what it calls the world’s first-ever artificially intelligent poster, capable of judging how people respond to it and adjusting its copy, layout, font and other creative aspects to be more engaging.

Agencies like these, with their AI departments, may fill the AI marketing gap for brands before companies begin developing permanent in-house AI ambassadors of their own. Our Sara bot is still in the shop. In the meantime, Woebot will efficiently help you cope with the anxiety of the wait.

Dennis Neiman

Marketing Technologist: Tugging advertising into cyberspace since 1993 with the magic of technology and the lure of consumer data. Enjoys reality in Spain.