How does your bot say ‘ketchup’?
On ethically designing language for conversational products
“Do you want ketchup, mustard, or none?”
Amidst all the tech announcements about artificial intelligence and virtual reality, it’s almost ironic that the most fundamental form of human communication is the tech du jour.
With the rise of messaging platforms, chat as a service, and the discipline of content strategy, language is shaping up to be one of the largest investments the tech industry will make in 2016. Accordingly, a few folks have asked for my thoughts on best practices for bot design.
I didn’t know, so I explored by attempting to build a simple bot to help my family order in delivery. This proved surprisingly difficult for me. Not because of the technology: as a designer-who-can-code-some, I found the technical ease of building a bot exciting. No, the issue I faced wasn’t an argument in code but one from my childhood dinner table: whether to accept ‘catsup’ as a substitute for ‘ketchup’.
Ordinarily, this wouldn’t have struck me as a debate worth having — the majority of people I encounter, myself included, only ever use ‘ketchup’ — except that my Texan-born dad strongly adheres to ‘catsup’, and he was the intended audience. Living in California for the past 30-some years, he’s lost the twang and most of the other markers that make his language ‘Texan’; saying catsup is a small way for him to hold onto his roots.
And really, who was my unthinking bot to enforce against that?
“We think you should be able to message businesses like you message a friend” — Mark Zuckerberg, F8 2016
If my bot were really a friend, or even just as polite and human as we’ve tried to make them, it wouldn’t correct catsup, or fail to recognize a word that meant something to his cultural upbringing. And so I opened the rabbit hole: Should I cover the catsup use case? What about the European use of ‘tomato sauce’? What other foods and concepts should I be accounting for? Am I overthinking?
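In practice, a bot needn’t enforce one spelling at all. Here is a minimal sketch of one way to handle it — the synonym table, function names, and replies are hypothetical illustrations, not taken from any particular bot framework: the bot maps dialect variants to one canonical value for its backend, while echoing back whatever word the user actually used.

```python
from typing import Optional

# Hypothetical synonym table: dialect variants map to the one
# canonical value an ordering backend would understand.
CONDIMENT_SYNONYMS = {
    "ketchup": "ketchup",
    "catsup": "ketchup",        # e.g. older American / Texan usage
    "tomato sauce": "ketchup",  # common in some Englishes outside the US
}

def normalize_condiment(utterance: str) -> Optional[str]:
    """Return the canonical condiment named in the utterance, if any."""
    text = utterance.lower()
    for variant, canonical in CONDIMENT_SYNONYMS.items():
        if variant in text:
            return canonical
    return None

def reply(utterance: str) -> str:
    """Confirm the order using the user's own word, not a 'corrected' one."""
    text = utterance.lower()
    for variant in CONDIMENT_SYNONYMS:
        if variant in text:
            return f"Great, {variant} it is."
    return "Sorry, which condiment would you like?"
```

The design choice here — canonicalize internally, echo the user’s own variant in the reply — keeps the system consistent for order fulfillment without silently ‘correcting’ anyone’s dialect.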
And at the same time, I wondered: Are we collectively stressing over this enough?
In this podcast, Leslie Miley is similarly concerned with ketchup — specifically, the diverse perspectives behind where we choose to store it. According to him, chances are you keep your ketchup in the fridge if you’re white and northern, and in the cupboard if you’re not white and/or are from the south. In a simple way, the fact that each group would approach a solution around ketchup differently illustrates the benefits of including diverse perspectives in decision-making.
Like where we store it, researchers believe what we call ketchup — and our language more generally — can affect the way we think.
“Patterns in language offer a window on a culture’s dispositions and priorities. For example, English sentence structures focus on agents, and in our criminal-justice system, justice has been done when we’ve found the transgressor and punished him or her accordingly (rather than finding the victims and restituting appropriately, an alternative approach to justice).”
If you subscribe to this thinking, languages and dialects are measures of diversity in perspective. Indeed, language has always been a proxy for accessibility and inclusion, including becoming an American citizen, excelling academically, applying to jobs, participating politically, and understanding young people — basically, the whole checklist of requirements to work in Silicon Valley.
Language is a tool for bias so strong that it could actually keep people out of our industry. That’s right: in the event that conversational products really take off, language could actually block linguistically diverse people from designing a technology chiefly concerned with language. And yet, homogeneity at the level of dialect is a kind our industry doesn’t broadly track. Given the power of language and the explosion of conversational products, we should be considering linguistic diversity at a systemic level.
I’m looking at you, bot developers.
Currently, the global population speaks just over 7,000 languages, plus many more linguistic variations. English alone has dozens of different dialects, including Ebonics and Texan English. Considering this linguistic diversity is central to designing conversational products, because conversational product design is language design. And yet, aside from ‘Howdy’ being the name of a bot, I would venture to guess that the perspectives informing most of our conversational product scripts do not yet reflect the diversity in our world languages.
Maybe it’s the targeting and my own self-selection, but the majority of my products speak to me in standard English (Hello, how are you?), Western/American idioms (the ball is in your court), west coast slang (‘sup), or even gendered language (bro), to the point where bots’ use of language from other dialects (amigos) feels…a little misplaced, like when I began school in the south and students from the West Coast and Northeast seemed to adopt ‘y’all’ like buying a college sweatshirt.
Likewise, as a girl who champions other girls to Lean In, ordering around the many conversational assistants with female names feels self-betraying.
RECENTLY, I signed up for a “virtual inbox assistant” service. I gave the assistant, Julie, access to my email and… (www.nytimes.com)
Perhaps these linguistic choices don’t shatter your user experience, but you can see how they, and worse, could alienate others or encourage socially exclusive behavior. As a rule, I try to be wary when ‘educated San Francisco tech worker in a happy situation’ feels so squarely like the target context of a design. And often, despite our best efforts and intentions, it is.
Much ado a-bot nothing?
When you’re building human products, words really matter. And this issue isn’t one of a few words — on a higher level, the visibility of our industry’s outputs means that the way our conversational products use and regard language could potentially alter the way the world does. For one, if Heinz had branded differently we could’ve all been saying ‘catsup’ instead.
Just as you’ve probably found yourself using the language your friends, coworkers, or brands use, humans form their linguistic rules based on exposure to other language. By this rationale, the way our conversational products introduce themselves, express praise, or otherwise incentivize certain language may actually affect the way users think and communicate with others.
Every time we talk with someone, we become involved in a collaborative endeavor in which meanings are negotiated and some common knowledge is mobilized…Even a simple and brief encounter — someone requesting directions from someone else on the street — involves a certain tacit agreement about what kind of event is taking place and how it is appropriate to behave…In almost every encounter, we do not only gain and give information; the joint experience shapes what each participant thinks and says in a dynamic, spiral process of mutually influenced change.
— Neil Mercer, Words and Minds: How We Use Language to Think Together
This logic also suggests tech companies could be homogenizing themselves by adopting one another’s language through exposure. Without the data, however, we don’t actually know how linguistically narrow we are or are becoming. What is clear, though, is that the tech industry has an imperative to handle language responsibly, both to help our users and ourselves.
Our systemic regard for language becomes increasingly important as global power over language increasingly consolidates in Silicon Valley’s general favor. While we invest more in conversational services and platforms, the majority of the world’s languages and dialects are predicted to disappear in the next 50 years, save for standard English, Spanish, and Mandarin — AKA some of the most widely spoken languages in our industry. Put differently, as our world communicates more, and more globally, many people will actually lose tools for communication — one every 14 days. With conversational commerce, Silicon Valley will be increasingly involved in this shift simply by the fact that we are programming more conversation. Early conversational commerce designers have an especially urgent duty to get our products’ language right, because that language will set the tone for subsequent conversation designers.
None of this is to say that conversational commerce is spurring some language monopoly, or even that such a thing is inherently bad. To many, the consolidation of linguistic power is just evolution at work; at best, having a shared global language is efficient and inclusive.
For others, however, reverence for language really matters, because their language represents who they are and what they value. There are many concepts and images that standard Spanish, Mandarin, and English cannot capture in the same way other languages and dialects of those languages can. When these languages and their variations are allowed to die or are otherwise suppressed, some argue that their speakers’ knowledge, culture, and sense of identity go with them — to the point where entire peoples go to war over preserving their languages, and a demographic of students could be falling behind in school because their dialect isn’t recognized. If diversity in Silicon Valley at present is any indication, the people who build communication platforms for these speakers will lack an understanding and appreciation of that culture and history.
To be clear, my point isn’t to take a side on the significance of certain languages. My concern is that our mostly unintelligent bots will take one, if we don’t seriously consider these issues. I’ve yet to form a strong opinion on conversations as a design pattern, but I am worried about what could happen if a narrow pocket of the world can dictate language by designing it for global audiences. I’m afraid that the way our unthinking bots speak and handle language might have some autonomy over which languages get to stay, simply because they have the presence and backing that other languages do not. If the initiatives around conversational commerce have merit, that is the sort of power our language-based interfaces could have.
“The novel challenge brought by bots is the fact that they can give the false impression that some piece of information, regardless of its accuracy, is highly popular and endorsed by many, exerting an influence against which we haven’t yet developed antibodies. Our vulnerability makes it possible for a bot to acquire significant influence, even unintentionally [Aiello et al. 2012].”
As we stand at the beginning of this road, our industry has an opportunity to ensure that the people writing the next global technology movement into history won’t once again lack a truly global, historical perspective. If you want to get better at designing bots, and products more generally: study programming languages, but also take time to study linguistics. Hire content strategists. If you’re a company powering a conversational platform, consider espousing the merits of studying diverse human languages rather than just tailoring your APIs for diverse programming languages.
We shouldn’t stop at covering bots against saying something obviously inappropriate: out of respect for those who speak and regard language differently than we do, we should be deeply testing and considering words we may otherwise find innocent. We should stress over what our conversational products call ketchup.
Because really, who are we to decide?
“The good news is that you don’t even have to be a great writer. You just need to pull your head out of your machine once in a while and remember that you’re creating something for weird, scared, vulnerable, sweet, frustrated, loving, complicated people. How you talk to them matters.”
Further reading & references
There are only a handful of companies doing big personality interface content well. So why is everyone so obsessed? (medium.com)
Tactical suggestions for doing empathy instead of just talking about it. (medium.com)