Say Anything? Behind the Scenes of Suggested Responses
Find out how product teams can create ready-made conversation replies through content design principles. (Co-authored by Talia Ledner.)
You’ve probably seen or used a suggested response, where a product suggests the next word or phrase you might want to type. For example, an email client might offer a few short, ready-made replies beneath a message you’ve received.
Even if you’re a designer who uses these features, you may not realize how much thinking goes on behind the scenes to create them.
In recent years, we’ve worked on a number of suggested response features across the Facebook family of apps and services. And while some suggestion features aim to help a person type more effectively or efficiently, such as spell check or autocorrect, ours aim to inspire or continue a conversation.
A suggested response carries a lot of weight — it’s what the Facebook company predicts someone may want to say, which consequently could shape the dialogue that people have across our platforms. That’s a big responsibility, and one we take to heart, given our mission to bring people and communities closer together.
Behind every suggested word, phrase, or visual is a team of people who strive to ensure that suggestions are helpful, appropriate, and contextually relevant for the content, using a combination of human filtering and machine learning technology to do so. This team includes people in a variety of roles: Product Designers, Engineers, Project Managers, User Experience Researchers, Content Policy Specialists, and us — Content Designers. Content Designers at Facebook craft the communication that appears throughout our products. We all approach writing with the Facebook voice in mind: simple, straightforward, and human.
Here, we provide an overview of how we create suggested responses, along with tips for content designers who work on similar projects.
What role does content design play?
Although we have hundreds of responses that could be appropriate for people to use, suggesting content implies endorsement by Facebook. That’s why we need clear guardrails around the responses we suggest.
And key to that work are Content Designers who craft the guidelines around what is OK and not OK to suggest. The most significant impact Content Designers make on a machine learning-driven product is shaping that guidance.
Just as Facebook applies standards to help keep people safe from harmful content, and technology-driven filters to detect inappropriate content in ads or search results, we Content Designers provide an additional layer of review over the types of suggestions a Facebook app or service should offer. This helps ensure suggestions conform to our highest standard of integrity.
With input from multiple teams, roles, and Content Designers across the family of apps and services, we prioritize building guidance for topics that could be harmful or offensive. The goal? To suggest content that is relevant and helpful by being mindful of what people actually want to say and understand.
How does Facebook create suggested response content?
Here’s a brief overview of how we develop these features and how practitioners can apply content design to ensure a better experience for people using their products.
Step 1: Pull aggregate comment data
How does it all start? A team of engineers extracts a massive data set of anonymous, aggregated comments that are most commonly posted. This sets the stage for the machine learning model, which relies on patterns in that data to trigger the most relevant comment suggestions for a given piece of content. For example, when considering suggested responses to a post of an animal, we can input data that basically tells the model, “Posts of animals should trigger suggestions like ‘Cute!’”
We also focus on removing phrases we should never suggest, such as “bblegghhh,” “i hate u,” or other unclear or inappropriate content.
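As a minimal sketch of this first step, the extraction plus removal logic might look like the following. The comment data and the `NEVER_SUGGEST` set here are hypothetical placeholders, not Facebook’s actual data or pipeline:

```python
from collections import Counter

# Hypothetical anonymized, aggregated comment data (illustrative only).
comments = ["Cute!", "Love it", "bblegghhh", "Cute!", "i hate u", "Wow!", "Cute!"]

# Phrases we should never suggest, per the guidance above (illustrative set).
NEVER_SUGGEST = {"bblegghhh", "i hate u"}

def top_candidates(comments, n=3):
    """Return the n most common comments, excluding never-suggest phrases."""
    counts = Counter(c for c in comments if c.lower() not in NEVER_SUGGEST)
    return [phrase for phrase, _ in counts.most_common(n)]

print(top_candidates(comments))  # ['Cute!', 'Love it', 'Wow!']
```

In practice this filtering would be far more sophisticated, but the shape is the same: count what people actually say, then drop what should never be suggested.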
Step 2: Filter the comment data
Using a set of defined guidance, Content Designers then remove other suggestions that break our guidelines.
We base our decisions on a playbook of dos and don’ts we’ve crafted for our products, which itself is derived from our community standards and other content standards. This playbook might make it OK, for instance, to suggest some established acronyms, such as “LOL,” but not less understood or inappropriate ones, such as “IDGAF.”
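A rough sketch of how playbook rules like these could be expressed as a filter. The specific allowlist and blocklist entries are assumptions for illustration, not the real playbook:

```python
# Hypothetical playbook rules distilled into code (illustrative only).
ALLOWED_ACRONYMS = {"LOL", "OMG"}
BLOCKED_PHRASES = {"IDGAF"}

def passes_playbook(suggestion: str) -> bool:
    """Apply simple dos and don'ts: block listed phrases, and only
    allow short all-caps acronyms that are well established."""
    text = suggestion.strip()
    if text.upper() in BLOCKED_PHRASES:
        return False
    if text.isupper() and len(text) <= 5:
        return text in ALLOWED_ACRONYMS
    return True

print(passes_playbook("LOL"))    # True
print(passes_playbook("IDGAF"))  # False
```

The real guidance is a human-readable playbook rather than code, but encoding the unambiguous rules lets the bulk of candidates be filtered automatically before human review.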
Step 3: Categorize the comments
Content Designers, in collaboration with Engineers, then develop a classification system for types of comments we could suggest for a type of post. This helps train the machine learning model to recognize similar content that should trigger similar suggestions. So we might group responses such as “Wow!” “Cool” and “Love it” together and categorize them as appropriate to suggest on most posts or stories that the machine learning model interprets as “vacation-themed.”
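The classification system above could be sketched as a mapping from response categories to the post themes they suit. The category names and theme labels here are hypothetical, standing in for the labels a real model would learn:

```python
# Hypothetical classification: response categories and the post themes
# that trigger them (illustrative only).
RESPONSE_CATEGORIES = {
    "positive_reaction": {"Wow!", "Cool", "Love it"},
}
CATEGORY_TRIGGERS = {
    "vacation-themed": ["positive_reaction"],
    "animal": ["positive_reaction"],
}

def suggestions_for(post_theme: str) -> set:
    """Collect all responses whose categories suit the theme the model
    assigned to a post."""
    out = set()
    for category in CATEGORY_TRIGGERS.get(post_theme, []):
        out |= RESPONSE_CATEGORIES.get(category, set())
    return out

print(sorted(suggestions_for("vacation-themed")))  # ['Cool', 'Love it', 'Wow!']
```

Grouping responses by category means new content only has to be classified once to be reused across every theme that category applies to.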
Step 4: Localize as needed
Next, we ensure that suggested responses are appropriate and understandable to the person receiving the suggestions, according to where they are in the world and what language they speak.
This involves working closely with our internationalization team to share our guidelines for the localization process. For example, an innocuous emoji hand gesture in some nations might be offensive in others, so we work with language managers representing different locales to ensure we vet suggestions with a culturally sensitive lens.
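One simplified way to model the outcome of that vetting is a per-locale blocklist maintained with language managers. The locale codes and blocked entries below are assumptions; the actual vetting is a human review process, not a lookup table:

```python
# Hypothetical per-locale blocklists produced by culturally sensitive
# review (illustrative only).
LOCALE_BLOCKLISTS = {
    "en_US": set(),
    "xx_YY": {"👌"},  # suppose this gesture reads as offensive in locale xx_YY
}

def vetted_for_locale(suggestions, locale):
    """Drop any suggestion a locale's reviewers have flagged."""
    blocked = LOCALE_BLOCKLISTS.get(locale, set())
    return [s for s in suggestions if s not in blocked]

print(vetted_for_locale(["Nice!", "👌"], "xx_YY"))  # ['Nice!']
```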
Step 5: Keep the machine learning model learning
An algorithm is always learning and adjusting based on the latest data inputs, so it’s part of our role on the product team to keep the model updated. Our teams monitor user reports, employee feedback, and current events, and make sure the model reflects necessary updates, including company-wide blocklists of controversial or offensive content.
For example, if an emoji becomes a controversial symbol, we might reconsider its inclusion in a suggestion set. Or, if a number of users send reports about a suggested response being a poor match for the post or story content, we’ll adjust the machine learning model accordingly, so it is less likely to match that response with a certain kind of post.
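That feedback loop can be sketched as a score adjustment: once a suggestion’s user-report rate for a kind of post crosses a threshold, its score is penalized so it surfaces less often. The threshold and penalty values here are invented for illustration:

```python
# Hypothetical feedback loop (illustrative only).
REPORT_RATE_THRESHOLD = 0.02  # assumed: reports per impression that trigger a penalty
PENALTY = 0.5                 # assumed: multiplier applied to a flagged pairing

def adjusted_score(base_score, reports, impressions):
    """Downweight a (post type, suggestion) pairing that users report often."""
    if impressions and reports / impressions > REPORT_RATE_THRESHOLD:
        return base_score * PENALTY
    return base_score

print(adjusted_score(1.0, reports=50, impressions=1000))  # 0.5
print(adjusted_score(1.0, reports=5, impressions=1000))   # 1.0
```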
On crafting guidance
Here are some of the factors we Content Designers consider while crafting guidance for suggested user responses.
We consider two main audiences while crafting guidance:
- The commenter is the person to whom we’re suggesting the content. Before a commenter selects a suggestion, it embodies the product’s voice. After a commenter selects it, it represents the commenter’s voice.
- The recipient is the person who receives the suggestion once it’s selected by the commenter. After the commenter sends a suggested response, the recipient can’t differentiate between what was a commenter’s custom response versus what the product suggested they say.
Art vs. text
Art expressions, such as an illustration of a cartoon animal or person, are obviously not created by the commenter, so they carry less weight than a text or emoji response.
In some cases, an illustrated or photographic version of a text response may be more appropriate to suggest than the text-only version. For example, it might feel more appropriate to suggest “I love you” as a sticker or as a caption on a GIF than “I love you” as text.
Slang and informal language
Slang can be tricky, alienating or potentially offensive to recipients, and can be challenging to get right for all locales and demographics. We avoid misspellings (for example, “woah!”) even if commonly used, but allow for acronyms and phrases that have been well-adopted into languages across the internet, such as, “LOL.”
Suggestions that assume race, ethnicity, color, national origin, religion, age, sex, sexual orientation, gender identity, family status, disability, or medical or genetic condition are likely to misaddress, offend, or discriminate, so we generally avoid any suggestion that requires these assumptions to be correct in order to be relevant.
Inappropriate language, profanity, or vulgarity
Would certain imagery or wording feel inappropriate, creepy, odd, or otherwise risky? If so, we generally recommend that when in doubt, take it out.
It’s important that suggestion sets don’t alienate people. To prevent that, we consider many different facets of diversity, including asking ourselves questions such as:
- Are suggestions biased toward stereotypes of gender or race?
- Do they communicate a Western bias?
- Are we suggesting references that only younger generations would be likely to understand?
- Would people with lower digital and reading literacy be able to understand the context?
- Are we suggesting instances of digital blackface?
We also ask whether we’re assuming or promoting negative dialogue with suggested responses. People may interpret illustrations of negative emotions as more appropriate than text that suggests, for example, sadness or surprise. Facebook suggesting you say, “What on earth?!” could feel quite alarming; suggesting you use the 😯 emoji might feel less so.
We’re mindful of timeliness and of how repeated use could feel dated on the platform over time, and strive to consider which locales get these suggestions to ensure they’re relevant and understandable.
The process of crafting guidance is never fully done, since human expression evolves every day, and so, too, must our suggested response content. For instance, an email platform’s auto-fill feature recently suggested the phrase “climate change,” but it’s increasingly accepted (and accurate) to say “climate crisis” instead. And phrases such as “Hey guys” or “That’s crazy” that could have felt innocuous years ago may no longer be colloquial or culturally appropriate today.
Because technology is fallible, we must also acknowledge that we won’t always get it right. Some platforms may still suggest antiquated phrases, and the most common phrases can still feel strange or funny when they’re suggested by Facebook. (You can see a lot of those examples in this Facebook group of people who comment only using suggested responses.) And even a well-vetted set of suggested responses could become inappropriate if they’re suggested in the wrong context.
The way a conversation can be completely reframed or redirected because of a single word choice is powerful — and a part of why thinking like a Content Designer is such a fascinating challenge.
We often consider Content Designers to be focused on the wording of a button, the user journey through a set of screens, the information architecture of an app, or the taxonomy of navigation labels, but at its core, content design is concerned with any form of communication that comes from our products.
Whether we write to explain something, to guide someone somewhere, or to emulate a response that a person might want to say, all words have to be chosen with care.
Tips for working on suggested response features
We’ve presented some factors we consider while crafting guidance for suggested user responses. But you might also benefit from our tips on how to approach that work.
Picture everyday scenarios
Once we’ve got a handle on the integrity and appropriateness of responses, we also have to think about variety. Most suggestions are used in everyday conversations, so to keep users engaged, we want to vary the responses. For special occasions like birthdays, people are used to repetitive “Happy birthday!” comments on their posts, but for other conversations, a range of suggested responses might make more sense.
Consider the worst-case scenarios
Think about the stress cases, the rare scenarios that could make our feature feel inappropriate or upsetting. For example, showing suggested content on a story about a tragedy is potentially offensive, regardless of whether our suggestions are accurate or relevant.
Pay attention to other products’ suggested responses
By interacting with suggested content on various tech products, you can get a feel for what seems natural and come to understand user expectations — and, conversely, sense what might be inappropriate or unnatural to suggest.
Own your human-centered non-technical role in a technical space
Working on products like suggested responses can sound overwhelming for a non-engineer, since these features rely on machine learning to get the targeting (who should get this suggestion) and the ranking (which three suggestions should be shown) right. But it’s important to remember that Content Designers have an elevated role in a space where it’s challenging to explain how something works: The more complex the feature, the more our empathetic, inclusive, and comprehensive thinking skills must shine.
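The ranking half of that work is conceptually simple, even if the model behind it isn’t. As a sketch, assuming the model has already scored a set of candidate responses (the scores below are made up), picking which three to show is just a sort:

```python
# Hypothetical ranking step: given model scores for candidate responses,
# show the three highest-scoring ones (illustrative only).
def top_three(scored_candidates):
    """Return the three highest-scoring suggestions."""
    ranked = sorted(scored_candidates.items(), key=lambda kv: kv[1], reverse=True)
    return [phrase for phrase, _ in ranked[:3]]

scores = {"Wow!": 0.91, "Cool": 0.88, "Love it": 0.85, "Nice": 0.60}
print(top_three(scores))  # ['Wow!', 'Cool', 'Love it']
```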
See this article and others like it at the new Facebook Design website.