How to Meet User Expectations for Artificial Intelligence

Research-based recommendations to create a human-centered AI experience

Kathy Baxter
Apr 12, 2017 · 9 min read

For those unfamiliar with artificial intelligence (AI), it is the simulation of human intelligence (i.e., learning, reasoning, self-correction) by a computer system or machine. AI has been around since the term was coined by John McCarthy at The Dartmouth Conference in 1956, but it has seen a recent resurgence in interest because there is now more data and computing power available than ever. Neural networks that were impractical a few years ago can now be trained overnight, thanks to that greater computing power.

Salesforce Einstein

This is an exciting time for those working in AI but there is a lot we still don’t know. New blog posts, articles, and news stories are published every day on the topic but much of it is more speculation and opinion than actual empirical research.

To ensure we are making data-driven decisions, the Salesforce User Research and Analytics team conducted a literature review of internal and external user research on AI. Kathy Baxter, Greg Bennett, Yakaira Núñez, and Mru Kodali have combined our own research insights with published research from academia and industry to identify the core expectations users have when interacting with any AI system. Based on those expectations, we have provided recommendations for how AI systems should be designed with users in mind, regardless of tool (e.g., chatbot, case classification) or domain. This is a rapidly evolving field and as such, these expectations and recommendations will continue to evolve with technological advancements and user experiences over time.

1. Know your limits

The Assist chatbot clearly lists the things it can help with.


  • Be prescriptive: If you cannot intelligently support free-text input, clearly and concisely communicate what is possible (e.g., “Which model are you referring to? A, B or C?”). Think internationally: you need to know how to respond if the user is trying to communicate in a language you cannot communicate back in.
  • Convey level of certainty: Using words like “suggested” or “hint,” or providing confidence scores, conveys how much confidence a user should place in the recommendations, so they are not overly disappointed when one isn’t correct.
  • Take responsibility: Never blame the user when things don’t go as planned. “I’m sorry but I seem to be having difficulty” lets the user know the system realizes it is to blame, not the user.
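The first two points above can be sketched as a simple response policy. Everything below — the function name, thresholds, and phrasings — is a hypothetical illustration of the idea, not part of any real product:

```python
# Hypothetical sketch: hedge a recommendation's wording based on the model's
# confidence score, and fall back to a prescriptive prompt when unsure.
# Thresholds and copy are illustrative assumptions.

def phrase_recommendation(item: str, confidence: float) -> str:
    """Wrap a prediction in language that matches its confidence level."""
    if confidence >= 0.9:
        return f"Recommended: {item} ({confidence:.0%} confidence)"
    if confidence >= 0.6:
        return f"Suggested: {item} ({confidence:.0%} confidence)"
    # Low confidence: be prescriptive instead of guessing for the user.
    return "I'm not sure I understood. Which model are you referring to? A, B or C?"
```

Note that the low-confidence branch also satisfies “take responsibility”: the system owns the uncertainty rather than blaming the user’s input.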

2. Establish immediate value

KLM’s chatbot allows customers to manage their travel needs


  • Be accessible: It’s just as important for AI systems/tools to be accessible as any other product. Unfortunately, not all chatbots are accessible. Before launching your smart tool into the world, make sure it is universally accessible. These two posts are great resources to get you started: How to Describe Complex Designs for Users with Disabilities and 7 Things Every Designer Needs to Know about Accessibility.
  • Offer an escape hatch: One of Nielsen’s 10 heuristics is “user control and freedom.” When the system fails, the user needs a way out. There should always be a way to decline/undo any suggestions or changes the system makes. Chatbots should also offer a way to reach a human at any point simply by asking for one (e.g., submit a case, post to the user community). Sentiment analysis that detects when the user is angry should automatically connect the user to a human. Forcing the user to engage with the AI, especially when it is wrong, will only result in the user avoiding the system (e.g., turning off the suggestions, abandoning the product).
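As a rough illustration of the escape-hatch logic above, here is a hypothetical routing function. The sentiment scale, keywords, and thresholds are all assumptions made for the sketch; a production system would use trained classifiers:

```python
def route_message(text: str, sentiment: float, bot_confidence: float) -> str:
    """Decide whether the bot answers, asks to clarify, or hands off to a human.

    sentiment: -1.0 (angry) .. 1.0 (happy); bot_confidence: 0.0 .. 1.0.
    Both would come from upstream models in a real system.
    """
    lowered = text.lower()
    if "human" in lowered or "agent" in lowered:
        return "human"      # user explicitly asked for a person: never refuse
    if sentiment < -0.5:
        return "human"      # detected anger: escalate automatically
    if bot_confidence < 0.5:
        return "clarify"    # don't force a likely-wrong answer on the user
    return "bot"
```

The explicit-request check comes first on purpose: even a confident bot should yield immediately when the user asks for a person.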

3. Build trust over time

DoNotPay is a lawyer bot that helps people fight unfair parking tickets.


  • Don’t be creepy: You wouldn’t expect nor want a new human administrative assistant rescheduling meetings with your leads or answering emails for you without your direction. The same is true for an AI assistant. Knowing too much is creepy and inserting itself unrequested is annoying, especially when it is wrong. Any AI system should begin by making suggestions and limiting what it offers to do. Demonstrate learning over time and offer to do more as accuracy increases. One example of this is MuseBot which recommends daily activities to enrich your child’s life. It can actually predict lifetime outcomes based on what it learns but does not share that with parents since that could cause more harm than good.
  • Be transparent: Users are uncomfortable with a “black box.” They want to understand the inputs that went into making the recommendations and have the ability to edit incorrect information that could negatively influence the accuracy. Amazon’s recommendation system is a great example of showing customers why a particular recommendation was made and allowing them to edit the information to improve recommendations.
  • Take feedback: Users will not trust a system they cannot give feedback to. Users should be able to evaluate the accuracy of the AI’s predictions and tell it to course-correct, as well as govern how information is shared between parties (both internally and externally) throughout the auto-sync of information. Netflix allows customers to rate movies, which are then taken into consideration for future recommendations.
  • Have a guardian: Although some users will intentionally try to break your AI (e.g., Tay), most users expect companies to have safeguards in place to protect their data and to interact with them legally and ethically. Since AIs are designed to be constantly learning, they may stray considerably from the guidelines their programmers initially gave them. As the complexity increases, they will become “black boxes” to human understanding. It would be virtually impossible for any human to monitor these constant changes, understand, and determine if they are legal or ethical so having an AI guardian will be important to protect users and maintain trust.
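The “take feedback” loop described above can be sketched as a minimal rating store. The class name and the simple up/down scoring rule are illustrative assumptions, not how Netflix or any other recommender actually works:

```python
from collections import defaultdict

class FeedbackStore:
    """Accumulate thumbs-up/down ratings so future predictions can course-correct."""

    def __init__(self):
        self.ratings = defaultdict(lambda: {"up": 0, "down": 0})

    def rate(self, item: str, liked: bool) -> None:
        """Record one piece of user feedback on an item."""
        self.ratings[item]["up" if liked else "down"] += 1

    def score(self, item: str) -> float:
        """Fraction of positive ratings; 0.5 (neutral) when there is no data yet."""
        r = self.ratings[item]
        total = r["up"] + r["down"]
        return r["up"] / total if total else 0.5
```

Even this toy version demonstrates the key property: the system’s behavior visibly changes in response to what users tell it, which is what builds trust over time.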

4. Respect social norms

Joy can respond to the user’s mood


  • Don’t interrupt: No one likes to be interrupted. AI systems shouldn’t interrupt a user’s task and chatbots shouldn’t interrupt a user mid-speech.
  • Detect mood: Responding to a serious customer issue with smiling emoticons or jokes will only anger the user. Bots must detect emotion and respond accordingly, including escalating to a human when the user is angry.
  • Context matters: Users expect humans and systems alike to remember previous interactions and content. AIs should also leverage contextual information like location (e.g., airport), time of day (e.g., midnight), and issue types (e.g., flight cancellation) to prioritize and respond accordingly. Is the user looking for an entertaining exchange or is s/he trying to immediately rebook a cancelled flight? The AI must be able to tell the difference.
  • Be polite: Many people say, “Thank you” in response to a bot’s assistance. The bot needs to understand these types of courtesies and how to respond (i.e., “You’re welcome”). Equally, it should respond politely but firmly when the user becomes abusive to avoid normalizing abusive interactions.
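A minimal sketch of the courtesy and abuse handling described above, assuming a toy keyword approach (a real bot would use a trained intent or toxicity classifier); the phrase lists and replies below are hypothetical:

```python
# Illustrative word lists only; not a real courtesy or abuse filter.
COURTESIES = {"thank you": "You're welcome!", "thanks": "You're welcome!"}
ABUSIVE = {"stupid", "useless"}

def social_reply(text: str):
    """Return a canned social response, or None to continue normal handling."""
    lowered = text.lower()
    for phrase, reply in COURTESIES.items():
        if phrase in lowered:
            return reply
    if any(word in lowered.split() for word in ABUSIVE):
        # Polite but firm, so abusive interactions aren't normalized.
        return "I want to help, but let's keep things respectful."
    return None
```

Returning `None` for ordinary messages keeps the social layer separate from task handling: courtesies get a human touch without derailing the actual conversation.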

5. Have personality

Starbucks’ bot matches their brand and uses beautiful photos of their products to communicate with customers.


  • Be authentic: The voice and tone of your bot should match your brand. Users will pick up on inconsistencies (e.g., informal slang, jokes, and emojis from a very formal financial services company), which can raise alarm bells.

Einstein teams across Salesforce are leveraging these guidelines as they design and develop new features. These are living guidelines, meaning that we are constantly evaluating, adding, and fine-tuning them. We’d love to hear what guidelines or insights your team has developed when developing AI tools/systems!

Thank you Ian Schoen and Raymon Sutedjo-The for all of your feedback!

Follow us at @SalesforceUX.
Want to work with us? Contact us.
Check out the Salesforce Lightning Design System.

Salesforce Experience and Design

A collection of stories, case studies, and ideas from Salesforce design teams


Written by Kathy Baxter

Architect, Ethical AI Practice at Salesforce. Coauthor of "Understanding Your Users," 2nd ed.

