How to Meet User Expectations for Artificial Intelligence

Research-based recommendations to create a human-centered AI experience

Kathy Baxter
Salesforce Designer
9 min read · Apr 12, 2017


For those unfamiliar with artificial intelligence (AI), it is the simulation of human intelligence (i.e., learning, reasoning, self-correction) by a computer system or machine. AI has been around since the term was coined by John McCarthy at The Dartmouth Conference in 1956, but it has seen a recent resurgence in interest because there is now more data and computing power available than ever. Neural networks that couldn’t be trained a few years ago can now be trained overnight, thanks to that greater computing power.

Salesforce Einstein

Salesforce Einstein is applying AI across our CRM products to enable our customers to make better informed decisions and complete tasks faster. We want to make AI accessible to everyone, not just data scientists, because we believe that democratizing AI can improve everyone’s lives.

This is an exciting time for those working in AI, but there is a lot we still don’t know. New blog posts, articles, and news stories on the topic are published every day, but much of this coverage is speculation and opinion rather than empirical research.

To ensure we are making data-driven decisions, the Salesforce User Research and Analytics team conducted a literature review of internal and external user research on AI. Kathy Baxter, Greg Bennett, Yakaira Núñez, and Mru Kodali have combined our own research insights with published research from academia and industry to identify the core expectations users have when interacting with any AI system. Based on those expectations, we have provided recommendations for how AI systems should be designed with users in mind, regardless of tool (e.g., chatbot, case classification) or domain. This is a rapidly evolving field and as such, these expectations and recommendations will continue to evolve with technological advancements and user experiences over time.

1. Know your limits

No AI is perfect, especially when it is learning, so it’s important to know the limits of the system and to stay within the bounds of what it can do to support the user. Don’t allow the user to make errors or force them to remember specific words the system recognizes (Nielsen’s 10 heuristics). Overreach and the user will be disappointed at best, angry at worst.

The Assist chatbot clearly lists the things it can help with.

Recommendations

  • Don’t pretend to be human: Not all consumers know what a “chatbot” is so it is especially important to communicate that an artificial assistant isn’t human. Pretending to be a human sets the user’s expectations too high and they may become angry if they feel they have been “duped” by the system.
  • Be prescriptive: If you do not have the ability to intelligently support free-text, clearly and concisely communicate what is possible (e.g., “Which model are you referring to? A, B or C?”). Think internationally. You need to know how to respond if the user is trying to communicate in a language you cannot communicate back in.
  • Convey level of certainty: Using words like “suggested” or “hint,” or providing confidence scores, conveys how much confidence a user should place in the recommendations, so they are not overly disappointed when one isn’t correct (see the sketch after this list).
  • Take responsibility: Never blame the user when things don’t go as planned. “I’m sorry but I seem to be having difficulty” lets the user know the system realizes it is to blame, not the user.
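To make the “be prescriptive” and “convey level of certainty” ideas concrete, here is a minimal sketch in Python (not Einstein code) of a bot that only acts on a recommendation when its classifier is confident enough, and otherwise falls back to a short menu of what it can do. The intent labels, threshold, and classify_intent helper are illustrative assumptions, not any real API.

```python
# Hypothetical sketch: gate replies on a classifier's confidence, fall back
# to a prescriptive menu when uncertain, and take responsibility for failure.
# Intent labels, the threshold, and classify_intent() are placeholders.

CONFIDENCE_THRESHOLD = 0.75
SUPPORTED_INTENTS = {"track_order", "reset_password", "billing_question"}

def classify_intent(utterance):
    """Stand-in for an intent classifier that returns (intent, confidence)."""
    # A real system would call an NLP model here.
    return "track_order", 0.62

def respond(utterance):
    intent, confidence = classify_intent(utterance)
    if intent in SUPPORTED_INTENTS and confidence >= CONFIDENCE_THRESHOLD:
        # Hedge the recommendation by conveying the level of certainty.
        return f"Suggested next step for '{intent}' (confidence {confidence:.0%})."
    # Stay within the system's limits: be prescriptive, and take the blame.
    return ("I'm sorry, I seem to be having difficulty. I can help with: "
            "tracking an order, resetting a password, or a billing question. "
            "Which would you like?")

print(respond("where is my stuff??"))
```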

2. Establish immediate value

Once you know the limits of what you can achieve, you must ensure that you can still provide value to the end user and demonstrate it immediately. The tools/systems cannot simply be usable or entertaining; they must also be useful.

KLM’s chatbot allows customers to manage their travel needs

Recommendations

  • Speed is critical: It should go without saying that if the AI can’t help the user complete their task(s) faster and/or more accurately, there is no point in having it. Having to edit suggested responses or repeatedly request information wastes time and the user’s goodwill. Brevity and clarity are also key here. Users should not have to spend additional time trying to understand the recommendation.
  • Be accessible: It’s just as important for AI systems/tools to be accessible as any other product. Unfortunately, not all chatbots are accessible. Before launching your smart tool into the world, make sure it is universally accessible. These two posts are great resources to get you started: How to Describe Complex Designs for Users with Disabilities and 7 Things Every Designer Needs to Know about Accessibility.
  • Offer an escape hatch: One of Nielsen’s 10 heuristics is “user control and freedom.” When the system fails, the user needs a way out. There should always be a way to decline or undo any suggestions or changes the system makes. Chatbots should also let the user reach a human at any point simply by asking (e.g., submit a case, post to the user community), and sentiment analysis that detects an angry user should automatically connect them to a human (a minimal sketch follows this list). Forcing the user to engage with the AI, especially when it is wrong, will only result in the user avoiding the system (e.g., turning off the suggestions, abandoning the product).
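Here is a minimal sketch of the escape-hatch idea, assuming a placeholder sentiment scorer and hand-off routine: the user can reach a human by asking for one, and detected anger escalates automatically.

```python
# Hypothetical sketch of an "escape hatch": the user can always ask for a
# human, and detected anger triggers an automatic hand-off. score_sentiment()
# and route_to_agent() are illustrative placeholders, not a real API.

ANGER_THRESHOLD = -0.6
HUMAN_KEYWORDS = ("agent", "human", "representative", "person")

def score_sentiment(utterance):
    """Stand-in for a sentiment model returning -1.0 (angry) to 1.0 (happy)."""
    return -0.8 if "useless" in utterance.lower() else 0.2

def route_to_agent():
    return "Connecting you with a support agent now."

def handle_turn(utterance):
    text = utterance.lower()
    if any(word in text for word in HUMAN_KEYWORDS):
        return route_to_agent()          # the user asked for a human
    if score_sentiment(utterance) <= ANGER_THRESHOLD:
        return route_to_agent()          # the user sounds angry: escalate
    return "Got it. Here is a suggestion you can accept, edit, or undo."

print(handle_turn("This bot is useless!"))
```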

3. Build trust over time

It can take years to build your brand and earn your customers’ trust but in a world where anything can go viral, it can be seriously damaged in days or minutes.

DoNotPay is a lawyer bot that helps people fight unfair parking tickets.

Recommendations

  • Don’t be evil: When a group of scientists and technologists were recently asked about grand challenges for the 21st century, concerns around fairness, ethics, and protecting humans in AI systems were frequently cited. Users expect AIs to make the right recommendations in terms of not only accuracy but also ethics. A great example is DoNotPay, a lawyer bot that helps users fight unfair parking tickets, gets landlords to repair their tenants’ buildings, and helps refugees apply for asylum.
  • Don’t be creepy: You wouldn’t expect or want a new human administrative assistant to reschedule meetings with your leads or answer emails for you without your direction. The same is true for an AI assistant. Knowing too much is creepy, and inserting itself unrequested is annoying, especially when it is wrong. Any AI system should begin by making suggestions and limiting what it offers to do. Demonstrate learning over time and offer to do more as accuracy increases. One example of this is MuseBot, which recommends daily activities to enrich your child’s life. It can actually predict lifetime outcomes based on what it learns but does not share that with parents, since that could cause more harm than good.
  • Be transparent: Users are uncomfortable with a “black box.” They want to understand the inputs that went into making the recommendations and have the ability to edit incorrect information that could negatively influence the accuracy. Amazon’s recommendation system is a great example of showing customers why a particular recommendation was made and allowing them to edit the information to improve recommendations.
  • Take feedback: Users will not trust a system they cannot give feedback to. Users should be able to evaluate the accuracy of the AI’s predictions and tell it to course-correct, as well as govern how information is shared between parties (both internally and externally) when information is auto-synced (see the sketch after this list). Netflix allows customers to rate movies, and those ratings are then taken into consideration for future recommendations.
  • Have a guardian: Although some users will intentionally try to break your AI (e.g., Tay), most users expect companies to have safeguards in place to protect their data and to interact with them legally and ethically. Since AIs are designed to be constantly learning, they may stray considerably from the guidelines their programmers initially gave them. As the complexity increases, they will become “black boxes” to human understanding. It would be virtually impossible for any human to monitor these constant changes, understand, and determine if they are legal or ethical so having an AI guardian will be important to protect users and maintain trust.
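As a rough illustration of the transparency and feedback recommendations, the sketch below attaches the “reasons” behind a suggestion so they can be shown to the user, and records the user’s rating so future suggestions can adjust. The data model and helper names are assumptions, not any product’s API.

```python
# Hypothetical sketch of transparency plus a feedback loop: each recommendation
# carries the inputs ("reasons") it was based on, and the user's rating is
# stored so future suggestions can be adjusted. The data model is illustrative.

from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Recommendation:
    item: str
    confidence: float
    reasons: List[str]                  # shown to the user: why this was suggested
    user_rating: Optional[int] = None

    def explain(self):
        return f"Suggested '{self.item}' because: " + "; ".join(self.reasons)

@dataclass
class FeedbackStore:
    ratings: List[Tuple[str, int]] = field(default_factory=list)

    def record(self, rec, rating):
        rec.user_rating = rating
        self.ratings.append((rec.item, rating))  # a real system would learn from this

rec = Recommendation(
    item="Follow-up email template B",
    confidence=0.81,
    reasons=["This lead opened your last two emails",
             "Similar deals closed after a follow-up"],
)
print(rec.explain())
FeedbackStore().record(rec, rating=4)
```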

4. Respect social norms

Although a chatbot shouldn’t pretend to be a human, it must still respect social norms to avoid being perceived as “rude” or having uncomfortable interactions.

Joy can respond to the user’s mood

Recommendations

  • Convey listenership: If the bot is limited to canned responses and isn’t using natural language processing (NLP), it can reply to a user’s request with entire sentences instantly, but this instant response can feel rude, as if the system isn’t “listening” to the user. Speed is critical, but taking natural pauses and using varied language to show understanding of the request is expected as a courtesy. Consider including filler phrases such as “Ah, okay. Let me check,” “I’m still here, just checking,” or “Still getting results” to fill time during long-running lookups. Consider modeling turn-taking structure and phrasing on best practices for transactional interactions.
  • Don’t interrupt: No one likes to be interrupted. AI systems shouldn’t interrupt a user’s task and chatbots shouldn’t interrupt a user mid-speech.
  • Detect mood: Responding to a serious customer issue with smiling emoticons or jokes will only anger the user. Bots must detect emotion and respond accordingly, including escalating to a human when the user is angry.
  • Context matters: Users expect humans and systems alike to remember previous interactions and content. AIs should also leverage contextual information like location (e.g., airport), time of day (e.g., midnight), and issue type (e.g., flight cancellation) to prioritize and respond accordingly. Is the user looking for an entertaining exchange, or are they trying to immediately rebook a cancelled flight? The AI must be able to tell the difference.
  • Be polite: Many people say “Thank you” in response to a bot’s assistance. The bot needs to understand these types of courtesies and how to respond (e.g., “You’re welcome”). Equally, it should respond politely but firmly when the user becomes abusive, to avoid normalizing abusive interactions (a minimal sketch follows this list).
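The politeness recommendation can be as simple as a courtesy layer in front of normal intent handling. The sketch below assumes hypothetical word lists and canned replies; it is not a production approach to abuse detection.

```python
# Hypothetical sketch of courtesy handling: acknowledge thanks, respond firmly
# but politely to abuse, and otherwise defer to normal intent handling. The
# word lists and replies are illustrative only.

COURTESIES = ("thank you", "thanks")
ABUSIVE = ("stupid", "idiot", "useless")

def courtesy_reply(utterance):
    text = utterance.lower()
    if any(phrase in text for phrase in COURTESIES):
        return "You're welcome! Is there anything else I can help with?"
    if any(word in text for word in ABUSIVE):
        # Polite but firm, to avoid normalizing abusive interactions.
        return "I want to help, but I can only continue if we keep this respectful."
    return None   # no courtesy needed; fall through to normal intent handling

print(courtesy_reply("Thanks, that fixed it"))
```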

5. Have personality

Every system/tool we use should be a pleasure to use.

Starbucks’ bot matches their brand and uses beautiful photos of their products to communicate with customers.

Recommendations

  • Be delightful: Chatbots in particular should have a personality. Ideally, a chatbot should be as interesting to interact with as a human. We identify with characters that reflect back to us who we believe we are or who we want to be. The chatbot should be able to adapt to varying conversational styles and preferences and adhere to them. This includes how to respond when the user becomes aggressive or intentionally tries to break it. Ask AmEx gives a variety of options to choose from and goes above and beyond by reminding the user of upcoming holidays.
  • Be authentic: The voice and tone of your bot should match your brand. Users will pick up on inconsistencies (e.g., informal slang, jokes, and emojis from a very formal financial services company), which can raise alarm bells. Starbucks’ bot matches their brand and uses beautiful photos of their products to communicate with customers.

Einstein teams across Salesforce are leveraging these guidelines as they design and develop new features. These are living guidelines, meaning that we are constantly evaluating, adding to, and fine-tuning them. We’d love to hear what guidelines or insights your team has developed while building AI tools/systems!

Thank you Ian Schoen and Raymon Sutedjo-The for all of your feedback!

Follow us at @SalesforceUX.
Want to work with us? Contact uxcareers@salesforce.com.
Check out the Salesforce Lightning Design System.
