“photo of girl laying left hand on white digital robot” by Andy Kelly on Unsplash

10 UI Patterns for Human-Friendly AI

Designing for AI-powered products poses new challenges as we shift our focus from task-oriented interfaces to goal-oriented machines. It’s easy to lose sight of users when we design technology that makes decisions on their behalf, which leaves many wondering: can a “thinking machine” be made user-friendly?


Machine bias is a hot topic in our industry. Since Daniel Kahneman and Amos Tversky revolutionized our understanding of human judgement, industries of all kinds have become preoccupied with mitigating cognitive bias and reducing the risks it poses to human rights. For the tech industry, AI has looked like a promising tool for that job.

Take Flow Health, for example: a company that uses AI to personalize medical decision-making by analyzing massive amounts of data. They promise to eliminate the biases physicians bring with them when diagnosing patients and planning treatments, improving medical outcomes as a result. It’s thrilling to think about how much good machine learning can bring to our everyday lives by compensating for some of the shortcomings of the human brain.

But machine learning depends on the data humans provide, so these systems can’t be immune to the biases in the data we feed them. Microsoft’s Tay bot, for example, turned from an innocent, happy, human-loving bot into a misogynistic, Nazi-sympathizing racist in less than 24 hours, because it learned from the Twitter users who interacted with it, and they chose to teach it the worst of humankind. Another example with far-reaching implications: software that assigned risk scores to future criminal behaviour learned to be biased against Black people from the historical data it was fed.

Machines aren’t perfect decision makers, and neither are we. But when machines and humans collaborate, biases can be monitored, detected, and managed better than ever before. Designing the interface that facilitates that collaboration is critical to transforming our products from stubborn assistants into user-friendly agents.

In many cases, including the examples below, AI-powered apps don’t produce devastating results when they make mistakes. It’s important to weigh the cost of false positives and false negatives in a product so we can determine which design patterns suit the risk at hand, but that’s a topic for another day; here we’ll focus only on the UI patterns.

With that in mind, here are 10 UI patterns for a user-friendly AI:

1. Criteria Sliders

In many products, we use machine learning algorithms to help predict an outcome or make recommendations. When the product is analyzing data sets that are invisible to the user, a criteria slider can help individuals adjust and fine-tune the recommendations based on criteria that are meaningful to them.

StyleFinder’s criteria slider allows users to fine-tune the application’s recommendations.

StyleFinder is a project developed by Myplanet’s Concepts team and powered by IBM Visual Similarity Search & Visual Recognition. The system allows users to connect their Pinterest boards to StyleFinder, then builds a profile of each user’s style affinity with relevant inventory recommendations.

Keep in mind that defining the criteria users can manipulate is an information architecture exercise. Just as we group data on a website, label categories, create menus, and test our IA with card sorting and other user testing methods, we need to make sure the criteria users can manipulate map correctly onto the data the machine uses in its algorithms. The underlying data and the behind-the-scenes algorithm may be far more complicated than what a user can understand and choose between, but that’s where user research comes in: it helps us define sliders that are meaningful and effective.
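To make the pattern concrete, here’s a minimal sketch of a criteria slider in React and TypeScript. The `Criterion` type and the assumption that the recommender exposes adjustable per-criterion weights are illustrative, not StyleFinder’s actual implementation:

```tsx
// A minimal sketch of a criteria slider, assuming the recommender exposes
// per-criterion weights the UI can adjust. Criterion and onChange are
// illustrative, not StyleFinder's actual API.
import React, { useState } from "react";

type Criterion = { id: string; label: string; weight: number };

function CriteriaSliders({
  criteria,
  onChange,
}: {
  criteria: Criterion[];
  onChange: (updated: Criterion[]) => void;
}) {
  const [values, setValues] = useState(criteria);

  const update = (id: string, weight: number) => {
    const updated = values.map((c) => (c.id === id ? { ...c, weight } : c));
    setValues(updated);
    onChange(updated); // the caller re-ranks recommendations with the new weights
  };

  return (
    <fieldset>
      <legend>Tune your recommendations</legend>
      {values.map((c) => (
        <label key={c.id}>
          {c.label}
          <input
            type="range"
            min={0}
            max={1}
            step={0.1}
            value={c.weight}
            onChange={(e) => update(c.id, Number(e.target.value))}
          />
        </label>
      ))}
    </fieldset>
  );
}
```

The design work is in choosing labels the user understands; mapping those weights onto the model’s actual features is the IA exercise described above.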

2. Like/Dislike Buttons

When a product makes recommendations based on an individual’s past behaviour, a feedback button allows users to fine-tune the recommendations.

Netflix users can provide feedback to the system via the like/dislike buttons and by rating movies.

Netflix gives you recommendations for movies it thinks you will enjoy watching. The system bases its judgement on movies you have watched in the past, but there are a lot of edge cases that can affect the accuracy of these recommendations. What if you shared your account with a roommate for a while? Or you have movie nights with friends where someone picks a theme each week, so you’ve watched several movies you didn’t really like?

Netflix’s recommendations aren’t perfect, but the feedback button lets users engage in a conversation with the product, fine-tuning the recommendations over time based not just on what they do, but also on what they say! And notice how Netflix doesn’t stop at the like/dislike button: it also lets users provide qualitative feedback like “I don’t like this because this movie is not for me.”
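The mechanics behind the button can be tiny. A rough sketch, assuming a hypothetical `/feedback` endpoint that feeds the recommender’s training signal:

```tsx
// A minimal sketch of the like/dislike pattern. The /feedback endpoint is
// hypothetical; a real system would fold these signals into its model.
import React from "react";

function FeedbackButtons({ itemId }: { itemId: string }) {
  const send = (signal: "like" | "dislike") =>
    fetch("/feedback", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ itemId, signal }),
    });

  return (
    <div role="group" aria-label="Rate this recommendation">
      <button onClick={() => send("like")}>👍 Like</button>
      <button onClick={() => send("dislike")}>👎 Dislike</button>
    </div>
  );
}
```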

3. Feedback Questionnaire

When a product makes recommendations based on an individual’s past behaviour, a feedback questionnaire allows the individual to provide richer qualitative feedback so that the system can fine-tune its recommendations more effectively.

Sometimes a simple yay or nay is not helpful. I’ve been using the 7 Minute Workout app for quite some time now, but the app hasn’t really learned my preferences when curating exercises, mostly because the context in which I exercise varies quite a bit. Sometimes I avoid a certain exercise because I’m recovering from an injury, or because I’m exercising right after breakfast and don’t want to jump up and down. A qualitative feedback questionnaire lets users explain why they don’t approve of or like a recommendation from the machine.

Google Translate [Left]: Users can provide meaningful qualitative feedback on a translation provided by the machine. Google Search Predictions [Right]: If users find a search prediction inappropriate, the system requires that they add meaningful insights into their feedback.
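One way to wire this up is to open a short questionnaire whenever a user dislikes a recommendation, so the system captures the why and not just the what. A sketch, with illustrative reason codes and the same hypothetical `/feedback` endpoint as above:

```tsx
// A sketch of a follow-up questionnaire shown after a dislike, assuming the
// same hypothetical /feedback endpoint. The reason codes are illustrative.
import React, { useState } from "react";

const REASONS = [
  "Not relevant to me",
  "I've seen it already",
  "Wrong for my current context",
  "Other",
];

function FeedbackQuestionnaire({ itemId }: { itemId: string }) {
  const [sent, setSent] = useState(false);

  const send = (reason: string) => {
    fetch("/feedback", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ itemId, signal: "dislike", reason }),
    });
    setSent(true);
  };

  if (sent) return <p>Thanks! We'll use this to tune your recommendations.</p>;

  return (
    <div>
      <p>Why wasn't this a good fit?</p>
      {REASONS.map((reason) => (
        <button key={reason} onClick={() => send(reason)}>
          {reason}
        </button>
      ))}
    </div>
  );
}
```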

4. Confidence Status & Tips

In many cases, not only do users not know how the system is making predictions, they also don’t know how much confidence the system has in those predictions. One way to overcome this is to have the machine talk!

A confidence status helps users decide how much trust they can put in a recommendation. This builds trust and also encourages users to help the system grow more confident by contributing more data.

The North Face app helps users narrow down their choices by asking a series of questions to determine their needs.

The North Face app asks users a series of questions to narrow down their options. Once users have answered them all, the app compares their requirements against the available options and displays a list of recommendations. These options don’t always perfectly match a user’s criteria, but the UI includes a confidence status to help the user choose between the best match and the other options.

Netflix also shows a match percentage to help users choose between options with confidence.
Photofeeler allows users to see a confidence status as well as tips on how they can narrow down the confidence interval.

Photofeeler is an application that lets you upload pictures and have voters rate your photos on relevant criteria. Photofeeler relies on artificial intelligence to monitor the quality of votes and to detect voter fraud, and it analyzes votes based on factors like voter style to optimize the accuracy of results. When you receive your ratings, the UI shows a confidence-interval tooltip with notes on how you can increase the confidence.
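A confidence status can be as simple as surfacing the model’s score along with a tip on how to improve it. This sketch assumes each recommendation arrives with a score between 0 and 1; the thresholds and copy are illustrative:

```tsx
// A sketch of a confidence badge, assuming the model returns a score in
// [0, 1] with each recommendation. Thresholds and copy are illustrative.
import React from "react";

function ConfidenceBadge({ score }: { score: number }) {
  const percent = Math.round(score * 100);
  const tip =
    score >= 0.9
      ? "Strong match."
      : score >= 0.6
      ? "Good match. Rating more items will sharpen this estimate."
      : "Low confidence. Tell us more about your preferences to improve it.";

  return (
    <span title={tip} aria-label={`Confidence: ${percent}%`}>
      {percent}% match
    </span>
  );
}
```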

5. Link to How the System Works

Sometimes users want to know how the machine thinks: what science is used to train it, what data inform its insights, and how precise its predictions are. Just as in everyday life, where we collaborate with other humans and fill in each other’s gaps by understanding one another’s thought processes, it’s important to provide that information to users of an AI-powered app.

IBM Watson’s Personality Insights tool provides a link to a behind-the-scenes page where the science behind the analysis is explained for users.

Personality Insights uses linguistic analytics to infer individual characteristics from written content such as emails, blogs, tweets, and forum posts. These findings can be used to better understand consumer behaviour and customers’ needs and wants. This is an example of an agent that thinks in the background: there isn’t much a user can do to monitor the process beyond providing the source data, but they can review what happens behind the scenes and make a more informed decision about using the product for their own needs.

6. Informative Modals

Many AI-powered apps rely on information the user provides while setting up their profile. Informative modals can be used to show users the impact of their decisions when they choose to dismiss a data point that might be critical to a high-confidence outcome from the machine.

Reflectly users are presented with statistical data that can inform their decision about setting up reminders.

Reflectly is a journaling app that uses AI to provide daily reflections and thoughts when you’re going through a tough time. When users first set up a profile, they choose whether or not to receive reminders. If they opt out, the app presents them with more information about why they might want to reconsider. Even though in this case the modal is used to incentivize users to return to the app, using modals to present the data and rationale behind why the machine believes one decision is better than another can effectively prevent users from making mistakes.
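A sketch of the pattern, triggered when a user dismisses a data point the model relies on. The rationale copy here is placeholder text; in a real product it should be backed by actual statistics, as Reflectly’s is:

```tsx
// A sketch of an informative modal shown when a user opts out of a data
// point the model relies on. The rationale copy below is placeholder text;
// a real product should back it with actual statistics.
import React from "react";

function OptOutModal({
  onConfirm,
  onCancel,
}: {
  onConfirm: () => void;
  onCancel: () => void;
}) {
  return (
    <dialog open aria-labelledby="optout-title">
      <h2 id="optout-title">Are you sure?</h2>
      <p>
        Reminders are one of the strongest signals the app has for learning
        when and how to support you.
      </p>
      <button onClick={onCancel}>Keep reminders on</button>
      <button onClick={onConfirm}>Turn them off anyway</button>
    </dialog>
  );
}
```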

7. The “Why” Insight

Sometimes it’s critical to tell users why they’re seeing a specific piece of AI-generated content: knowing why lets them decide whether to trust the content, or adjust their own behaviour to receive better results.

Instagram clearly labels why content is suggested to users: “Based on people you follow”

Instagram’s discover feature curates a list of videos, stories, and photos based on videos you have watched, people you follow, posts you have saved, and so on. The “Why” hint adds context to the generated content.

A tooltip clarifies why the algorithm has adjusted the data.
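The UI side of this pattern is trivially small; the hard work is getting the ranking service to emit a human-readable reason with each item. A sketch, assuming the reason arrives in the recommendation payload:

```tsx
// A sketch of the "why" insight, assuming the ranking service returns a
// human-readable reason alongside each recommended item.
import React from "react";

type Recommendation = { id: string; title: string; reason: string };

function RecommendedCard({ item }: { item: Recommendation }) {
  return (
    <article>
      <h3>{item.title}</h3>
      {/* e.g. "Based on people you follow" */}
      <small>{item.reason}</small>
    </article>
  );
}
```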

8. Risk Alert

Users should receive warnings about potential risks associated with the predictions or assessments the system has made. Ideally, we should also tell them how they can mitigate those risks.

Ada warns users to take the diagnosis results with a grain of salt.

Ada is an AI-powered app that lets users report physical symptoms relating to their health; based on their pre-created profile, it analyzes the symptoms to suggest a diagnosis. There are many ways the diagnosis could go wrong: the user could have entered the wrong data, for example, or failed to report all of their symptoms. The app recognizes this risk and alerts users with a message about it so they can make informed decisions about their reports.
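A sketch of a risk alert attached to a model-generated assessment. Tying its visibility to a confidence score is one possible trigger; the threshold and wording are assumptions:

```tsx
// A sketch of a risk alert attached to a model-generated assessment.
// Triggering on a confidence score is one option; the threshold and the
// copy are assumptions.
import React from "react";

function RiskAlert({ confidence }: { confidence: number }) {
  if (confidence >= 0.95) return null; // only warn when uncertainty matters

  return (
    <aside role="alert">
      <strong>Take this result with a grain of salt.</strong>
      <p>
        This assessment is only as good as the information you entered.
        Double-check your inputs, and consult a professional before acting.
      </p>
    </aside>
  );
}
```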

9. Opt In/Out Toggle

In most AI-powered applications there are both risks and benefits to having an algorithm take over decision-making from humans. Depending on the value the feature provides and the severity of the risks involved, we might decide to let users choose whether to opt in or out of a smart feature.

Tinder users can opt in or out of the Smart Photos feature.

Tinder’s Smart Photos feature analyzes the number of right swipes each of your photos gets, along with the behaviour of each individual profile viewing them, to determine the order in which your photos appear and increase your chances of getting right swipes. Even though the algorithm may deliver results for some users, there are still many ways it can go wrong. Regardless of whether the feature is biased, letting users opt in and out gives them the flexibility to trust the algorithm’s decisions only when that makes sense to them.
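A sketch of the toggle, which persists the choice and tells users plainly what each state means. The `/preferences` endpoint and `smartOrdering` flag are hypothetical:

```tsx
// A sketch of an opt-in/out toggle for a smart feature. The /preferences
// endpoint and the smartOrdering flag are hypothetical.
import React, { useState } from "react";

function SmartFeatureToggle() {
  const [enabled, setEnabled] = useState(false);

  const toggle = () => {
    const next = !enabled;
    setEnabled(next);
    fetch("/preferences", {
      method: "PUT",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ smartOrdering: next }),
    });
  };

  return (
    <label>
      <input type="checkbox" checked={enabled} onChange={toggle} />
      Smart photo ordering
      <small>
        {enabled
          ? "We'll reorder your photos based on how viewers respond."
          : "Your photos will stay in the order you chose."}
      </small>
    </label>
  );
}
```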

10. AI Content Region

AI-generated content should live in distinctive UI regions in applications where users are expected to perform important, time-sensitive tasks. In this article, the Google Design Team explains how habituation contributes to good design. When we use machine learning to conduct tasks on behalf of users, we risk forcing them to visually search the UI for familiar content and previously learned information architecture, which costs users time, subjects them to errors, and adds cognitive strain. If we want a genuine dialogue between machine and human, users must be able to use the application with ease and easily distinguish what the machine thinks and does from the choices they have made on their own.

Quick Access content is presented in a clear region of the UI and doesn’t take over the entire interface.

Quick Access is a Google Drive feature that intelligently predicts and surfaces files based on who specific files are frequently shared with, when relevant meetings occur, and which files are used at specific times of day. Because this smart feature doesn’t take over the entire interface, users can choose to engage with the content when they see fit.
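A sketch of a delimited AI region: the suggestions sit in their own labelled section that users can scan past to reach their own, predictably organized content (the component and its props are illustrative):

```tsx
// A sketch of a clearly delimited AI content region. Suggestions live in
// their own labelled section instead of being mixed into the user's files.
import React from "react";

function SmartSuggestions({ files }: { files: string[] }) {
  if (files.length === 0) return null; // nothing to suggest, claim no space

  return (
    <section aria-label="Suggested by AI">
      <h2>Suggested for you</h2>
      <ul>
        {files.map((name) => (
          <li key={name}>{name}</li>
        ))}
      </ul>
    </section>
  );
}
```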


Machines aren’t perfect, but we can’t deny the positive impact they have on our lives and how their decisions shape our world. Responsible design requires that we think critically about our role in building products that think ethically and improve our lives. The design patterns we know and have used in our human-centred products will continue to evolve and adapt to machine learning products that are human-friendly. For now, keep these 10 patterns in your back pocket when designing AI-powered apps that make decisions on behalf of humans, and you’ll be taking positive steps towards a user-friendly AI future:

  • Criteria Sliders
  • Like/Dislike Buttons
  • Feedback Questionnaire
  • Confidence Status & Tips
  • Link to How the System Works
  • Informative Modals
  • The “Why” Insight
  • Risk Alert
  • Opt In/Out Toggle
  • AI Content Region

Thanks for reading. Be sure to 👏 and share the post so others can find it!

Interested in designing for AI-powered interfaces? Apply to join the Myplanet team! We’re hiring Conversational, Interaction, and Visual Designers.