UX Design for Artificial Intelligence (Pt 1)

Joe Veltkamp · Published in MyTake · Nov 4, 2019

We have all come across products using (or claiming to use) artificial intelligence as the driving force behind the cool tech underneath. That promise of optimization has led to an explosion of AI-driven products and features as people seek to harness the new power it provides. On top of that, AI is a very broad term, applied to everything from chatbots to self-driving cars.

All of this change has naturally created a lot of confusion around what AI is and how exactly humans want to interact with it. This is where design can step in and help create a path forward as we implement new features. Designers have the challenging (and fun!) task of translating complex technical capabilities into understandable and trustworthy tools.

While the tech changes fast and the learning curve can be steep, I have slowly assembled some guiding principles, outlined here, to help navigate this ever-changing landscape. Working in this field for the past five years has allowed me to see these principles benefit the product design process on both small (Aible) and large (Salesforce Einstein) AI products.

Design principles for AI

Guidance Through Intentional Friction

AI is a wonderful tool that has had many positive effects, but it can also do harm if users are not aware of the impact of their decisions. Users should never feel like enabling an AI tool is a set-it-and-forget-it action. They need to realize that some oversight will be needed and that collaboration (through feedback and maintenance) is a requirement for a successful implementation.

As designers, we can help mitigate uninformed decisions by identifying the personas tasked with enabling and maintaining the product and ensuring that they are fully aware of large decision points. Identify parts of your user journey where impactful decisions may appear deceptively easy, and then intentionally slow down the user's flow. Use alerts and education to amplify the importance of the action so users ask themselves “am I ready to take this step?” before proceeding.

While it’s not an AI product, Mailchimp did a great job in a similar area, using an animation to emphasize the magnitude of the send-campaign decision. It added weightiness to the decision. In the world of AI we are often doing something similar: telling an ML model to automatically act (potentially on thousands of users), and the person starting that flow needs to be fully aware of the scale of the decision.
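To make this concrete, here is a minimal sketch (my own, not taken from any product mentioned here) of what an intentional-friction step can look like in code: a confirmation gate that states the scale of the action and requires an explicit acknowledgment before anything is enabled. The names and copy (DeploymentRequest, confirmDeployment) are hypothetical.

```typescript
// Hypothetical sketch of intentional friction: a confirmation gate that spells
// out the scale of an automated action before the flow continues.
// All names and copy are illustrative, not a real product API.
interface DeploymentRequest {
  modelName: string;
  affectedRecords: number; // how many records the automation will touch
}

async function confirmDeployment(
  request: DeploymentRequest,
  askUser: (message: string) => Promise<boolean>
): Promise<boolean> {
  // Amplify the importance of the action instead of offering a one-click "Enable".
  const message =
    `You are about to let "${request.modelName}" act automatically on ` +
    `${request.affectedRecords.toLocaleString()} records. You will be responsible ` +
    `for monitoring its predictions. Are you ready to take this step?`;
  return askUser(message);
}
```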

Build Trust through Transparency

Once a model is deployed, users will start seeing information surfaced through alerts, predictions of outcomes, or recommended actions to take toward a goal. This can be highly valuable, but it will be totally wasted if a user doesn’t trust the AI.

This distrust is warranted due to the following problems:

  • Many models suffer from the “black box” problem, where predictions or recommended actions are surfaced with little explanation of why the user should believe them.
  • Models often make wrong predictions (we are working in a field of probability :), or the data could be driving an inaccurate outcome.

As designers we should identify the areas where we are asking the user to trust an automated action and increase transparency through explanations of the AI’s decisions. While the model being used may prevent us from giving full explanations (the black box issue), we can surface non-model explanations to help users make better-informed decisions on whether or not to trust the prediction.

Depending on the persona, the explanations we surface can range from minimal (the age of the prediction, the top factors) to complex (performance analytics, model metrics, and training data). The ultimate goal is to give users the clarity they need to act confidently (or to identify faulty data and model behavior).
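As a rough illustration, the payload backing those explanations might scale with the persona viewing it. This is only a sketch under my own assumptions; the field names are invented and not tied to any particular model or product.

```typescript
// Hypothetical explanation payloads surfaced alongside a prediction.
// Field names are illustrative only.
interface MinimalExplanation {
  predictionAgeDays: number; // how old (and possibly stale) the prediction is
  topFactors: string[];      // the strongest drivers behind the score
}

interface DetailedExplanation extends MinimalExplanation {
  modelMetrics: { accuracy: number; auc: number };            // performance analytics
  trainingDataSummary: { rows: number; lastUpdated: string }; // what the model learned from
}

// Pick the level of detail based on who is looking at the prediction.
function explanationFor(
  persona: "business user" | "admin",
  full: DetailedExplanation
): MinimalExplanation | DetailedExplanation {
  return persona === "admin"
    ? full
    : { predictionAgeDays: full.predictionAgeDays, topFactors: full.topFactors };
}
```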

AI is Collaborative

A model is only as good as the data it was trained with, and even an effective model can only do so much if it is deployed on low-quality data. If your users are involved in any step of the model building or implementation process, they should be fully aware that the decisions they are making have an impact on the AI’s behavior. The last thing you want is for a user to face negative consequences because they were unaware that their decisions led to a poorly trained or implemented model.

Designers should emphasize this collaborative process by repeatedly showing the connection a user’s actions have to the final output. This can be tricky if there are long delays in the feedback loop, but we should constantly look for areas where users’ actions impact the output and highlight the user’s responsibility and ownership.
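One way to make that connection visible is to capture the user’s response to each prediction and show them that it feeds back into the system. The sketch below is hypothetical; PredictionFeedback and recordFeedback are invented names, not part of any real product.

```typescript
// Hypothetical sketch: logging user feedback on a prediction so the link
// between a user's action and the model's future behavior is explicit.
interface PredictionFeedback {
  predictionId: string;
  accepted: boolean;      // did the user act on the recommendation?
  actualOutcome?: string; // filled in later, once the real result is known
}

const feedbackLog: PredictionFeedback[] = [];

function recordFeedback(feedback: PredictionFeedback): void {
  feedbackLog.push(feedback);
  // In a real product this log would feed monitoring or retraining;
  // echoing it back to the user highlights their ownership of the loop.
  console.log(`Thanks! Your feedback on ${feedback.predictionId} will shape future predictions.`);
}
```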

Ensure Users Feel in Control

One of the most common bits of feedback is that users will not trust a product if it doesn’t feel controllable. In the world of B2B products, the people purchasing and adopting these products realize their jobs may be on the line if something goes wrong, so they need the assurance that they will have final control.

As designers we should define in advance what management and tracking tools users will have, to reassure them that they will retain ultimate control once their AI is deployed. This can be as simple as performance tracking and alerts with an on/off switch, or as complex as A/B testing, model feedback loops, and controls that allow them to investigate why something happened.
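A minimal sketch of that control surface might look like the following, assuming a simple on/off switch plus a couple of user-adjustable thresholds. The AutomationControls shape is an assumption of mine, not a real product API.

```typescript
// Hypothetical control surface: the settings a user keeps after deployment.
interface AutomationControls {
  enabled: boolean;                    // the ultimate off switch
  alertIfAccuracyBelow: number;        // notify the owner when quality drops
  requireApprovalAboveRecords: number; // route very large automated actions to a human
}

// Give users the assurance that the system pauses itself when it misbehaves.
function shouldPauseAutomation(controls: AutomationControls, currentAccuracy: number): boolean {
  return !controls.enabled || currentAccuracy < controls.alertIfAccuracyBelow;
}
```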

In the next post I will cover how I have learned to collaborate with devs and data scientists while building AI tools.

Thanks to Icons8 for providing the awesome free illustrations used in these posts :)

Montana born. Taught by Grizzlies. Currently designing and drawing things in San Francisco. Product Design at Aible, prev. Salesforce Einstein. 3VBar.com