Towards Human-Centered AI: UX and Design Strategy
Advances in cognitive technologies are prompting us to rethink and restructure every experience we build as a dynamic model based on machine-augmented user interaction. The success of machine learning depends heavily on user engagement, which in turn relies on seamless cognitive augmentation that fulfills user goals. Understanding both sides of this process during design and development leads to products whose AI is grounded in human needs and solves for them in ways uniquely possible through machine learning.
AI-first Products and Digital Transformation
AI-first products are apps and software services that use machine learning to augment the cognitive processes of their users. They are designed to establish and sustain positive feedback loops known as “more data >> better AI >> more mass adoption, repeat.” Agile practices and design sprints further extend the positive impact of these loops, making products adaptive to users’ needs.
Personalized UX with AI
Websites are getting smarter, taking constellations of user data points into account to enable more personalized experiences for visitors. Data points extracted from user research yield creative insights into what users are most likely to be looking for. With machines taking over parts of the user research process, scaling use cases and making them hyper-personalized becomes viable and accessible for more companies.
AI chatbots are driving UX personalization across industries. With the help of AI, chatbots, or intelligent agents, are becoming better at handling complicated tasks. Rather than relying on scripted dialogues to respond to users, intelligent agents can answer users’ queries in real time.
When aligned with AI, UX has a deeper influence: it constantly aims for customer satisfaction by focusing on every aspect of the customer experience through an iterative process that includes the following points:
- Understanding the context of use
- Specifying the users’ requirements
- Providing design solutions
- Evaluating against requirements
For AI-first products, close-knit collaboration between designers, developers, and data scientists is a must. Engineers need training data, and UX teams help acquire it and define the expected customer outcomes, including subjective outcomes that are troublesome for AI, such as which recommended movie was desirable. The UX team defines these criteria by applying human understanding. Engineers feed the training data and well-defined outcomes for different inputs into the machine learning algorithm. After collecting an initial data set, engineers can train the algorithm, and UX teams can begin user testing with early prototypes, validating the first trained models with real people.
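This handoff can be sketched in miniature. The sketch below is hypothetical: the features (watch-time ratio, rating), the “desirable”/“undesirable” labels, and the nearest-centroid model are stand-ins chosen for brevity, not anything the text prescribes. The point is only that UX-defined outcomes become labels the engineers train against.

```python
# Hypothetical sketch: UX teams supply labeled outcomes ("desirable" vs.
# "undesirable" recommendations); engineers train a trivial model on them.

def train_centroids(examples):
    """Compute one centroid per label from (features, label) pairs."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label of the nearest centroid (squared Euclidean distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

# UX-defined training outcomes: [watch_time_ratio, rating] -> desirability
labeled = [
    ([0.9, 0.8], "desirable"), ([0.8, 0.9], "desirable"),
    ([0.2, 0.1], "undesirable"), ([0.1, 0.3], "undesirable"),
]
model = train_centroids(labeled)
print(predict(model, [0.85, 0.7]))  # a strong engagement signal
```

An early prototype built on such a model is exactly what UX teams would then put in front of real users to validate.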
AI Lens: Incorporating Information Architecture for AI and UX
The progression of AI depends on human-centered design paired with relevant information architecture. Information architecture matters for mapping and tagging content based on its relevance to the user, while AI generates relationships between the data by identifying trends. Cross-linking both data layers enables seamless function, keeps the focus on end users’ requirements, and fetches effective results by allowing an interface to navigate through massive amounts of data.
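One way to picture the cross-linking of the two layers is a ranking function that combines them. Everything here is illustrative: the item names, tags, relevance scores, and weights are invented for the sketch; in a real product the AI layer would come from a trained model rather than a hard-coded dictionary.

```python
# Hypothetical sketch: merge an editorial tag layer (information architecture)
# with an AI-derived relevance layer to rank content for a user's interests.

catalog_tags = {                      # IA layer: human-curated content tags
    "intro-to-ml": {"ml", "beginner"},
    "deep-dive-cnn": {"ml", "vision"},
    "ux-research-101": {"ux", "beginner"},
}
ai_relevance = {                      # AI layer: learned scores (stand-ins)
    "intro-to-ml": 0.6,
    "deep-dive-cnn": 0.9,
    "ux-research-101": 0.4,
}

def rank(user_tags, tag_weight=1.0, ai_weight=1.0):
    """Score each item by tag overlap plus learned relevance, highest first."""
    def score(item):
        overlap = len(catalog_tags[item] & user_tags)
        return tag_weight * overlap + ai_weight * ai_relevance[item]
    return sorted(catalog_tags, key=score, reverse=True)

print(rank({"ml"}))
```

Neither layer alone produces this ordering: the tags keep results anchored to stated user interests, while the learned scores break ties and surface trends the taxonomy misses.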
Human-Centered Lens: Aligning AI with UX
If AI and UX are not properly aligned, cognitive augmentation produces distortion that results in misuse and frustration. AI by itself cannot determine which problems to solve; if it is not aligned with a human need, the resulting system, however powerful, addresses marginal issues. Users often develop mental models built on their own imagined theories of how AI works, resulting in misuse and lack of trust. This phenomenon is usually caused by design flaws: when AI targets are under-defined, users fail to understand their own role in calibrating the system.
In order to thrive, AI needs a multi-dimensional approach that includes social as well as technical perspectives. Machine learning is the science of making predictions based on patterns and relationships automatically discovered in data. From model development to the sources of data, samples, and descriptors, all the way to success criteria, every facet of AI is shaped by human judgment. Approaching AI from a human-centered perspective allows qualitative elevation (which for businesses translates into quantifiable returns).
AI aligned with UX translates into powerful cognitive augmentation and addresses a real human need the way humans need it addressed. The Google Clips team defines this alignment as “let people do what they do best and let machines do what people do worst … because in order for us to build trust in the impact of AI we must feel reassured, included and informed.”
The graphs from the Google Clips team illustrate that while most products have at least some learning curve, with the added overhead of AI it is especially important to “spend” wisely on your users’ cognitive load. When the context of use is novel to the user [figure A], a bias for dependability is warranted. When there are many new UI elements to learn [figure B], familiar use cases need to be accentuated. When the product’s functionality is dynamic [figure C], familiar patterns in the UI need reinforcing.
Minimizing complexity in the UI, and consequently lowering users’ cognitive load through contextual controls and cognitive support, can dramatically elevate the user experience. When defining which problem the AI should solve and its success criteria, it is often useful to guide the AI by designing a model of a theoretical human expert. If we cannot construct such a model, the AI will most likely fail our customers.
When designing AI-first products, it is important to consider interaction, transparency, and engagement, and to optimize adaptively.
UX teams help AI developers decide what to optimize for. Providing meaningful insights about human reactions and human priorities can prove to be the most important job of a designer on an AI project.
- Optimizing for recall means the machine-learning product will surface all the right answers it finds, even if it also displays a few wrong ones. Say we build an AI that identifies cat pictures: if we optimize for recall, the algorithm will list all the cats, but some dogs will appear in the results too.
- Optimizing for precision means the machine learning algorithm will return only the clearly correct answers, but it will miss some borderline positive cases (cats that look a bit like dogs). It will show only cats, but it will miss some cats: it won’t find all the correct answers, only the clear cases.
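The tradeoff in the two bullets above can be made concrete with a decision threshold. The labels, scores, and threshold values below are made up for illustration; the mechanics of computing precision and recall, however, are standard.

```python
# Sketch with made-up scores: a "cat detector" assigns each image a
# cat-likelihood; the decision threshold trades recall against precision.

labels = ["cat", "cat", "cat", "dog", "dog"]   # ground truth
scores = [0.95, 0.80, 0.55, 0.60, 0.20]        # model's cat-likelihood

def precision_recall(threshold):
    """Precision and recall for predictions at the given score threshold."""
    predicted = [s >= threshold for s in scores]
    tp = sum(p and y == "cat" for p, y in zip(predicted, labels))
    fp = sum(p and y == "dog" for p, y in zip(predicted, labels))
    fn = sum((not p) and y == "cat" for p, y in zip(predicted, labels))
    return tp / (tp + fp), tp / (tp + fn)

print(precision_recall(0.5))  # low bar: every cat found, one dog slips in
print(precision_recall(0.7))  # high bar: only cats shown, one cat missed
```

Choosing that threshold is precisely the kind of decision where UX insight about which error hurts users more should guide the engineers.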
AI and Human Trust
Trust between humans is based on different criteria than trust between humans and machines. Humans trust one another based on factors like reliability, sincerity, competence, and intent. Humans’ trust in machines, on the other hand, depends on accuracy, consistency, and fallibility, and for many, on their ability to interpret how the system works. AI introduces a lack of interpretability along with cognitive augmentation; a flawed algorithm may learn to do the wrong thing, causing undesirable consequences and destroying human trust.
Lack of trust in AI is a fundamental UX design challenge.
AI has become a focal component of many companies’ customer-centered strategies. It enables greater personalization, more access to support, and faster service, all components of a better customer experience. It is difficult for customers to have a positive experience with AI without trusting it. One way to address this is to clue customers in to how the AI operates. There is no need to explain all the intricacies of AI/ML theory; rather, make customers aware, in a relevant context, of the data the machine learning uses. Such transparency allows understanding, which in turn facilitates a positive cognitive exchange that trains the AI and elevates the user experience. In self-driving cars, positively interlinked AI and UX are represented by passenger screens that improve customer trust by sharing relative cognitive context: in this case, the car’s understanding of its location and surroundings.
It is often useful to visually differentiate AI-generated content. In many cases we use AI and machine learning to dig deeper into data and generate new, useful content. It is tempting to believe that a given model can solve any scenario, and AI-generated content can indeed prove extremely useful, but in some cases recommendations and predictions extend beyond the acceptable fuzziness of expected accuracy. This issue is reinforced when the system does not receive enough data or feedback to learn from. Letting customers know that content is AI generated allows them to adjust their expectations, resulting in diminished frustration. Providing an opportunity for feedback allows the AI to collect the data it needs to improve.
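Both recommendations above can be sketched as a tiny display-and-feedback pattern. The payload fields, badge text, item names, and in-memory feedback log below are all hypothetical stand-ins for whatever a real product's UI layer and data pipeline would use.

```python
# Hypothetical sketch: disclose AI provenance in the UI payload, and capture
# user feedback so the model has data to learn from.

feedback_log = []   # stand-in for a store the training pipeline reads

def present(item, ai_generated, confidence=None):
    """Build a display payload that discloses AI provenance to the user."""
    payload = {"item": item, "ai_generated": ai_generated}
    if ai_generated:
        payload["badge"] = "Suggested by AI"   # visual differentiation
        payload["confidence"] = confidence     # helps set expectations
    return payload

def record_feedback(item, helpful):
    """Store the user's reaction for later retraining."""
    feedback_log.append({"item": item, "helpful": helpful})

card = present("movie-42", ai_generated=True, confidence=0.71)
record_feedback("movie-42", helpful=False)
print(card["badge"], len(feedback_log))
```

The badge manages expectations up front; the logged reactions close the loop by feeding the model the corrective data the text describes.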