MLUX Case Study with LinkedIn: Designing Human-Centered AI experiences and the STRAW Framework

Roli Khanna
Machine Learning and UX
6 min read · Jan 24, 2022

Applied AI in industry is traditionally siloed: computer vision and NLP, recommendations and forecasting. Consequently, AI experiences are often only partially relevant to users’ contexts. There’s a disconnect between the results that AI surfaces and the expectations of the users. The team at LinkedIn took on this challenge and looked inward to solve the AI conundrum by collaborating with everyone involved in building, designing and launching such experiences.

Carolyn Chang, Principal UX Researcher at LinkedIn, and Christine Liao, Product Design Lead at LinkedIn, investigated the core reasons why AI is so frequently out of context for users, and created a set of design guidelines that a product team can follow to mitigate bad AI experiences. They also led a series of workshops that promoted empathy for end users and helped make their AI journeys smooth and appropriate.

One of the most important findings from this project, in simple terms, is:

Trash in = Trash out.

This article covers, in three sections, what this means for different product features and how perfectly well-intentioned AI solutions might surface as out-of-context experiences for users:

  1. Researching our own AI
  2. Designing for AI
  3. Shifting the AI culture

Researching our own AI

The first step to finding the pain points of any feature is to conduct some good old user research. Interviews were conducted with engineers, designers, data scientists, taxonomists and people outside the company to better understand the reasons behind bad AI relevance experiences.

The research pointed to three major reasons behind the “Trash in = Trash out” phenomenon:

  1. Lack of clarity: insufficient context about what users mean. For example, a user may describe their current job role in a custom way that doesn’t reflect standard industry designations. This lack of clarity leads to incorrect job-role recommendations, since the AI on the backend learned from standard industry designations.
  2. Incorrect weighting: a disconnect between how a user defines themselves and the way the algorithm weights their profile. For example, the titles that users give themselves (especially the roles they list as “open to”, versus the role they currently have) might not be weighted by the algorithm the way the user intends (see the sketch after this list).
  3. Lack of data: a user might not enter comprehensive information on a platform for a variety of reasons, ranging from data-privacy concerns to lack of interest. Missing data may consequently lead the AI to provide inaccurate results, since it doesn’t have a complete picture of the user.
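The incorrect-weighting problem is easiest to see with a toy example. The sketch below is purely illustrative: the scoring function, field names, and weights are hypothetical assumptions, not LinkedIn’s actual system. When the weight on the current title swamps the weight on “open to” roles, recommendations mirror what the user already is rather than what they want to become.

```python
# Hypothetical sketch, NOT LinkedIn's actual model: a toy relevance score
# where the weights given to profile fields don't match the user's intent.

def job_relevance(job_title: str, profile: dict, weights: dict) -> float:
    """Score a job against a profile by summing weighted keyword overlaps."""
    score = 0.0
    for field, weight in weights.items():
        field_text = " ".join(profile.get(field, [])).lower()
        overlap = sum(1 for word in job_title.lower().split() if word in field_text)
        score += weight * overlap
    return score

profile = {
    "current_title": ["Senior Software Engineer"],
    "open_to": ["Engineering Manager"],  # the role the user actually wants next
}

# If the algorithm weights the current title far above "open to" roles,
# recommendations skew toward jobs the user already has, not the ones they want.
weights = {"current_title": 0.9, "open_to": 0.1}

print(job_relevance("Senior Software Engineer", profile, weights))  # scores high
print(job_relevance("Engineering Manager", profile, weights))       # scores low
```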

Designing for AI: The STRAW Framework

Now that we’ve highlighted the reasons why bad AI relevance experiences occur, let’s talk about the solution. Carolyn and Christine came up with the STRAW framework (Standardized, Transparent, Realistic, Approachable, Worthwhile), which outlines a set of five design guidelines that a product team can follow to incorporate holistic and relevant AI experiences for users.

Standardized: A product’s design should leverage standardized data. For example, ensuring a user enters a standard role/title instead of a custom one adds clarity to the data while keeping the user’s experience effortless.
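As a rough illustration of what standardization buys (the taxonomy, function name, and fuzzy-matching rule below are hypothetical assumptions, not LinkedIn’s implementation), a product can map free-text input onto a standard title before the AI ever sees it, and fall back to asking the user when no good match exists:

```python
import difflib

# Hypothetical standard taxonomy; a real one would be far larger and curated.
STANDARD_TITLES = [
    "Software Engineer",
    "Senior Software Engineer",
    "Product Manager",
    "Data Scientist",
    "UX Researcher",
]

def standardize_title(raw_title: str) -> str | None:
    """Map a free-text title to the closest standard title, or None if nothing is close."""
    matches = difflib.get_close_matches(raw_title.title(), STANDARD_TITLES, n=1, cutoff=0.6)
    return matches[0] if matches else None

print(standardize_title("data scienctist"))        # typo, still maps to "Data Scientist"
print(standardize_title("growth hacking wizard"))  # None: prompt the user to pick a standard title
```

The exact matching strategy matters less than the design point: collect standardized data up front so downstream models never have to guess what the user meant.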

Transparent: Helping users understand how we use their data is essential to establishing trust. Stating exactly why we’re asking for data and what we might do with it fosters confidence in the system: give data to get data.

Standardizing inputs adds clarity to the data and to user expectations!

Realistic: It is essential to set users’ expectations and not overpromise on relevance. Also called relevance humility, this approach helps users be more forgiving when we miss the mark. Setting realistic expectations through better tone and language when defining features is paramount, for instance using “Recommended for you” instead of “Jobs for you”.

Approachable: Approachable design helps users feel comfortable giving us good, sound data. Clarifying data requests with guidance and examples greatly helps in gathering sound data and prevents overtaxing members.

Ensure every action a user takes is approachable and worthwhile; for instance, a prompt can explain why the user might want to specify their title by giving a reason.

Worthwhile: Ensure that every action a user takes is worthwhile by effectively using their data across all products. This establishes an understanding that entering accurate, relevant information gives the user a better, more seamless experience.

Leverage data, provide immediate feedback, break data silos.

Shifting the AI culture

The AI Empathy Workshop was conducted across diverse disciplines such as AI, engineering, product, design, research, and data science, and it encouraged having empathy for the users who experience trouble with AI. The motivation behind this workshop can be summed up in a simple but powerful quote: “Building relevant products is not just an engineering problem, it’s everyone’s responsibility.”

The project has seen three workshops so far, with 150 participants. It helped connect engineers to what our end users really want by setting the following goals:

How might we…

  • Help AI engineers better understand their users?
  • Help non-engineers better understand our AI?
  • Empower teams to make better AI related decisions?
  • Build deeper relationships between AI and Design/UXR?
  • Make this fun?

The workshop was preceded by an unmoderated research study, in which a user was asked to pull up their LinkedIn profile and answer specific questions, particularly relating to their relevance experiences in the application.

Participants in the workshop were then asked what ideal job recommendations for that same user (from the study) might be. This helped participants dig deeper into why the AI makes certain recommendations, and why they might not be relevant to the user in question.

Not only did this exercise help participants from a wide range of disciplines align their mental models about the end users’ experience, it also helped them brainstorm possible solutions to mitigate such problems.

Watch Carolyn and Christine’s talk on our YouTube channel.

About the Machine Learning and User Experience (“MLUX”) Meetup

We’re excited about creating a future of human-centered smart products, and we believe the first step to doing this is to bring UX and Data Science/Machine Learning folks together to learn from each other at regular meetups, tech talks, panels, and events (held remotely).

Interested in learning more? Join our meetup, be the first to know about our events by joining our mailing list, watch past events on our YouTube channel, and follow us on Twitter (@mluxmeetup) and LinkedIn.
