UX challenges for AI/ML products [2/3]: User Feedback & Control

Nadia Piet · 5 min read · Feb 25, 2021


This is the 2nd part in the “UX challenges for AI/ML products” series. Read Part 1 on Trust & Transparency here.

Theme 2: User Feedback & Control

Users must feel in charge of the system. People have justified concerns about giving up agency to (semi-)autonomous systems and about sharing the personal data required to make them work well. Respecting the human need for autonomy means giving users a way to exercise consent and control over the system and their data, based on their individual and contextual needs.


Allowing the user to teach the machine with implicit and explicit feedback loops and collecting direct data input.

Design considerations: Building in implicit and explicit feedback loops. For the latter, give the user a quick way to indicate whether an output is helpful (“yes or no”), then gradually ask for more detail, such as “why or why not” and how the system could have acted better.
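As a rough illustration, the gradual escalation described above might be modeled like this. This is a minimal sketch with hypothetical names (`FeedbackEvent`, `follow_up_prompt`), not the implementation of any particular product:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FeedbackEvent:
    """One piece of explicit user feedback on a model output."""
    prediction_id: str
    helpful: bool                     # the quick "yes or no" signal
    reason: Optional[str] = None      # optional "why or why not"
    suggestion: Optional[str] = None  # how the system could have acted better

def follow_up_prompt(event: FeedbackEvent) -> Optional[str]:
    """Escalate to a more detailed question only after the quick signal."""
    if event.reason is None:
        return "Why was this helpful?" if event.helpful else "Why wasn't this helpful?"
    if not event.helpful and event.suggestion is None:
        return "How could the system have acted better?"
    return None  # enough detail collected; stop asking
```

The point of the structure is that the cheap signal is always collected first, and the more expensive questions are only asked when the user has already engaged.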

[Caption: Reporting inaccurate prediction scores — Source: Zendesk ]

Zendesk gives service providers a predicted satisfaction score on their customers’ support tickets, so they can quickly see which customers are most upset or most pleased with their service and act accordingly. Next to the prediction is a button to flag when the model’s prediction is wrong, and why.

→ Google Cards
Google Cards exemplifies a simple way to collect valuable feedback to reward or penalize your model. Last month Google added more granular feedback options to train the algorithm on what an individual user wants and help surface more relevant suggestions.

Gradual user feedback in Google Cards

Questions around Machine Teaching + User Feedback

  • Which implicit and explicit signals can you collect as user feedback in your interface?
  • How should you and your model assign meaning and metrics to user feedback?
  • How might you ask your user for more detailed input to train the model?


Giving users the controls to customize the model to their needs and intervene with the data or model if needed.

Re-imagining the Goals and Methods of UX for ML/AI, Philip van Allen

Design considerations: Allow users to set intentions and configure parameters.

→ Personality Editor
This speculative concept by Philip van Allen imagines what user controls for AI applications might look like. In this case, it’s a personality editor, but we can imagine similar interfaces for other applications.

Questions around User Controls & Customization

  • How much autonomy to act should your system have?
  • How can we give the user controls to intervene with the model when necessary?
  • To what extent can the user customize the model? How might we give the user controls to tune their model to their individual needs?
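One concrete way to ground the autonomy question: let the user set a confidence threshold below which the system only suggests rather than acts. A minimal sketch, with hypothetical names (`UserControls`, `decide_action`) invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class UserControls:
    """User-tunable parameters governing how autonomously the system acts."""
    confidence_threshold: float = 0.9  # act automatically only above this
    ask_before_acting: bool = True     # always confirm, regardless of confidence

def decide_action(controls: UserControls, confidence: float) -> str:
    """Map model confidence plus user settings to a system behavior."""
    if controls.ask_before_acting or confidence < controls.confidence_threshold:
        return "suggest"  # surface the prediction, let the user decide
    return "act"          # confident enough, and the user permits autonomy
```

The defaults deliberately favor suggestion over action: autonomy is something the user opts into by relaxing the controls, not something they have to claw back.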


The need to collect, handle, and store user data with care, to be transparent about who can access what data and why, and to acknowledge users’ ownership of their data.

Design considerations: Communicating the benefit of each data share, allowing easy modular opt-in/opt-out, being cautious about sharing data, and making terms & conditions legible.
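“Modular” opt-in/opt-out could look something like this: consent is tracked per purpose, defaults to off, and can be reversed at any time. A sketch under those assumptions, with names (`ConsentSettings` and its purpose keys) invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class ConsentSettings:
    """Per-purpose, modular opt-in flags -- no all-or-nothing bundle."""
    purposes: dict = field(default_factory=lambda: {
        "improve_service": False,      # e.g. train the model on my usage
        "personalization": False,      # e.g. tailor suggestions to me
        "third_party_sharing": False,  # e.g. share data outside the service
    })

    def opt_in(self, purpose: str) -> None:
        self.purposes[purpose] = True

    def opt_out(self, purpose: str) -> None:
        self.purposes[purpose] = False

    def allowed(self, purpose: str) -> bool:
        # Unknown purposes default to False: consent must be explicit.
        return self.purposes.get(purpose, False)
```

Keeping each purpose independent means opting out of, say, third-party sharing never silently disables the rest of the service.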

→ Ever
Ever is an app for storing and organizing personal photos. When news broke that user data had been used to train facial recognition algorithms, people were not pleased. Ever introduced this screen to communicate how it uses data, get explicit user consent, and let users easily opt out.

→ Data acquisition
A recent controversy involved a tech giant hiring contractors to collect more data for its facial recognition models. Homeless people were targeted because they seemed more likely to participate in exchange for a nominal cash reward. Their faces were captured while they were asked to play a game, and the footage was used as training data without their informed consent.

Questions to ask around User Privacy + Security

  • Can we do the same with less user data?
  • How can we make consent more explicit and justify value for data?
  • How can the user view, edit and wipe their data profile if it does not represent them?
  • Is the data shared, sold, or used beyond the service itself?
  • How can we prepare for hacks, leaks, and malicious use?

I’m planning to launch an online course on UX of AI later in 2021. To be the first to know and receive an early-nerd discount, sign up here for updates 💌

About AIxDesign

AIxDesign is a place to unite practitioners and evolve practices at the intersection of AI/ML and design. We are currently organizing monthly virtual events, sharing content, exploring collaborative projects, and developing fruitful partnerships.

To stay in the loop, follow us on Instagram, Linkedin, or subscribe to our monthly newsletter to capture it all in your inbox. You can now also find us at aixdesign.co.