UX challenges for AI/ML products [3/3]: Value Alignment

Nadia Piet · Published in AIxDESIGN · Mar 15, 2021 · 9 min read

_________________________________________________________________

PSA: It is a key value for us at AIxDESIGN to open-source our work and research. The forced paywalls here have led us to stop using Medium, so while you can still read the article below, future writings & resources will be published on other platforms. Learn more at aixdesign.co or come hang with us on any of our other channels. Hope to see you there 👋

Theme 3: Value Alignment

Deploying AI systems across layers of society will affect the lives of individuals and groups across the globe in different and sometimes unexpected ways. Because these systems operate at unprecedented scale and complexity, we must be mindful of biases, risks, system dynamics, and consequences to make thoughtful trade-offs in our AI applications. Striving for value alignment between man and machine (and those operating the machine!) by integrating ethics at the core of our projects is required if we want to shape this technology to help humanity. Otherwise, what’s the point?

7. COMPUTATIONAL VIRTUE

Translating subjective human needs, values, and experiences into algorithmic parameters the model can optimize for.

Design considerations: Benchmarking usefulness based on the use case rather than on what’s happening in research. Making conscious data and model decisions. Working with domain experts. Sometimes the model is nowhere near perfect, but as long as it’s better than humans (more accurate, faster, and/or cheaper), there’s value.

→ Google Clips
Google Clips set out to develop a camera that would automatically capture memorable moments in the lives of young parents. Since this is an incredibly subjective and context-dependent task, it required lengthy human discussions to agree on what the qualities of a memorable moment are, and extensive human training to guide the machine’s learning toward this understanding.
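
To make this concrete, here is a minimal sketch in Python of one common way to distill subjective judgments into parameters a model can optimize for: collect pairwise human preferences (“which clip is more memorable?”) and fit a scoring function to them. The features and ratings are entirely hypothetical and do not reflect Google’s actual system.

```python
# A minimal sketch: turning subjective pairwise judgments into a learned
# scoring function. All features and "ratings" below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical per-clip features: [sharpness, face_count, smile_score]
clips = rng.random((100, 3))

# Simulate human raters: for each pair of clips, label 1 if the first
# was preferred. Here raters are assumed to favor smiles and faces.
pairs = rng.integers(0, len(clips), size=(500, 2))
assumed_taste = np.array([0.2, 1.0, 1.5])
prefers_first = (clips[pairs[:, 0]] @ assumed_taste >
                 clips[pairs[:, 1]] @ assumed_taste).astype(int)

# Bradley-Terry-style model: logistic regression on feature differences
# (no intercept, so scoring is symmetric between the two clips).
X = clips[pairs[:, 0]] - clips[pairs[:, 1]]
model = LogisticRegression(fit_intercept=False).fit(X, prefers_first)

# The learned weights are the "algorithmic parameters" distilled from
# subjective human preferences; use them to rank new clips.
scores = clips @ model.coef_[0]
print("Top-ranked clip index:", int(np.argmax(scores)))
```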

Questions around Computational Virtue

  • How do we translate subjective human experience into models and algorithmic parameters?
  • What can you learn from doing it manually?
  • What objectives are worth optimizing for?
  • How do we protect the inefficiencies that make the human experience meaningful from being optimized to no end?
  • Based on whose values and which ideologies do we benchmark and evaluate our models?

8. BIAS + INCLUSIVITY

Mitigating harmful bias and guarding inclusivity in data and models to ensure fair treatment for all.

Design considerations: Checking for common unconscious biases; regular, external auditing; and having an intersectional team and user-testing group (diverse not only in terms of gender and race, but also age, digital literacy, sexuality, level of education, lifestyle, political/religious beliefs, and other variables that might be relevant for your case).


→ Hiring gender bias
The historical data we feed AI to learn about the world might not always represent the present we inhabit, or the future we wish to manifest. A clear example is recruitment models discriminating against women and minorities. The algorithm doesn’t favor anyone in particular; it simply learns from past data in which the majority of hires were white males, and perpetuates that pattern. Ensuring fairness in your model requires regular audits to detect and correct harmful bias.
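
As an illustration of what one basic audit step can look like, here is a minimal sketch that compares selection rates across groups and applies the four-fifths rule, a common disparate-impact heuristic. The data is hypothetical, and this is nowhere near a complete fairness methodology.

```python
# A minimal bias-audit sketch: compare selection rates across groups and
# flag violations of the "four-fifths rule". Hypothetical data only;
# a real audit covers many more metrics, groups, and intersections.
import pandas as pd

decisions = pd.DataFrame({
    "gender": ["m", "m", "m", "f", "f", "f", "m", "f", "m", "f"],
    "hired":  [1,   1,   0,   0,   1,   0,   1,   0,   1,   0],
})

# Selection rate per group: fraction of applicants hired.
selection_rates = decisions.groupby("gender")["hired"].mean()
print(selection_rates)

# Disparate-impact ratio: lowest group rate vs. highest group rate.
ratio = selection_rates.min() / selection_rates.max()
print(f"Disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Warning: possible adverse impact; investigate data & features.")
```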

Questions to ask around Bias + Inclusivity

  • Does our team represent our audience and wider population?
  • Who may be disadvantaged by your system? Who benefits most?
  • Who has been historically excluded or oppressed in this system? How do we prevent existing patterns in society from leaking into our future models?
  • Have you considered the diversity of your data, features, and audience, in terms of gender, race, age, abilities, sexuality, geography, beliefs, income, education level, digital literacy, and more?
  • Where do we make the trade-off between harmful discrimination and feature selection for accuracy?

9. ETHICS + (UN)INTENDED CONSEQUENCES

Unprecedented scale, speed and complexity call for a new level of thoughtfulness and responsibility in anticipating impact and (un)intended consequences.

Design considerations: Recognizing that good intentions do not equal positive impact. Critically anticipating potential consequences through various lenses, for example by using a consequence wheel. This kind of deliberation is at times incompatible with the ideology of capitalism and Silicon Valley’s ‘move fast and break things’.

→ Affectiva
Affectiva is a market leader in emotion recognition software that originated from MIT’s Media Lab. While much of the tech industry is still catching up to make facial recognition work well across ethnicities, Affectiva faces the additional challenge of recognizing emotions across cultures. Committed to serving clients across the globe, they had to ensure their models would generalize well across a diverse population. Understanding facial and physiological expressions across cultures, and building inclusive data sets, requires extensive effort and resources.

→ MIT Moral Machine
The MIT Moral Machine was built to gather human perspectives on moral decisions made by machine intelligence, such as self-driving cars. Anyone visiting the website is presented with a scenario in which the car has messed up and now (in this case, guided by you) has to choose whom to kill. While the majority of us have intuitions about such choices, such as killing an older person over a child, making them explicit, accounting for differences across cultures, and potentially activating them as a blueprint for machine moral decision-making is a reality that’s pretty hard to come to terms with.

→ Predicting mental health
While some moral choices appear obvious, many challenges around AI ethics sit in a gray area, where right and wrong are not always easy to tell apart.

Figure: Cross-Domain Depression Detection via Harvesting Social Media (source).

Aspiring to provide preventative and early mental health care, healthcare providers have successfully built models that predict, from social media data, the likelihood of depression and the onset of manic episodes in people with bipolar disorder. Even operating from the noblest intentions, making such inferences poses complex challenges.
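
Part of what makes this so pressing is how low the technical barrier to such inference is. The deliberately simplified sketch below trains a text classifier over social media posts; the posts and labels are hypothetical, and real systems are far more elaborate, but the basic mechanics are this accessible.

```python
# A deliberately simple sketch of the kind of inference discussed above.
# Posts and "at risk" labels are hypothetical stand-ins for annotations.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "can't sleep again, everything feels pointless",
    "had a great hike with friends today",
    "haven't left the house all week",
    "excited to start the new job tomorrow",
]
labels = [1, 0, 1, 0]  # 1 = flagged "at risk" by hypothetical annotators

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# The ethical questions begin exactly here: who may run this, on whose
# data, and who gets to see or act on the output?
print(model.predict_proba(["feeling really low lately"])[0][1])
```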

What if the model is wrong and the person, or others, start questioning their well-being? Could it become a self-fulfilling prophecy? If the model is right, would insurers (be allowed to) treat you differently knowing you’re at risk of mental illness? If your health insurer can infer such predictions from public social media data, could your employer? Could this information affect hiring and firing decisions? Could advertisers use the predictions to target those in a vulnerable state? It’s hard to ‘unsee’ data, and well-intended endeavours bring about a range of ethical dilemmas. Where do we draw the line? In which cases is it (un)ethical to collect, infer, and act on such data?

Questions to ask around Ethics + Unintended consequences

  • How can we anticipate unintended consequences?
  • If it becomes a huge and widespread success, what downsides might that have? What becomes redundant as a result?
  • How can we adapt to maximize positive human and sustainable impact?
  • How do we respond well to negative experiential, cultural, and societal impacts?
  • Who owns and has access to the data, the models, knowledge, and computational power? How do we deal with power shifts as a result?
  • Considering there is no universal moral framework, which ideologies do we evaluate our impact by?
  • Is it more dangerous to release research with the risk of malicious use, or to keep research private and centralize power?

UX of AI worksheet

If you are prototyping or building an AI-driven application, you can use the worksheet below or draw it out and capture your first thoughts on how you’re going to deal with the UX challenges.

Outro

As you can see, building human-centered AI applications is no easy feat, and we’ve only just scratched the surface.

It’s easy to become paralyzed by the scale, complexity, and urgency of these challenges. But considering you’ve come this far, I suspect you are not. Or perhaps you are, but are channelling your courage to engage with it for that exact reason!

I also suspect you have developed all sorts of thoughts and ideas around the challenges over the course of reading this. I’d love to hear about those.

As it’s set to impact all of us across life stages and at scale, designing human-centered AI is arguably one of the most interesting and important challenges of our time. It will require creativity, thoughtfulness, collaboration, and a commitment to shaping the future we want to live in. We’ll need people from all walks of life and all areas of expertise to figure this one out. Are you up for the task?

If you decide to take part (in shaping the future, instead of delegating it to others and watching it happen), here are a few suggestions on how to get started.

  1. Close to home
    Some of these challenges might seem like remote futures, but they’re not. Ask whether anything within your user journey is already influenced by algorithms, automation, and human-AI interactions. Consider in what ways AI is likely to show up in your context and how these challenges appear alongside its opportunities.
  2. Read up
    A handful of projects that go into the user experience design of AI are included in the Recommended Reading section below. Beyond that, there are great learning resources about AI from a more general perspective, such as Andrew Ng’s AI for Everyone on Coursera or elementsofAI.com.
  3. Take a stance
    In your work, as a designer or otherwise, how can you take on an active role in shaping these interactions with human values at their core? How can we move past principles and begin building best practices around them? Besides creators, what is our role as users and consumers in demanding these elements of human-centered AI?
  4. Join forces
    Join fellow practitioners, reach out, share your knowledge and ideas, put them into practice.

If you’re curious to explore the potential of AI within your projects, check out the AI meets Design toolkit for hands-on tools, exercises, and worksheets that integrate with the design thinking process.

This is only a first building block for you to build on. No one has the answers to what it means to build human-centered AI or how we might act on it, but we’re committed to finding and evolving those answers through design, dialogue, and democratization. I hope you join us!

Recommended Reading

I’m planning to launch an online course on UX of AI later in 2021. To be the first to know and receive an early-nerd discount, sign up here for updates 💌

About AIxDesign

AIxDesign is a place to unite practitioners and evolve practices at the intersection of AI/ML and design. We are currently organizing monthly virtual events, sharing content, exploring collaborative projects, and developing fruitful partnerships.

To stay in the loop, follow us on Instagram, Linkedin, or subscribe to our monthly newsletter to capture it all in your inbox. You can now also find us at aixdesign.co.


Nadia Piet, AIxDESIGN
Designer & researcher focused on AI/ML, data, digital culture & the human condition mediated through computing.