UX as a Lever for Ethics in AI Design with Professor Katie Shilton

Aparna Gokhale
Machine Learning and UX
7 min read · Jun 24, 2021

The tech industry has long struggled with whether and how to talk about its values and its social obligations. Recently, tech ethics has been getting a lot of attention, thanks largely to AI, especially the subset of AI fueled by machine learning trained on data about people. To dive deeper into this realm, Katie Shilton, Associate Professor at the UMD iSchool, walks us through how she got here and shares her insights with us. She currently leads the Ethics and Values in Design (EVID) Lab, which guides technology design so that human values are as much a part of the process as values like speed and efficiency.

Professor Shilton’s talk focuses on how UX professionals can be instrumental in bringing privacy and fairness to AI, and where they can look for opportunities to build those values in and center them in design practices. Watch it for yourself here, or read the summary below:

Let’s first look at ML — the tech “neutrality” killer

Until recently, it was very common for technologists to claim that technology was a neutral tool, and that it was up to users to do good or bad things with it. This point of view has been critiqued within technology studies for decades, but it remained pervasive in highly technical fields like software engineering. Increasingly, though, it is rare to hear someone claim that technology, or machine learning specifically, is neutral or free of human values. It has become clear that AI and ML reify the existing biases and disparities in the data they are trained on.

So how do we avoid a crisis of inattention, producing new forms of social control and inequality, particularly when we didn’t mean to?

Katie has been focusing on this question. To understand it better, she conducted a participant observation study, using anthropological methods to examine how engineers in a lab grappled with the ethics of the data collection they were performing. From this study came an organizing concept called “values levers.”

What are Values Levers?

“Values levers” are work practices or design practices that surface ethical issues within technical work and make particular human values relevant to it. These are the “aha” moments when ethical issues become relevant to design. The idea is that technical work can often bulldoze over the social or socio-technical, but there are moments in design when the social becomes really relevant. Finding and amplifying these “aha” moments has become a huge part of Katie’s work. Examples include interpreting policy for design, engaging with users, and interpreting context documents, among others.

Problems in AI Design: Privacy

Privacy is one of the first values that tend to surface when people start talking about the big data that fuels mobile applications and machine learning. So much of our data is tracked and shared! So what do we do about it?

Katie was interested in how mobile application developers grapple with this question, because app developers have a lot of leverage to make ethical decisions about what data to collect through their apps, how long to keep it, and whether to share or sell it. She became curious about when and how they discuss and debate that power. For her study, she turned to two online forums where mobile app developers gather: iPhone Dev SDK and XDA. She used critical discourse analysis, a method that examines the way people talk about their practices, to analyze threads in these forums, focusing on how developers describe, justify, and legitimize what they do.

What she found was that for iOS developers, navigating Apple’s App Store approval process was by far the single most common trigger for data discussions. Developers would write in for advice from their peers whenever their app got rejected from the App Store, or when they got frustrated or had trouble interpreting Apple’s policies. Interestingly, Android developers at the time did not face comparable policy constraints around data collection, and there was far less discussion among them about privacy. However, there were data ethics and privacy discussions in the Android forum triggered by user concerns: users would request data protection features from particular developers or critique the permissions of a particular app, and in response, developers would adopt more restrictive data practices.

UX can introduce Privacy Levers

  1. Be more attentive to the policies that surround data collection in your context. UX professionals can scaffold design in organizational policy, for example by requiring privacy impact assessments for particular forms of data collection. Even though these might be local policies, they can be really impactful for sparking debates around socio-technical and human values issues. They can also look to the increasing number of federal and state laws that may apply to data collection in their context.
  2. Look for technical constraints in your ecosystem or in the infrastructures you’re working in. Is there something your team wants to do that is technically difficult, or a kind of data they want to collect that is hard to collect? If so, are there social or socio-technical reasons behind that difficulty? Make sure you ask these questions.
  3. Incorporate the voices and concerns of users in design. The more we get a broad diversity of voices into the design process, the more likely human values are to surface and share space with technical values.

Problems in AI Design: Fairness

The second human values issue that is very salient to AI is fairness. A range of examples from the last five years, plus some really important academic work, has highlighted the ways in which AI learns bias and discrimination from the data that fuels it. Parole algorithms, automatic image cropping, learning chatbots, HR tools, ad delivery, and image search have all been affected by the use of historical data with historical biases built in. For example, Amazon had a recruiting tool that learned to discriminate against candidates who had women’s colleges and women’s social organizations on their resumes, because it was trained on the resumes the company had already received for software engineering positions, which historically came mostly from men.

So how do we encourage the kind of reflection on the context of data that helps spot and mitigate that kind of bias?

Katie’s student Karen Boyd worked on this very question in her dissertation. Karen was interested in how machine learning engineers grapple with bias in training datasets that they didn’t create but inherited. Because data sharing is common in machine learning, a group of researchers proposed that shared training datasets should come with a datasheet: a document that specifies a dataset’s motivation, composition, collection process, and recommended uses.
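To make the idea concrete, here is a minimal, hypothetical sketch of how a datasheet’s core fields might be captured as a structured record and shipped alongside a dataset. The field names, example values, and JSON output are illustrative assumptions, not the exact schema from the datasheets proposal:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class Datasheet:
    """Illustrative record documenting a shared training dataset (field names are assumptions)."""
    motivation: str                 # why the dataset was created
    composition: str                # what the instances are and whom they represent
    collection_process: str         # how the data was gathered, including consent
    recommended_uses: str           # what the dataset is (and is not) suited for
    known_limitations: list = field(default_factory=list)

# Example: a datasheet for the kind of face-image dataset described in Karen's study
sheet = Datasheet(
    motivation="Benchmark face detection for an internal research project.",
    composition="Face images that are overwhelmingly of white, male subjects.",
    collection_process="Images gathered without the explicit consent of the people depicted.",
    recommended_uses="Research prototyping only; not for deployed identification systems.",
    known_limitations=["Severe demographic skew", "No subject consent"],
)

# Write the datasheet next to the dataset so anyone who inherits it sees the context
with open("datasheet.json", "w") as f:
    json.dump(asdict(sheet), f, indent=2)
```

Even a lightweight record like this travels with the data, so the ethical context remains visible to whoever inherits the dataset.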

Karen conducted a think-aloud study in which she gave ML engineers a problematic dataset: a set of face images that appeared to be overwhelmingly white and male and that, in addition, was marked as having been collected without the explicit consent of the people depicted. She then gave all of her participants a problem to solve with that data. Half of her participants were also given datasheets describing the motivation, composition, collection process, and recommended uses of the data.

She found that the datasheets helped trigger recognition of the ethical issues. The vast majority of participants who had the datasheet mentioned ethical issues unprompted, without her having to bring them up, while the ML engineers without the datasheet recognized at least one ethical issue only when directly prompted during the follow-up interview. Participants who spotted an issue then tried to do something about it, or at least considered it. In addition, the datasheets particularly supported engineers in something Karen calls “particularization”: figuring out what to do about an ethical issue once you’ve spotted it.

The “context document” is a concrete values lever for engineers to spot and problem-solve around ethical issues.

The critical role of UX in AI ethics

Katie highlights that ML is never just a technical problem; it is always a socio-technical one. UX professionals are well trained and well positioned to bring that “socio expertise” to ML projects. UX professionals and researchers are experts in context and can:

  1. Make ethical principles visible on design teams.
  2. Interpret ethical principles as design-relevant.
  3. Find places to concretize human values in actions, processes, and policy.

As we continue into the field as UX practitioners, it is important to remember that we design the overall experience of the product, including the AI. Katie Shilton’s top takeaway for the MLUX community is to find others who support you in advocating for ethical principles in design and in championing human-centered AI design. A big thank you to Dr. Katie Shilton for sharing her knowledge and expertise with our group, and to Aparna Gokhale for writing up this article!

About the Machine Learning and User Experience (“MLUX”) Meetup

We’re excited about creating a future of human-centered smart products, and we believe the first step is to connect UX and Data Science/Machine Learning folks so they can get together and learn from each other at regular meetups, tech talks, panels, and events (held remotely).

Interested in learning more? Join our meetup, be the first to know about our events by joining our mailing list, watch past events on our YouTube channel, and follow us on Twitter (@mluxmeetup) and LinkedIn.
