AI and human-centered design: on criminal justice and filter bubbles

We just posted the video from our latest AI in the 6ix event (come to the next one!).

Our panel grappled with two use cases that have strong ethical implications: giving judges bail/sentencing recommendations and addressing the social media filter bubbles that facilitate the spread of divisive politics.

The bail/sentencing use case starts 7 minutes in, with Vector Institute researcher David Madras giving an introduction. You can also check out his work with colleagues on fairness in this academic paper or this slide deck.

Lindsay Ellerby (Senior Design Director at Normative) walked us through the human-centered design approach to the problem, reminding us that “the future of design isn’t just designing for people, but for people who are being augmented by machine learning algorithms.”

Whether you’re working in design or AI, you have to examine the human context. In other words, you don’t get to just turn data into models. There’s an ethnographic aspect to the project: where does the data come from and how will your system’s recommendations get used? This isn’t only the foundation of building sustainable systems that reflect our values; it’s also the key to unlocking the power of feature engineering. Too often, companies assume that data science starts with data rather than with engaging subject matter experts to extract prior knowledge, design experiments, and understand exactly what the proxies we optimize for are really measuring.

Human judges are harsher when they’re tired and hungry (image from the Economist’s summary of Danziger et al. 2011)

While a great deal of the evening was spent talking about the ethical issues involved in AI, it’s not as if humans have no bias. For example, Danziger et al. (2011) show that if you’re up for parole, you definitely want your hearing to be at the beginning of the day or right after the judges have a meal. Humans get harsher as they get tired and hungry. Here’s how Masha Krol (Experience Designer at Element.ai) put it: “This is another example where we need to be really careful of reinforcing that bias…because if you just take the data and in fact don’t take that qualitative insight into account you might in fact reinforce a behavior that isn’t necessarily just.”
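
To make that concrete, here’s a minimal sketch of the dynamic Krol is describing. Everything in it is invented (synthetic data, made-up feature names like mins_since_break); it isn’t taken from the panel or from Madras’s research. It just shows how a model fit naively on historical rulings can absorb the fatigue pattern in the data and then reproduce it:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Case merit: the signal we would actually want a model to learn.
merit = rng.normal(size=n)
# Minutes since the judge's last break: should be irrelevant to the outcome.
mins_since_break = rng.uniform(0, 180, size=n)

# Simulated historical rulings reflect both merit AND fatigue (the bias in the data).
logit = 1.5 * merit - 0.01 * mins_since_break
granted = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

# "Just take the data": fit on everything, including when the hearing happened.
X = np.column_stack([merit, mins_since_break])
model = LogisticRegression().fit(X, granted)

# Score the identical case right after a break vs. three hours into the session.
fresh, tired = model.predict_proba([[0.0, 5.0], [0.0, 175.0]])[:, 1]
print(f"P(favorable ruling) after a break: {fresh:.2f}, three hours in: {tired:.2f}")
```

The two probabilities differ only because of when the hypothetical hearing happens, not because of anything about the case: exactly the kind of behavior we’d be reinforcing if we skipped the qualitative insight.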

One of the interesting challenges in designing AI products is that people need to understand their own role. Dawson Guilbeault (Design Director at Scotia Digital Factory) described the need for an “understanding that data isn’t truth, in that [judges] are part of the creation of that data, so that the judges’ sentences are informing the models and there’s a feedback loop.”

Pathological feedback loops are what’s really behind filter bubbles: you think you’re just liking the news your like-minded friends are sharing, but you’re actually walling yourself off from alternative views and letting in more extreme stuff. We start tackling filter bubbles with a definition of the problem just after 18 minutes into the video.
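
For intuition, here’s a toy simulation of that loop. Every number in it is made up and no real platform works exactly this way; the point is only that a feed which optimizes for clicks keeps showing more of whatever you already clicked, so a small initial tilt toward one viewpoint compounds:

```python
import numpy as np

rng = np.random.default_rng(1)
n_views = 5

# The feed's belief about what you like starts out even across five viewpoints.
feed_weights = np.ones(n_views)
# You're slightly more likely to click viewpoint 0 (it's what your friends share).
click_prob = np.array([0.6, 0.4, 0.4, 0.4, 0.4])

for _ in range(2000):
    # The feed shows items in proportion to its current belief about your tastes...
    shown = rng.choice(n_views, p=feed_weights / feed_weights.sum())
    # ...and every click reinforces that belief, shaping what it shows next.
    if rng.random() < click_prob[shown]:
        feed_weights[shown] += 1.0

print("Share of your feed by viewpoint:", np.round(feed_weights / feed_weights.sum(), 2))
```

Run it and viewpoint 0 typically ends up with several times its even 20% share, not because it got any better, but because the loop keeps feeding on its own output.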

Eli Pariser coined the term filter bubble in his 2011 book on the topic, which is still very relevant today

I think the most common comment I heard afterwards was that people liked the question, “At what level do you address the filter bubble problem?” (Here’s where that starts in the video.) Here’s how the team voted:

  • Give users information about their bubbleness and actions to correct it: 1 vote
  • Change things on the backend, like how the news feed works: 1 vote
  • Have an in-house team charged with monitoring and mitigating filter bubbles: 1 vote
  • Filter bubble issues should come under government regulation: 2 votes

What I think is interesting here is how technologists can do a better job of seeing unintended consequences coming. One of my favorite ways of thinking about this is Cathy O’Neil’s Weapons of Math Destruction, where she posits that certain areas like education, justice, housing, employment, and finance matter so much to people’s lives that we should start with the assumption that our technologies might have terrible unintended consequences.

So as soon as you’re thinking about bail or sentencing, think: lots of stuff is going to go wrong. But what if you’re a social media company? Are you responsible for predicting that your platform may, say, end democracy? David has a nice discussion of how successful businesses figure out how to scale their intended consequences, but “negative externalities” don’t tend to scale as well.

How do we anticipate consequences for our recommendations? Does Netflix get more of a free pass for problematic recommendations than Facebook?

If you like to fast-forward to the audience participation sections, our Q&A period starts at 34:47. It features questions on explainability, how you even define fairness, and whether we can design systems to counteract problematic human behavior. It concludes with a nice discussion of the problem of “anchoring”: let’s say we give you a number for bail and/or reasons to set it at a certain amount. Maybe in your system the human can override the recommendation and the reasoning, but the fact remains that their actual decision will be tied to what they saw. For designers interested in UI challenges, that’s a good one. Can you elicit a judge’s reaction first, then give them a recommendation/rationale, and show how all of that interacts over time?

Tyler Schnoebelen (@TSchnoebelen) is principal product manager at integrate.ai. Prior to joining integrate, Tyler ran product management at Machine Zone and before that, founded an NLP company, Idibon. He holds a PhD in linguistics from Stanford and a BA in English from Yale. Tyler’s insights on language have been featured in places like the New York Times, the Boston Globe, Time, The Atlantic, NPR, and CNN. He’s also a tiny character in a movie about emoji and a novel about fairies.