When AI Gets It Wrong, Will It Be Held Accountable?

Do existing legal standards cover the right to contest decisions made by an AI?

RAND
5 min read · Jun 19, 2024


Digital image of scales. Image by PashaIgnatov and Just_Super/Getty Images

Elina Treyger followed the news with a growing sense of unease. Every day seemed to bring new examples of artificial intelligence making important decisions in people’s lives. What happens, she wondered, when it costs someone a job, or flags an innocent person for fraud?

Treyger is a senior political scientist at RAND, but she’s also a lawyer. And what she really wanted to know was this: Will people sue an algorithm? Will juries assign blame to something with no motives whatsoever?

She and a small team of researchers at RAND decided to find out. They designed a survey to test whether people are any less likely to challenge decisions delivered with the cold certainty of an AI. The results underscore the important role the American legal system can play in protecting people from algorithmic harm. The people in the survey were perfectly willing to take the computers to court.

“The legal system incentivizes good behavior and exposes bad outcomes through legal accountability,” Treyger said. “There’s been some concern that, if you’re on the receiving end of a bad algorithmic decision, you might not even know whom you could sue. But, as it turned out, at least in our experiment, that didn’t stop people.”

It’s not hard to find examples of bad algorithmic decisions. A few years ago, an automated system wrongly sent thousands of Michigan residents to collections for unemployment fraud. AI systems trained on historically biased data have recommended disproportionate jail time for Black defendants. A system trained on male-dominated employment data learned to penalize resumes from women.

But when the stakes are that high — when someone’s freedom or financial well-being is on the line — what recourse do people actually have when an AI gets it wrong?

The European Union recently gave people the legal right to get an explanation for any AI decisions that go against them, and to contest those decisions in court. Nothing that explicit exists in the United States. Policymakers here have mostly focused on regulating AI systems up front, making sure they cannot cause catastrophic harm before they go online. Addressing bad outcomes afterward has fallen in part to individuals being willing to take their chances in court.

But will they? Treyger is not the first legal scholar who has worried about that. For one thing, it’s not at all clear who the responsible party would be. The developers who wrote the code? The company that used it? It’s also often hard to know why an AI made a particular decision, which makes it tough to prove that it’s wrong. As a 2021 paper in Columbia Law Review put it, machine-made decisions are often “technically inscrutable and thus difficult to contest.”

Treyger and her team fielded their survey to provide the first nationally representative look at what people will actually do when faced with an unfair AI outcome. They asked 5,000 respondents to consider two scenarios.

In the first, a very well-qualified candidate applies for a job, makes it through the interviews — but then doesn’t get hired. The second scenario raises the stakes: An unemployed worker applies for benefits, gets rejected — and then gets flagged for potential fraud. For both scenarios, some of the respondents had a human making the decisions, and some had a computer.

In both scenarios, those who got the computer were much more likely to say the process was unfair and produced inaccurate results. They also were roughly 10 percentage points more likely to say it wasn’t transparent enough. The results point to what the researchers described as an “algorithmic penalty.” People seem willing to give human decisionmakers some leeway, even when they disagree with their decisions — but not computers.

The researchers then asked the respondents what they would do if they were the people in the two scenarios.

Even in the unemployment scenario, in which the outcome was not just wrong but harmful, a third of those with a human decisionmaker said they would do nothing. Fewer than a quarter of those who got the computer were willing to let it slide. They were much more likely to say they would appeal, and slightly more likely to say they would sue. Respondents in both scenarios also were much more likely to say they would join a class-action lawsuit when the decisions were made by a computer.

“That’s encouraging,” Treyger said. “It means they’re not exempting algorithms from our general moral judgments. They’re willing to take legal action to redress algorithmic harms. That can be a real mechanism for accountability.”

White respondents tended to penalize the AI more harshly than non-white respondents did on most measures. The one exception was bias, though the differences were small. That may seem surprising; studies consistently show that AI systems trained on historical data learn to repeat historical biases, especially against racial and ethnic minorities.

But when the researchers dug into the survey data, they found that non-white respondents didn’t necessarily trust the AI more when it came to questions of bias. They trusted human decisionmakers less. They didn’t penalize the AI more harshly on bias because they didn’t expect humans to make unbiased decisions, either.

The survey results suggest that people will continue to look to the courts to defend their rights even in the era of AI. Algorithmic decisionmakers might not have the intent or state of mind of humans, the researchers wrote — but that won’t prevent legal action when they cause undue harm.

Policymakers working to regulate AI should consider not just problems of bias, but also of accuracy and transparency. And they should consider spelling out a specific legal right for people to contest AI decisions, much like the European Union has.

“In some settings, you would just presume that existing standards cover that,” Treyger said. “We have a lot of antidiscrimination laws, and it seems like those would be just as applicable in an algorithmic context. But it’s not always so clear. And so one implication of our study is, yes, we should establish pretty clearly these rights in the law.”

Michigan just got a hard lesson in how willing people are to go to court when an algorithm upends their lives. Thousands of residents filed a class-action lawsuit when the state’s automated unemployment system wrongly accused them of fraud. Earlier this year, the state finalized $20 million in payments to settle the case.

Philanthropy in Action

Funding for this research was provided by gifts from RAND supporters and income from operations.

Credits

Doug Irving (writing)

This originally appeared on The RAND Blog on May 9, 2024.
