Elizabeth Brico’s daughters were removed from her custody in April 2018, in part, she believes, because of an algorithm.

Brico was living with her in-laws in Florida while her husband grappled with mental health issues. While they didn’t always get along, a tense peace had held. But when arguments threatened to boil over, Brico took a short trip to Miami to let things cool down.

“I had my phone on me and remained in text/phone contact with my in-laws, but shortly before returning they called the child abuse hotline and reported that I had disappeared without contact to use drugs,” says Brico, who has been in pharmacotherapy and counseling for five years for previous substance abuse. “My mother-in-law told them I was a heroin addict. I’d given birth to Anabelle while on prescribed methadone in Florida, so there was a record of my OUD treatment. The investigator made no attempt to contact me. She filed for a shelter petition and I learned about it the evening before the court hearing.”

Brico believes that an algorithm unfairly reinforced historical factors in her case, which weighed heavily with those who made the decision to remove her children. Florida, she writes, is one of the first states in the country to implement predictive analytics as part of its child welfare system.

The Florida Department of Children and Families (DCF) used data compiled by a private analytics company called SAS, drawn from publicly available sources including criminal records, drug and alcohol treatment data, and behavioral health data from 2007 to mid-2013, the company’s website states. The objective was to profile families and identify factors that might predict and prevent the abuse or death of a child. The findings matched less rigorous, qualitative research the DCF had conducted over the previous two years, Interim Secretary Mike Carroll said in a 2014 interview.

The system reflects a prejudice common to algorithmic systems rolled out by government: it disproportionately harms poor and vulnerable communities, who rely most heavily on public programs. Privately insured patients were excluded from the system because privacy laws protected their personal data.

Across the U.S., similar systems are being tested with varying degrees of success. In Hillsborough County, Fla., a real-time algorithm designed by a non-profit called Eckerd Connects uses data to identify children at risk of serious injury or death — yet a similar algorithm from Eckerd Connects was shut down in Illinois because of its unreliability. Another algorithmic system that SAS devised for child protection was shuttered in Los Angeles after concerns that it generated an extremely high rate of false positives.

Technology companies like to describe algorithms as neutral and autonomous, but there is growing concern about bias in these kinds of systems. Algorithms regularly misfire: Amazon had to shelve its own recruitment algorithm because it was biased against women; an algorithm used to predict recidivism was shown to be biased against people of color; and small businesses were kicked out of a USDA food stamp program over questionable fraud charges.

Just like the humans who design them, algorithms and machine learning models carry biases wherever they go. A system built today to find a company’s next CEO, for example, might be trained on recent data about top performers. That data would skew heavily toward older white men, perpetuating the structural sexism and racism that have held back better candidates.

One solution to implicit bias is to have algorithms independently audited, a growing trend in the private sector. Entrepreneur Yale Fox took this approach when setting up his business Rentlogic, which uses public inspection data to inform an algorithm that grades landlords and their buildings in New York City. Fox worked with a specialist consultancy run by Cathy O’Neil, whose 2016 book Weapons of Math Destruction first brought the issue of algorithmic bias to public awareness. Much like a financial audit, O’Neil Risk Consulting & Algorithmic Auditing (ORCAA) reviews algorithmic systems for impact, effectiveness, and accuracy.

“Algorithms are eating the world,” says Fox. “When you have a machine determining things that impact human lives, people want to know about it. All of these algorithms are going to have to be audited in the future, or there is going to be some sort of regulation.”

Fox says the independent audit was a simple process of giving ORCAA methodical access to Rentlogic’s code. He feels it helped build trust among his company’s stakeholders and gave his brand a measure of transparency, without opening the system so completely that landlords could game it. The audit will be repeated next year, after incremental changes and updates to the system.

Another company, a recruitment firm called Pymetrics, created its own auditing tool, AuditAI, and then shared it on GitHub for others to download for free. The firm’s audit involves checking the data of tens of thousands of candidates against a series of tests for bias.

“We might have a version of the algorithm where 80 percent of Indian women are passing, but only 40 percent of African-American women are passing,” says Priyanka Jain, head of product for Pymetrics. “We can see there is a discrepancy in the different pathways relating to ethnicity. So what we would do is say, ‘Okay, that algorithm isn’t fair,’ and go through all the different versions to find one that meets the laws laid out by the federal government’s Equal Employment Opportunity Commission.”
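To make that kind of check concrete, here is a minimal sketch of a pass-rate audit against the EEOC’s four-fifths rule. It is illustrative only: the group names and numbers are hypothetical, and this is not Pymetrics’ AuditAI code.

```python
# Minimal sketch of the kind of demographic pass-rate check Jain describes.
# Illustrative only -- not Pymetrics' AuditAI. Group names and outcomes are hypothetical.

def pass_rates(outcomes):
    """outcomes maps group name -> list of 1 (passed) / 0 (did not pass)."""
    return {group: sum(results) / len(results) for group, results in outcomes.items()}

def four_fifths_check(rates):
    """EEOC 'four-fifths rule' heuristic: each group's pass rate should be at
    least 80% of the highest group's rate, or the model version is flagged."""
    best = max(rates.values())
    return {group: (rate / best) >= 0.8 for group, rate in rates.items()}

# One hypothetical model version, mirroring the numbers in the quote above.
outcomes = {
    "group_a": [1] * 80 + [0] * 20,  # 80 percent pass
    "group_b": [1] * 40 + [0] * 60,  # 40 percent pass
}
rates = pass_rates(outcomes)
print(rates)                      # {'group_a': 0.8, 'group_b': 0.4}
print(four_fifths_check(rates))   # group_b fails the check, so this version is rejected
```

In practice, an auditor would run many candidate model versions through a check like this and keep only those that clear the threshold for every group, which is the process Jain describes.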

Experts say it is difficult to estimate how many algorithm-based decision-making tools have been rolled out across the sprawling state and federal divisions of the U.S. government, yet there is growing evidence that poorly designed algorithms are a problem.

“In many real cases of bias and injustice, the decisions are made in this way,” says Jeanna Matthews, associate professor of computer science at Clarkson University and co-chair of the ACM U.S. Public Policy Council’s working group on algorithmic accountability and transparency. “And I think there is a growing awareness of that, but there are a lot of forces that don’t want to open those boxes.”

A September 2018 report from AI Now, an NYU research institute examining the social implications of artificial intelligence, found that many health care agencies were failing to adequately assess the true costs of these systems and their potential impact on citizens.

“Many states simply pick an assessment tool used by another state, trained on that other state’s historical data, and then apply it to the new population, thus perpetuating historical patterns of inadequate funding and support,” the report states.

In August 2017, two legal scholars sought to penetrate the government’s “black box” criminal justice algorithms by filing 42 open records requests in 23 states, seeking “essential information about six predictive algorithm programs.” State entities often said they had no applicable records about the programs, or that contracts with third-party vendors prevented them from releasing information about the algorithms.

A separate study in 2016 examined the billions of dollars distributed to local law enforcement by the federal government. The research found that purchasing decisions were effectively creating policy, rather than the other way around, with no safeguards to ensure that local representatives or the public were involved in buying the technology, and no protocols governing how it was used.

There have been numerous calls for algorithmic accountability in the government space, including from AI Now, which has developed Algorithmic Impact Assessments as a practical framework for public agency accountability. A 2018 report by Harvard University’s Berkman Klein Center, which studies the internet and society, made recommendations for how government could be more transparent and accountable for the algorithms it uses. Step one involves establishing technical standards that encourage transparency and eliminate bias, and step two introduces procurement guidelines to ensure that government software meets those standards. And between those two steps is algorithmic auditing. Setting technical and ethical standards is an opportunity to rethink some of our social processes, and build a more inclusive vision of the future.

Matthews believes that anyone developing algorithms, as well as those auditing them, should examine the raw input data used to train the machine learning process, analyzing each part of the algorithm, the design process, and the source code for bias. Many of the problems are caused by biased processes and social assumptions of the past that have been transposed into automated systems. Those systems might seem efficient, but they actually perpetuate damage and prejudice.

“Researchers took pictures of dogs and wolves and used a machine learning process to see if a computer could identify the different animals correctly,” says Matthews. “In that example, the machine would highlight the snow around the wolf as the reason it was a wolf.

“So I say to people: you will be a dog in the snow. You will say it doesn’t matter that I’m in the snow — I’m still a dog, not a wolf. But the question then is, one, do you know that decision is being made about you, and second — will anyone care?”
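The failure mode in that example is a spurious correlation: in the training photos, wolves nearly always appear against snow, so the model learns the background instead of the animal. The toy sketch below reproduces the effect with synthetic data; the feature names and numbers are hypothetical, not the original study’s setup.

```python
# Toy reproduction of the dog/wolf failure Matthews describes: when the label
# is confounded with the background, a model learns "snow," not "wolf."
# All data here is synthetic and the feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

is_wolf = rng.integers(0, 2, n)                    # ground-truth label
animal_cue = is_wolf + rng.normal(0, 2.0, n)       # weak, noisy cue about the animal itself
snow_background = np.where(rng.random(n) < 0.95,   # background matches the label 95% of the time
                           is_wolf, 1 - is_wolf)

X = np.column_stack([animal_cue, snow_background])
model = LogisticRegression().fit(X, is_wolf)
print("learned weights:", model.coef_[0])          # the snow feature dominates

# A "dog in the snow": the animal cue says dog, but the background says wolf.
dog_in_snow = np.array([[0.0, 1.0]])
print("predicted wolf probability:", model.predict_proba(dog_in_snow)[0, 1])
```

An audit is meant to catch exactly this: accuracy looks fine as long as dogs stay on grass and wolves stay in snow, but inspecting the inputs and the learned weights shows the model has latched onto the wrong feature.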

Elizabeth Brico is still fighting for custody of her kids, seemingly a dog caught in the snow. She says few people have questioned the methods of child protective services, because the agency’s motivation is to protect children.

“The problem is that they are not using evidence-based methodology,” she says. “They are targeting poor people, people of color, and people with substance use disorders. Decision makers are not held accountable, and both kids and parents are being traumatized by unnecessary separations.”

Update: This story has been updated to clarify that SAS provided data rather than a system to the Florida DCF.