How Algorithms Are Harming Child Welfare Agencies and the Kids They Serve

Algorithms have added more barriers to the evidence-based decision-making children in the care of these agencies so badly need.

Data & Society: Points
Apr 12, 2023


By Devansh Saxena and Shion Guha

Image: Seika via Flickr

Child welfare agencies in the United States are increasingly looking to invest in artificial intelligence systems to aid in their efforts to provide consistent and objective decisions, lower costs, and better outcomes for families. A 2021 ACLU study found that jurisdictions in at least 11 states are currently using predictive algorithms, while 26 others are considering the use of such tools. With severely limited resources, burdensome workloads, and high staff turnover, many agencies are looking to cross-sector public services data to develop these systems and are doing so through public-academic partnerships or by contracting with tech startups.

Child welfare agencies face unique challenges: They are often criticized when they separate children from their parents and place them in foster care; they also face scrutiny when they fail to remove children from unsafe homes. To address this two-sided problem, academic research has focused on designing AI systems to identify high-risk cases that might otherwise slip through the cracks and assessing the impact of algorithmic decisions on affected communities. But to understand how street-level decision-making is changing and whether AI systems are living up to the promises of cost-effective, consistent, and fair decision-making, we need a deeper exploration of how these systems are impacting the practice of social work and day-to-day bureaucratic processes.

To understand these complexities, we spent two years at a child welfare agency in Milwaukee, Wisconsin. There, we found that caseworkers were using several algorithmic tools on a daily basis to make high-stakes determinations, including assessing a child’s mental health needs, deciding who should care for a foster child, calculating the compensation offered to foster parents, and gauging the risk of sex trafficking. While caseworkers are not trained to “think statistically” about data, algorithms, and their related uncertainties, they are legally mandated to input data, interact with algorithms, and make critical decisions. At the same time, algorithmic decisions that do not account for systemic constraints prove impractical and unusable, leading to frustration. The result is that, as currently implemented, algorithms are harming both social work practice and the administration of the agency itself.

One such algorithm, known as CANS (Child and Adolescent Needs and Strengths), is used to assess the mental health services a child should receive, based on both their needs (e.g., anxiety, depression) and risks (e.g., suicide risk, aggression). From there, CANS predicts two secondary outcomes: the level of foster care the child should be placed in, ranked from level one to five, and the amount of financial compensation due to the respective foster parents. That is, the higher the mental health needs, the higher the level of foster care the child is placed in — and the higher the compensation. But because the system suffers from a dearth of quality foster homes, caseworkers often manipulated CANS mental health scores so that foster parents would receive higher compensation, in turn dissuading them from ending the placement. (The base compensation is low, and foster parents often end placements because they can no longer afford to care for a foster child.) Over time, such interactions with algorithms lead to cumulative distrust, as caseworkers learn that manipulating data inputs is the only way to achieve desired outcomes.
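To make the incentive structure concrete, here is a minimal, purely illustrative sketch of how an assessment of this kind can couple item scores to a placement level and a payment rate. The item names, cut points, and dollar amounts below are our own hypothetical stand-ins, not the actual CANS instrument or any agency’s rate schedule.

```python
# Illustrative sketch only: a toy model of how a needs-and-risks assessment
# can tie item scores to a placement level and a payment rate. All item
# names, thresholds, and dollar amounts are hypothetical.

def placement_level(item_scores: dict[str, int]) -> int:
    """Map a total needs/risks score to a foster care level (1 to 5)."""
    total = sum(item_scores.values())
    # Hypothetical cut points: higher total needs, higher level of care.
    if total <= 3:
        return 1
    elif total <= 6:
        return 2
    elif total <= 9:
        return 3
    elif total <= 12:
        return 4
    return 5

def daily_rate(level: int) -> float:
    """Hypothetical per-diem compensation tied to the placement level."""
    rates = {1: 30.0, 2: 45.0, 3: 65.0, 4: 90.0, 5: 120.0}
    return rates[level]

# A caseworker's original scoring (each item rated 0 to 3).
scores = {"anxiety": 2, "depression": 2, "aggression": 1, "suicide_risk": 1}
level = placement_level(scores)
print(level, daily_rate(level))   # 2 45.0

# Inflating a single item pushes the child over a cut point, into a higher
# level of care and a higher payment: the structure that invites gaming.
scores["aggression"] = 3
level = placement_level(scores)
print(level, daily_rate(level))   # 3 65.0
```

Because one inflated item can push a child across a threshold, the clinical score and the payment move together, which is precisely what makes the scores worth manipulating.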

The child welfare system has long suffered from the problem of inconsistent decision-making with respect to child safety and family maintenance. This problem is further aggravated by the system’s chronic turnover: the majority of caseworkers quit within two years, which social work research suggests is about as long as it takes to learn the ins and outs of the job. Inexperienced caseworkers who use algorithms in their work might initially believe the tools enable them to collect data and make decisions objectively and without bias. In fact, while the potential for bias is always a risk with algorithms, inexperience compounds the problem.

For instance, caseworkers continually collected data about families using quantitative risk assessments — data that was then fed into AI systems and used to make critical decisions and predictions about future cases. A closer look at the data collection processes reveals that this data was collected inconsistently and unreliably, and with inherent biases. Inexperienced caseworkers who are scoring variables such as “parent’s cooperation with the agency” and “stress level” tend to rely more on their impressions of the family than on expertise developed over time. Ironically, risk assessments have worsened the turnover problem in child welfare services. Caseworkers leave due to their frustrations with practices and working conditions that have been exacerbated by the adversarial nature of risk assessments. With high caseloads, it is also easier for caseworkers to trust the algorithmic decision to get through the day — questioning it would add more work to their already over-full plate.
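To see why inconsistent scoring matters downstream, consider the purely hypothetical sketch below, which shows how a handful of subjectively scored items can be aggregated into a single risk label. The items, weights, and cutoff are invented for illustration and do not reflect any specific assessment used at the agency.

```python
# Purely illustrative: how subjectively scored items can flow into a single
# risk label. The items, weights, and cutoff are invented for illustration,
# not taken from any actual child welfare risk assessment.

WEIGHTS = {
    "parent_cooperation": 2.0,  # scored 0 (cooperative) to 3 (uncooperative)
    "stress_level": 1.5,        # scored 0 (low) to 3 (high)
    "prior_referrals": 1.0,     # scored 0 to 3
}

def risk_label(item_scores: dict[str, int], cutoff: float = 7.0) -> str:
    """Weighted sum of item scores, thresholded into a binary label."""
    total = sum(WEIGHTS[item] * score for item, score in item_scores.items())
    return "high risk" if total >= cutoff else "low risk"

# Two caseworkers assess the same family but read "cooperation" and "stress"
# differently; the one-point differences flip the final label.
worker_a = {"parent_cooperation": 1, "stress_level": 2, "prior_referrals": 1}
worker_b = {"parent_cooperation": 2, "stress_level": 3, "prior_referrals": 1}
print(risk_label(worker_a))  # low risk  (2.0 + 3.0 + 1.0 = 6.0)
print(risk_label(worker_b))  # high risk (4.0 + 4.5 + 1.0 = 9.5)
```

When impressionistic one-point differences can flip a family’s label, the training data those labels feed into inherits the same inconsistency.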

To improve their practices, child welfare agencies have continued to rely on counsel from the federal government in the form of initiatives, regulations, and evidence-based approaches. Recent federal initiatives have laid the groundwork for more algorithmic interventions in child welfare services. Yet federal directives have continually focused on the need for child welfare agencies to adopt data-driven practices without providing adequate guidance on why and how to do so. Consequently, child welfare agencies in several states have rushed to adopt “something” in order to prove that they are employing scientific and evidence-based practices — but without ensuring that child welfare stakeholders have a strong understanding of how the model works, how to assure fidelity, and how to assess the model for issues of ethics and equity.

As agencies across the United States begin to adopt the new Comprehensive Child Welfare Information System (CCWIS) data model, tech startups have begun pitching CCWIS-based algorithmic tools to agencies as a way of meeting their accountability requirements. At the Milwaukee agency where we conducted research, there are serious provenance concerns about the data collected about children through the CANS algorithm, since that data is heavily manipulated by both caseworkers and foster parents. While CANS was reappropriated to calculate foster parent compensation — in the belief that it would be fair and unbiased, reducing costs over time — there were several cases where compensation actually increased over time because the algorithm was being gamed. Another unintended consequence of exaggerating risk scores is that foster children are now being sent to services (like individual therapy) that they don’t necessarily need. This is an added financial burden on an underfunded system.

Rather than algorithms improving how this system functions, we see in this case how algorithms have added more barriers to the evidence-based decision-making the children in the care of child welfare agencies so badly need. Instead of unbiased data leading to smoother operations and a better ability to serve those in their care, overburdened caseworkers provide data labor for AI systems while putting in extra labor to work around those systems in ways that enable them to support families and meet policymakers’ demands.

From social media platforms to gig work platforms, and now the public sector, a significant amount of invisible human labor goes into maintaining and operating AI infrastructure. While what we have described might be an unintended consequence of using algorithmic systems in this context, it is hard to call it unforeseen.

Devansh Saxena is a PhD candidate in the department of computer science at Marquette University and an incoming postdoctoral researcher at Carnegie Mellon University. His research focuses on studying algorithmic systems used in the public sector, especially the child welfare system. His current work examines collaborative child-welfare practice where decisions are mediated by policies, practice, and algorithms.

Shion Guha is an assistant professor in the Faculty of Information at the University of Toronto, where he is part of the Critical Computing Group and directs the Human-Centered Data Science lab. His work centers on algorithmic decision-making and public policy, with a focus on child welfare, criminal justice, and healthcare.
