Trauma and Automated Welfare Compliance in Australia

When the burden of proof is on those who need assistance, it can make a bad situation worse.

Lyndal N Sleep
Data & Society: Points
5 min read · Dec 14, 2022


They won’t even listen to me my situation is already I’m poor without being homeless I fear I will end up on the street the pressure is just to[o] much I don’t believe i owe this money i didn’t even receive that much from Centrelink.

I already suffer from major depression and anxiety [and] have suffered a previous trauma and this situation has made me feel like harming myself.

— Sally (not her real name), sourced from #Notmydebt, 2022

Photo by Riccardo Mion on Unsplash

On Saturday, July 20, 2019, a woman named Sally received a notice from Centrelink, the Australian government agency that at the time administered social security payments to eligible persons. The notice said she owed $24,000 for overpayment of the sickness benefit.

Sally was not alone. Between 2016 and 2020 in Australia, hundreds of thousands of social security recipients received automated debt notices as part of the new Online Compliance Intervention (OCI), commonly known as Robodebt. The OCI employed a simple algorithm that used weekly income to estimate yearly income. Because many social security recipients are in casual employment, their income is irregular. That meant many yearly income estimates were wrong.
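To make that failure mode concrete, here is a minimal sketch of the kind of income-averaging logic described above. The figures, thresholds, and function names are invented for illustration; the actual OCI calculation differed in its details and is not reproduced here.

```python
# Illustrative only: invented figures, not the actual Robodebt calculation.

WEEKS_PER_YEAR = 52

def annualise(weekly_income: float) -> float:
    """Extrapolate a single reported weekly income to a yearly figure."""
    return weekly_income * WEEKS_PER_YEAR

# A casual worker's real earnings: a few busy weeks, many weeks with no work.
actual_weekly_incomes = [900, 0, 0, 450, 0, 0, 0, 900] + [0] * 44
actual_yearly_income = sum(actual_weekly_incomes)   # $2,250

# Extrapolating from one busy week paints a very different picture.
estimated_yearly_income = annualise(900)            # $46,800

print(f"Actual yearly income:  ${actual_yearly_income:,.0f}")
print(f"Averaged estimate:     ${estimated_yearly_income:,.0f}")
# The estimate is roughly twenty times the real figure, which is how an
# eligible recipient could be flagged as having been "overpaid".
```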

For Sally, who suffers from depression and anxiety, the experience of being issued a Robodebt notice awoke feelings of vulnerability, distrust, and betrayal. The welfare system that was supposed to care for her made her feel unsafe — to the point that she felt like harming herself.

That welfare compliance is punitive in neo-liberal political cultures like Australia is not a new observation. Nor is it new that recipients decline to challenge incorrect decisions for fear of further scrutiny. Yet as the scale of Robodebt and its harms was thrust into the public spotlight, the scheme generated compassion for service users in ways unprecedented in Australia’s recent history. Those who were typically shamed in the media, dismissed with labels like “dole bludger” and “welfare slut,” were instead empathized with as “battlers” and “underdogs” (terms of affection in Australia).

What Robodebt brought clearly into the public eye was that Australia’s welfare compliance systems are not primarily concerned with the health and wellbeing of service users, a dynamic that is not only replicated but accelerated when automated technologies are used. A class action suit settled by the government out of court (with the government required to repay AU$1.2 billion in inaccurately raised debts), multiple Senate inquiries, and now a Royal Commission into the OCI are all opportunities to scrutinize the systematic violence caused by these neo-liberal welfare compliance systems.

Highlighting structural violence at this moment of technological change also serves as an opportunity to rethink some of the assumptions that have crept into welfare administrative processes. Social welfare systems like Australia’s deal with clients who present for income support, rent assistance, or emergency payments, but who may also have a history of trauma. This was Sally’s situation when she received her Robodebt notice. The notice told Sally that she needed to correct Centrelink’s records of her annual income or be issued a debt; if she did not comply, she would have to repay tens of thousands of dollars. The burden of proof was placed on Sally, the person in need of assistance, rather than on Centrelink. The Robodebt calculation failed to take into account the full picture of her vulnerability, emotional as well as financial.

Sally might have been better helped by a trauma-informed approach to service delivery. As social service workers have long recognized, those seeking help often have a history of trauma apart from the specific issue at hand. For example, a person who presents for addiction treatment might also be a survivor of child abuse. Trauma-informed services take this into account, and have safety, trustworthiness, transparency, collaboration, empowerment, choice, and intersectionality at their core. They employ processes that ensure clients have a choice over the treatment they receive and the way it is delivered. For clients who have experienced domestic or sexual violence, it is essential that services do not inadvertently replicate the controlling and harmful behaviour of a perpetrator: for example, that information gathering is not experienced as violation, and that officials are honest and straightforward so as not to replicate gaslighting.

The administrative processes that Centrelink used to inform Sally of her debt did not have safety, trustworthiness, transparency, collaboration, empowerment, choice, and intersectionality at their core. Instead, Sally was threatened that if she did not do what Robodebt said, the penalty would be tens of thousands of dollars she did not have. This looks a lot like economic abuse, in which a perpetrator threatens to steal from the victim if they do not comply with the perpetrator’s wishes, making an already profoundly stressful situation even more untenable for someone like Sally.

As social services and welfare administration become increasingly digitized, there is an opportunity to think purposefully and practically about what values should underpin the digitization of administrative decision making in welfare states. Right now, in countries like Australia, those values lead to actions that are punitive, dehumanizing, and frugal. Those who present to the state for support are treated with suspicion and shame rather than with care and respect. A trauma-informed digital welfare state could help avoid harms like those Sally experienced, by considering the entirety of her experience and the ways it affects the support she needs.

What might this look like? In non-western states, emerging approaches offer possible alternatives to the punitive western neo-liberal model. For example, South Korea’s welfare Blind Spot Identification System uses AI to identify individuals and families who are not currently receiving benefits but are experiencing hardship, and links them to services. The system uses proxies like a missed electricity or phone bill to identify those in need. It arose as an explicit (and well-publicized) governmental response to a series of “two sisters” suicides, in which malnourished mothers and daughters left suicide notes apologizing to creditors for not being able to keep up with payments. Unaware that financial support was available to them, these women were desperate to be free of their financial hardship. Such a response stands in marked contrast to Australia’s system, which uses automated technologies to identify people for their non-compliance with rules rather than for their need.
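The difference between the two design philosophies can be sketched, purely by way of illustration: the field names and rules below are hypothetical and do not describe either system’s actual implementation.

```python
# Hypothetical sketch of two design philosophies, not of either system's
# actual rules. All field names and signals are invented for illustration.

def flag_for_support(household: dict) -> bool:
    """Need-based flagging: reach out when proxies for hardship appear."""
    hardship_proxies = [
        household.get("missed_electricity_bill", False),
        household.get("missed_phone_bill", False),
    ]
    return any(hardship_proxies)

def flag_for_compliance(recipient: dict) -> bool:
    """Compliance-based flagging: pursue when averaged records disagree."""
    return recipient.get("estimated_income", 0) > recipient.get("declared_income", 0)

# The same data point, a missed phone bill, triggers an offer of help in
# one model and is simply invisible to the other.
print(flag_for_support({"missed_phone_bill": True}))      # True
print(flag_for_compliance({"missed_phone_bill": True}))   # False
```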

Dr. Lyndal Naomi Sleep is a postdoctoral research fellow in the Automated Decision Making and Society Centre of Excellence at the School of Social Science, University of Queensland.
