Towards a Social-Relational Model of Digital Disability Classification

We need a system that can capture and respond to complexity in ways that place human needs at the center of technological design.

Georgia van Toorn
Data & Society: Points
9 min read · Oct 25, 2022


Image: A young woman with a bionic right hand holds and looks at her phone. (Anna Shvets via Pexels)

It starts with a tick box on a form. To assess the potential risk to vulnerable people, my university ethics committee asks whether a research project involves “people with a cognitive impairment, physical impairment, an intellectual disability, or a mental illness.” This category appears alongside those classifying several other types of “risky” bodies, including “Aboriginal and Torres Strait Islander peoples,” “women who are pregnant,” and “people highly dependent on medical care.” I place an x in the appropriate box, and, with one click, distill a vast universe of information, experiences, and subjectivities into a single data point, a single administrative category: disability. In doing so, I perform important boundary work, distinguishing bodies that demand extra attention, care, and scrutiny from those that are of no special ethical or bureaucratic significance.

This boundary work — that is, the work of delineating social categories and authenticating entry to those categories — is important not only for university bureaucracies, but for governments. Boundary work determines how public resources should be distributed in and by welfare states, as the boundaries help separate those considered to be “deserving” and “undeserving” of social support. With the rise of biometrics-based authentication and algorithmic social sorting, boundary work is becoming increasingly digitalized. In particular, disability classification and assessment is a burgeoning area of experimentation in digital statecraft. Governments are increasingly looking to technology to decide who qualifies as “disabled” — and to pin down the very concept of disability in fixed and measurable ways.

The disability category

In capitalist countries where waged work is the primary means of material subsistence, with poverty relief restricted to the neediest citizens, the bureaucratic state needs a way of distinguishing the “truly needy” from the “idle” poor. The disability category serves an important function here. Disability is one of the few culturally acceptable reasons for non-participation in work, as “genuinely” disabled people cannot be held responsible for their circumstances. Hence, unlike people who simply do not want to work, disabled people are considered to have a legitimate claim to social assistance from the state. The disability category, in other words, bestows certain privileges on people deemed deserving of monetary compensation, social services, and other forms of public aid on the grounds of disability.

Defining the disability category, and controlling its boundaries, is a techno-political problem, as Deborah Stone so powerfully argued in her 1984 book The Disabled State. The medical model posits disability as fixed materially in bodies. By tracing the ways in which states expand and contract the disability category in response to social and fiscal pressures, Stone’s historical analysis challenged the medical model, showing how disability — and the criteria used to determine disability status — function politically to resolve the problem of who should qualify for social aid:

The very notion of disability is fundamental to the architecture of the welfare state; it is something like a keystone that allows the other supporting structures of the welfare system and in some sense the economy at large to remain in place. At the same time the notion of disability is highly problematic. The problem, in brief, is that we are asking the concept of disability to perform a function that it cannot possibly perform. We ask it to resolve the issue of distributive justice (Pages 12–13).

The disability category, Stone argues, helps governments regulate the boundary between work-based and needs-based distributive systems. It serves as a “validating device” — “a test for determining exactly when each distributive system should be operative” (Page 22). By tightening the criteria used to assess disability status, states can control the relative numbers of people in each system and maintain the primacy of work, reducing the fiscal burden of welfare.

Much of the discourse around “welfare cheats,” “fraudsters,” and “dole bludgers” is founded on the presumption that people deceitfully claim disability status to exploit its special privileges and exemptions. Welfare bureaucracies therefore require a validating device free of human bias, or indeed of any undue intervention on the part of those being tested and those doing the testing.

Enter automation

My research has traced the transnational proliferation of algorithmic approaches to disability assessment over the past decade. While governments adopt different instruments to certify, quantify, and classify disability, algorithmic approaches are emerging globally as a central feature of government efforts to control access to the disability category.

Pre-digitalization, in countries where disability support formed part of the overall social safety net, determining a person’s disability status was the task of social workers and doctors. These professionals, while no doubt prone to paternalism and human bias, had recognized skills and decision-making authority. Over time, the assessment process became increasingly formulaic. Professional discretion was initially usurped by computer-based “decision-support” tools, the purpose of which was to streamline assessments, improve efficiency, and, ultimately, curb public spending. The tools were “algorithmic” to the extent that they were defined by process, consisting of “a series of tasks… which professionals could follow like a recipe.” In the United Kingdom, the boundary work of assessing disability was outsourced to tech giants Atos and Maximus. In the United States, some states privatized this function while others developed their own “in-house” disability assessment tools.

Today, algorithmic tools are ubiquitous in all domains of disability provisioning, from mental health to developmental disability, to home care and personal support, to disability benefits and employment services. In each of these domains, individuals are typically assessed through standardized, computer-administered questionnaires that collect data on the nature of their disability, their support needs, and personal and family circumstances, including informal care arrangements. Their answers are scored and then summed, with the final figure determining the amount of funding or care hours they receive.
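
To make this mechanic concrete, here is a deliberately minimal sketch in Python of the score-and-sum logic such tools apply. The domain names, scores, and funding bands below are invented for illustration; real instruments such as interRAI are far more elaborate and largely proprietary. The structure is what matters: whatever cannot be expressed as a numeric response simply does not count.

    # Illustrative only: hypothetical domains, scores, and funding bands.
    QUESTIONNAIRE_DOMAINS = ["mobility", "self_care", "communication", "informal_care"]

    def total_score(responses: dict) -> int:
        """Sum the per-domain scores from a standardized questionnaire."""
        return sum(responses[domain] for domain in QUESTIONNAIRE_DOMAINS)

    def allocate_care_hours(score: int) -> int:
        """Map the summed score to weekly care hours via fixed bands (hypothetical)."""
        if score >= 20:
            return 35
        if score >= 12:
            return 20
        if score >= 6:
            return 8
        return 0  # below the lowest band, no entitlement at all

    responses = {"mobility": 4, "self_care": 3, "communication": 2, "informal_care": 1}
    print(allocate_care_hours(total_score(responses)))  # 10 points -> 8 hours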

The human costs

So what are the human costs of this algorithmic approach to assessment, in which human discretion and political priorities still mediate the allocation of state resources?

Algorithmic gatekeeping has important distributive effects. Invariably, algorithmic tools operate to restrict access to the disability category so that fewer people can claim a political entitlement to public assistance on the grounds of disability. In Arkansas, for example, the healthcare services tech company Optum was recently contracted to deliver $835 million in Medicaid savings. The company boasted about the 66,600 assessments it performed using a resource allocation algorithm, called interRAI, as a result of which 23 percent of home-care recipients lost support to which they were previously entitled, and 31 percent were denied home care.

Algorithmic gatekeeping also discriminates against racialized and marginalized people with disability. This is clearly the case in Australia, where I have been studying the latest algorithmic techniques used to assess eligibility for disability support services funded by the National Disability Insurance Scheme (NDIS). The latest proposal from the government agency responsible for the scheme was to automate aspects of the assessment process, not by replacing humans with computers as such, but by imposing on (human) assessors algorithmic methods of need and eligibility assessment. Under the proposed model, dubbed “robo-planning,” an individual assessment would be carried out to test functionality across a range of domains (communication and learning, self-care, mobility, etc.). In each domain, individuals would be given a score, which places them into one of 400 categories or “profiles.” The category to which a person is assigned would indicate the level of funding deemed appropriate, based on a statistical averaging of what people in that category, with those attributes, typically receive. The profiles were created by amalgamating the data of everyone using the NDIS as well as a sample of 4,000 people who took part in a pilot of the proposed system.
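
A similarly stripped-down sketch shows where the averaging happens in this profile-based logic. The domain list, the bucketing rule, and the reference data here are hypothetical stand-ins, not the NDIA’s actual model:

    # Illustrative only: the bucketing rule and reference data are invented.
    from statistics import mean

    DOMAINS = ["communication_learning", "self_care", "mobility"]

    def profile_key(scores: dict) -> tuple:
        """Coarsen per-domain scores into a profile (hypothetical bucketing)."""
        return tuple(scores[domain] // 2 for domain in DOMAINS)

    def budget_for(scores: dict, historical: dict) -> float:
        """Assign the average historical budget of everyone sharing the profile."""
        peers = historical[profile_key(scores)]
        return mean(peers)  # individual need is replaced by the group average

    historical = {(1, 1, 2): [41_000.0, 38_500.0, 60_200.0]}
    scores = {"communication_learning": 3, "self_care": 2, "mobility": 4}
    print(round(budget_for(scores, historical)))  # 46567: the profile mean

A person’s own circumstances enter only through the profile key; anyone whose needs sit far from their profile’s average is, by construction, misallocated.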

The problem with this data-driven approach was immediately obvious: it is impossible to extrapolate from a generalized profile what supports a person needs to flourish in life, especially if that person doesn’t conform to the statistical “norm.” As Elizabeth Kendall et al. wrote of the difficulty of differentiating chronic conditions from disability, “It turns out people do not fit neatly into categories — but these boxes can determine who receives support and who does not.”

Disabled people are a highly heterogeneous group with differing patterns of impairment that intersect with a range of gendered, ethnic, racial, religious, sexual, and class identities. Each of these points of difference adds to the complexity of the disability experience. Women, for instance, experience multiple and highly gendered forms of disability, often intersecting with and manifesting in conditions of chronic illness and pain, violence-related injury, and mental health issues. These types of disability tend to fluctuate over time. Yet for statistical purposes the NDIS requires claimants to nominate a “primary disability” when applying for support, meaning other disabilities, comorbidities, and social factors are not captured in the data collected on disabled women applicants. A person’s “primary” disability must also be permanent, which many disabling conditions — e.g., depression and anxiety disorders, schizophrenia, cancer — are not. This might explain why women are 26 percent less likely to be deemed “disabled enough” to qualify for NDIS supports: women’s experiences controvert the notion of disability as a biological fixity, a notion which underpins the very logic of algorithmic assessment. As critics of the proposed system have argued, “functionality is not a static thing. It’s a fluid thing.”

Similarly, 24 percent of First Nations Australians live with a disability, yet they make up only 5.7 percent of NDIS support recipients. In part, this is due to the administrative burden of applying for support, which falls disproportionately on groups with the least resources. But it also has to do with the difficulty of capturing disability through discrete data categories. Algorithmic tools tasked with quantifying disability reflect Western, medical understandings of disability, where data categories reflect diagnostic and other medically defined concepts (e.g., functionality), and the social and relational origins of disablement are not captured. That is, algorithmic tools omit aspects of disability, such as poverty, that are not easily quantified. This point was made in a recent parliamentary inquiry into the proposed automation of NDIS assessments by the CEO of First Peoples Disability Network:

The expectation is that Aboriginal and Torres Strait Islander people with disability and their families have to fit into the NDIA system and that the system simply does not properly or adequately understand the real lived experience of many Aboriginal and Torres Strait Islander people with disability, which often relates to people living in extreme poverty.

Clearly, algorithmic systems of disability assessment are especially problematic for people whose experience of disability is entangled with, or made worse by, conditions of material hardship, violence, social discrimination, and other structural inequalities. As Tilmes argues, “by removing context and assigning discrete values to disability status, algorithms flatten differences between and within disabled people” (Page 7). This has very serious consequences when people attempt to claim representation (and access to resources) through state data infrastructures that negate their embodied, lived experience of disability.

A mindful approach to disability assessment

The sheer size and complexity of modern welfare states mean that some degree of standardization is required for the sake of accountability and efficiency. Yet what we are moving toward is a system of standardization defined by the imperatives of data and digital classification, rather than the needs of people. These algorithmic tools mindlessly code aspects of disability on the assumption that there is some universal standard of functionality. They are designed to reduce complexity in the way people experience and live with disability, when what is needed is a system that can creatively capture and respond to complexity in ways that place human needs, flourishing, and recognition at the center of technological design. This would be a first step towards what Jutta Treviranus calls “designing for the edge.” That is, designing not for the majority but for the “vital few” whose needs and characteristics are exceptional, and therefore typically unrepresented: “working with the vital few [aims] to address their needs from the start… mean[ing] we create a system that can weather contextual changes, does not need to be retrofitted to address excluded needs, and thereby costs less in the long term.”

Current models of disability classification merely rank and score people with disability, typically diminishing their access to social rights and resources. These digital infrastructures are in urgent need of updating, in line with modern understandings of disability as a fluid, nuanced, and socially constructed category — one that is of central importance to the welfare state, and should be treated as such.


Georgia van Toorn

post-doctoral research fellow | ARC Centre for Automated Decision-Making & Society | disability, technology, neoliberal state formations