AI and the Administration of Welfare

Lizzie Hughes
surveillance and society
3 min read · Nov 9, 2023

--

In this post, Mike Zajko reflects on his piece ‘Automated Government Benefits and Welfare Surveillance’ which appeared in the 21(3) issue of Surveillance & Society.

Image from Unsplash.

My article, ‘Automated Government Benefits and Welfare Surveillance’, in Surveillance & Society’s special issue on AI & Surveillance, started off as a paper for the 2022 SSN conference in Rotterdam and drew on my ongoing research into AI and digital public services. Several years ago, Virginia Eubanks wrote the influential book Automating Inequality, and Philip Alston warned that we were “stumbling zombie-like into a digital welfare dystopia”. Back in 2018–19, these voices were concerned about the use of digital systems and algorithms to administer the welfare state, and it seemed that AI was on the verge of becoming the next big thing in public administration. I wanted to study the extent to which this had actually happened, and whether the use of AI had really changed things in government.

I’ve been interested in government adoption of AI (primarily in Canada) since 2018, coinciding with a wave of excitement and concern about these technologies in government and beyond. One of the problems with this “AI hype” has been the variety of phenomena being labelled as AI, when some may be relatively simple decision-making algorithms or government IT systems. Vendors brand products as AI to make them sound more advanced, while critics of automation use the AI label to draw on dystopian imaginaries of machine control. Both forms of AI hype are bolstered by the fact that the actual technologies can be quite opaque: not necessarily because of their complexity, but because they are shrouded in corporate or government secrecy.

These days, when people talk about “AI”, they are often referring to systems based on large language models (like ChatGPT), and more basic chatbots have been one of the leading uses of AI in public services for some time. Most fundamentally, these are technologies based on “machine learning”, which has also become widely deployed in auditing and automated fraud classification. I chose to write largely about the Netherlands, where there has been a lot of discussion of problems with fraud classification as part of welfare surveillance in recent years, but my objective with this piece was to link the high-tech frontier of the so-called digital welfare state to a longer history of welfare surveillance, scrutiny, and suspicion of those in greatest need.

Governments have been somewhat cautious about deploying machine-learning-based technologies to make decisions about people’s lives. However, these technologies work best in domains where there are large volumes of data to work with and some perceived need to identify hidden patterns, which is particularly the case for investigations of “welfare fraud” — a term that suggests criminality, but can be loosely applied whenever someone receives benefits in a way that a government agency thinks is wrong. Because people receiving government assistance are treated with greater suspicion and less concern for their privacy, they are particularly susceptible to being punished by an unaccountable algorithm. This has resulted in numerous scandals, each causing great harm to thousands of people at a time. Whether or not some form of AI is involved, we are likely to see similar problems with digital welfare surveillance programs in the future.
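
To make the mechanics concrete, here is a minimal, hypothetical sketch of what machine-learning-based fraud classification can look like: a model trained on the outcomes of past investigations assigns each case a risk score, and an administrative threshold decides who gets flagged. Everything here (the feature names, the data, the threshold) is synthetic and illustrative, and does not reproduce any specific system discussed in the article.

```python
# A minimal, hypothetical sketch of machine-learning risk scoring for
# "welfare fraud" classification. All features, data, and thresholds
# are synthetic and illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic case records: [reported_income, household_size, months_on_benefits]
X = rng.normal(loc=[1200, 2, 18], scale=[400, 1, 10], size=(1000, 3))

# Synthetic labels from past investigations (1 = case was flagged).
# In real systems these labels inherit the biases of past enforcement:
# the model learns who was investigated, not who committed fraud.
y = (rng.random(1000) < 0.1).astype(int)

# Train a simple classifier that turns each case into a risk score.
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)

# Score a new benefits recipient. The flagging threshold is an
# administrative choice, usually invisible to the person being scored.
applicant = np.array([[900.0, 3.0, 24.0]])
risk = model.predict_proba(applicant)[0, 1]

THRESHOLD = 0.5
if risk >= THRESHOLD:
    print(f"Flag for investigation (risk score {risk:.2f})")
else:
    print(f"No flag (risk score {risk:.2f})")
```

The point of the sketch is that the consequential decisions (which features are used, how the training labels were produced, where the threshold sits) are design choices made inside the agency or its vendor, and they typically remain invisible to the people being scored.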

--

Lizzie Hughes
Associate Member Representative, Surveillance Studies Network