Algorithmic Governance: An Emerging Phenomenon, and Your Contributions to the Discussion

Leid Zejnilovic, Zeynep Engin, Karen Yeung, Lauren Maffeo

Published in Data & Policy Blog · May 20, 2022


Data for Policy is an annual global conference offering scholars and practitioners an opportunity to share their research, experience, and evidence to collectively advance knowledge and provoke critical reflection about the impact, implications, and potential of ongoing digital transformations within government and the practice of governing more generally. This year, for the first time, the conference will be held in Hong Kong, Seattle, and Brussels. The deadline for paper submission is 1 June, and the conference dates are December 5, 9, and 13, 2022.

The conference is divided into six Standard Tracks (Areas) available in all regions, and six Special Tracks, some of which have limited geographical availability. The same Data for Policy Area Framework is also used to organize submissions to the community’s associated open-access journal, Data & Policy, published by Cambridge University Press. An integrated process is available to those considering contributing to both venues, allowing full papers arising from the conference to be published in the journal following peer review.

The purpose of this blog post is to offer those who might be interested in contributing to the Standard Track (Area 5) some background information about the core field of interest, namely, Algorithmic Governance.

What contributions would the Data for Policy Area Committee for Algorithmic Governance like to see at the conference?

The phenomenon of algorithmic governance lies at the heart of concepts that may be familiar to many scholars and practitioners, even if they are not familiar with the term itself or do not self-identify as ‘scholars of algorithmic governance.’ The term is not (yet) a term of art; on the contrary, we are open to a wide variety of terminologies, theories, methodological approaches, and questions (and forms of evidence) that might be discussed within this broad frame.

More important is our desire to foster thoughtful, penetrating analysis of, and reflection on, the central phenomenon that concerns us: governance with, by, and of algorithms. So understood, themes raised within this track encompass matters pertaining to the governance of algorithms, as well as questions about governance through algorithms.

In other words, we welcome newcomers of all shapes and sizes wishing to engage in cross-disciplinary discussion and reflection about the nature, significance, forms, and impacts of algorithmic governance in society. Contributions may range widely, from those seeking to understand the technological and societal determinants of algorithmic governance, to its particular forms and impacts in concrete real-world settings, through to its intended and unintended consequences and multi-faceted societal implications.

You could contribute to the conference with a piece about the diversity of interpretations of Algorithmic Governance, and what it means for different disciplines.

One intuitive way to think about what algorithmic governance is (and what it is not) is to take the two elements that make up the compound: governance plus algorithms. As Kersbergen and Waarden (2004) observe, governance is a ‘veritable growth industry’ that is differently appropriated by different disciplines. In very general terms, governance is the process of steering society and the economy through collective action and in accordance with common goals (Torfing et al., 2012). Building on that definition, algorithmic governance is the study of socio-technical systems in which the governance function (social) is executed in the presence of an algorithmic model (technical). In more practical terms, we may speak of algorithmic governance when algorithms are designed and deployed to perform, or facilitate the performance of, a set of tasks or functions that coordinate the behavior of others to achieve common goals (Ulbricht and Yeung, 2021). Examples range from simple parking enforcement, to the management of complex trading decisions, to life-and-death decisions in medical scenarios, and to other critical junctures for individuals, such as loan or job applications or criminal sentencing decisions.

Algorithmic Governance is an emerging phenomenon that we need to understand better, through sound theories and more empirical evidence.

Algorithmic governance offers an enormously valuable lens for facilitating deeper engagement and understanding of an important emerging phenomenon in contemporary society. While research on the topic continues to blossom, the entire field remains in its infancy. In our area description, we highlight a few topics to help potential contributors.

For example, one field of inquiry focuses on algorithms’ ‘agency’ alongside and in combination with human and institutional decision-making processes. The new types of ‘intelligence’ afforded by AI technologies, coupled with data and our increasingly ‘smart’ infrastructure, offer previously unimaginable capacity to reform our governance mechanisms and service delivery options, including the delivery of public services. But as public service delivery and decision-making are increasingly assisted or automated via algorithmic systems, fundamental challenges also emerge around assigning both the credit and the responsibility for those decisions and the outcomes they produce. While the promise of scalability, impartiality, and resource effectiveness has fostered rapid take-up, several high-profile failures demonstrate a lack of care and attention in the design and implementation of algorithmic systems for the purposes of governing, often with serious adverse consequences for individuals and the public at large. In other words, algorithmic governance is a complex, cross-disciplinary, real-world phenomenon with significant intended and unintended consequences.

What are other cases of algorithmic governance that you have been studying? What can we learn from them?

An impression of overly rushed deployment is what one gets when considering the case of Australian Centrelink’s Robodebt system. The system gained notoriety for unusually high error rates in matching citizens’ tax and employment-income data to calculate debts automatically; it became a prime example of a digital transformation in government going wrong (Knaus, 2017). The case surfaced much of the complexity, and the need for insight from multiple disciplines, inherent in designing and implementing algorithmic governance in specific domains and contexts. From a legal perspective, it raises questions about the legal significance of software programs that automate decision-making: what constitutes a ‘decision’ for legal purposes, whether it conforms with legal duties, the need to give affected individuals the opportunity to contest those decisions, and where legal, organisational, and moral responsibility for these decisions lies and with what consequences. On what basis were particular choices about the turn to algorithmic governance made, how were trade-offs between competing organisational, legal, and cost-driven imperatives struck and by whom, and how significant were those choices and what impact did they have, for whom, and in relation to what? In Robodebt’s case, the burden of blame for the errors seems initially to have been carried by the information technology staff who implemented the rules. But it did not take long for questions to be asked about whether the politicians and executives positioned higher up the organisational hierarchy should be held responsible instead of the technical developers (Chirgwin, 2017).

As algorithmic governance becomes more prominent, the question of responsibility for poor, if not discriminatory, algorithmic outcomes becomes more important.

The way the fall-out was managed by the Australian government has been symptomatic of what Paul Shetler, the Australian government’s former head of digital transformation, called the “culture of blame aversion within the bureaucracy” (Knaus, 2017). His comment pithily reveals the intertwining of organizational behavior, organizational culture and leadership, and the management of the consequences of governing by algorithms. At the same time, issues of data quality and the data-matching process require significant engineering and managerial expertise, while highlighting the importance of the existence and availability of national public records and reliable physical infrastructure for citizen service delivery. Beyond engineering, equally critical to the success of algorithmic systems for governance is human capital: individuals with the skills and expertise to develop and implement such systems. These skills include an understanding of the inner workings of information technology and, where appropriate, machine learning, but also sensitivity to the needs and concerns of end-users and others directly and indirectly affected, and a proper understanding of how the system impacts them. These systems invariably affect and redistribute burdens and benefits across people and populations, and thereby raise larger questions of justice.

Which technologies help unlock the potential of algorithmic governance?

Algorithmic governance is possible if there is ‘good’ data, the capacity to process that data, and, perhaps most importantly, the skills to adapt the technology to the context and operations that could benefit from algorithmic decision-making. Various technologies contribute to this process: those that support data generation, acquisition, availability, and management for fast processing, such as the Internet of Things, cloud computing, and digital twins; those that allow tamper-proof, secure data storage and immutable execution of tasks, such as distributed ledgers (including blockchain); and those that enable sophisticated but largely incomprehensible data analysis and decision-making, such as artificial intelligence (Engin and Treleaven, 2019). These technologies are at different levels of development and adoption: cloud computing, for example, is quite mature and widely adopted in government, while blockchain in government is still in its infancy. Which legislative frameworks exist for using these technologies, how do they differ across the world, what innovations have these technologies enabled, and how are these innovations changing the social and organizational structure of governance? These are only some of the questions related to the use of technology for algorithmic governance that may be of interest to address in your conference contributions.

What are the optimal configurations of humans and algorithms for algorithmic governance, and how does the context define these configurations?

There is ample evidence that algorithms can, in many cases, improve on and even outperform human decision-making in relation to specific functions in narrowly specified domains (e.g., Kleinberg et al., 2018). Risk-assessment tools are, in specific application contexts, allegedly ‘better’ than humans at interpreting complexity, understood in terms of producing more accurate decisions (Wan et al., 2022). But there are serious concerns that these systems may discriminate against minority groups in unfair and unacceptable ways (e.g., Angwin et al., 2016). Even when additional information about the algorithm’s outputs, such as explanations, is provided to human decision-makers, the human-computer system may not produce ‘optimal’ outputs (Zejnilovic et al., 2021). What evidence do we have about different configurations of humans and computers?

What is the perception of automated decision making?

Furthermore, automated decision-making systems in government are perceived differently by different stakeholders, depending on how they are affected, such as gains or losses of control or power, or the extent to which they are represented in the system. For example, Miller and Keiser (2021), informed by the theory of representative bureaucracy, find that passive representation of minorities may condition the attitudes they have toward automated decision-making. Do automation and algorithmic governance, in general, change the perception of the systems of governance and the government? Is there a fundamental change in public perceptions of government and governance associated with these new, emerging forms of governance?

And lastly, but equally important: what is good governance?

Particularly salient for algorithmic governance is the question of what constitutes good governance. Answering this seemingly simple question draws researchers across disciplines, policy-makers, and citizens into a discussion of how power and resources are exercised, modulated, and distributed, as well as the implications of algorithmic modalities of control and coordination for society, communities, and individuals. This discussion also includes reflection on who participates in the exercise of power and how consensus emerges, how accountability is established, the transparency of the process, the effectiveness and efficiency of the system, whether equity and inclusiveness are being achieved, and what ‘the rule of law’ should demand, and how it will manifest, as increasing algorithmic agency is embedded into decision-making processes.

The best way to take part in this discussion is to join our Data for Policy Conference!

Acknowledgments: We thank Emily Gardner for her revision of this blog post.

References

Torfing, J., Peters, B. G., Pierre, J., & Sørensen, E. (2012). Interactive Governance: Advancing the Paradigm. Oxford: Oxford University Press.

Knaus, C. (2017). “Centrelink crisis ‘cataclysmic’ says PM’s former head of digital transformation.” The Guardian.

Miller, S. M., & Keiser, L. R. (2021). Representative bureaucracy and attitudes toward automated decision making. Journal of Public Administration Research and Theory, 31(1), 150–165.

Ulbricht, L., & Yeung, K. (2021). Algorithmic regulation: A maturing concept for investigating regulation of and through algorithms. Regulation & Governance. https://doi.org/10.1111/rego.12437

Kleinberg, J., Lakkaraju, H., Leskovec, J., Ludwig, J., & Mullainathan, S. (2018). Human decisions and machine predictions. The Quarterly Journal of Economics, 133(1), 237–293.

Engin, Z., & Treleaven, P. (2019). Algorithmic government: Automating public services and supporting civil servants in using data science technologies. The Computer Journal, 62(3), 448–460.

Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). “Machine bias: There’s software used across the country to predict future criminals. And it’s biased against blacks.” ProPublica.

Zejnilovic, L., Lavado, S., Soares, C., Troya, I. M. R., Bell, A., & Ghani, R. (2021). Explainable algorithms and human decision-making: A field intervention in a public employment service. Academy of Management Annual Meeting, Best Paper Proceedings.

Wan, C., Belo, R., & Zejnilovic, L. (2022). Explainability’s gain is optimality’s loss? How explanations mislead. 5th ACM Conference on AI, Ethics, and Society.

Kersbergen, K. van, & Waarden, F. van (2004). ‘Governance’ as a bridge between disciplines: Cross-disciplinary inspiration regarding shifts in governance and problems of governability, accountability and legitimacy. European Journal of Political Research, 43, 143–171. https://doi.org/10.1111/j.1475-6765.2004.00149.x


Data & Policy Blog

Blog for Data & Policy, an open access journal at CUP (cambridge.org/dap). Eds: Zeynep Engin (Turing), Jon Crowcroft (Cambridge) and Stefaan Verhulst (GovLab)