The “Problem of Many Hands”: Part 1

Is it a useful concept in AI Governance?

Sebastien Dery
3 min read · Jan 5, 2023

I’m practicing my writing in a more academic style. I came across the “Problem of Many Hands” in a few papers on AI ethics, several of which lean on the concept as a source of confusion in governance. My TLDR take is that this view leans too much into abstraction and forgets that AI systems are software designed, built, and released by people and entities. The chain of accountability is therefore clearer than the concept suggests.

The “Problem of Many Hands” (PMH) has emerged as a central theme in debates about the governance and control of artificial intelligence (AI). According to the PMH framework, the development and deployment of AI systems involve a complex network of actors, including governments, businesses, research institutions, and civil society organizations, among others. These actors are said to have diverse and often conflicting interests, and the challenge of AI governance is to find ways to coordinate and harmonize their efforts. Who is responsible when AI fails us?

While the PMH framework has been useful in highlighting the complexity of the AI landscape, I argue that it is based on a flawed premise: the idea that AI systems are fundamentally different from other types of software and therefore require a dramatically different governance structure.

I first seek to challenge this assumption, arguing that the PMH framework rests on a category error that leads to a false dichotomy between centralized and decentralized control.

The Language of AI

I will assume, and hold as true throughout this essay, that AI technologies do not currently meet any criteria of agency, and hence do not meet the preconditions for responsibility. This is a position that many in AI safety research also currently hold (Yudkowsky, 2014). I think this position is both well supported and important in reframing the problem of many hands. To see why, we must pay attention to the language we use around AI.

Consider the following two sentences:

  1. “The AI system made a mistake.”
  2. “The programmer made a mistake in the AI system.”

Which of these better reflects reality? If we think that AI systems are autonomous agents, then Sentence 1 is the appropriate choice. If we think that AI systems are pieces of software, then Sentence 2 is more accurate.

In other words, the first sentence implies agency on the part of the AI system. This colloquialism can be found in many subtle places. For example, when an AI system is said to “learn” or “decide” on its own, this anthropomorphism implies agency. When we refer to a solution as “AI-driven” we attribute a form of agency.
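To make this concrete, here is a minimal, hypothetical sketch (all names, numbers, and the loan scenario are mine, purely for illustration) of what “the AI decided” typically looks like once you open the software. Every consequential element is a human choice:

```python
# Hypothetical sketch: the "decision" attributed to an AI system is a
# chain of human design choices encoded in ordinary software.

from sklearn.linear_model import LogisticRegression

# A person chose the training data and which features the model sees.
X_train = [[30_000, 2], [45_000, 0], [90_000, 1], [120_000, 0]]  # income, past defaults
y_train = [0, 0, 1, 1]                                           # historical approvals

model = LogisticRegression().fit(X_train, y_train)

# A person chose this threshold; moving it from 0.7 to 0.5 changes
# who gets approved. The model did not "decide" anything here.
APPROVAL_THRESHOLD = 0.7

def approve_loan(applicant):
    score = model.predict_proba([applicant])[0][1]
    return score >= APPROVAL_THRESHOLD

# "The AI rejected the applicant" really means: a model trained on data
# someone selected produced a score, and a person-chosen rule turned
# that score into an action.
print(approve_loan([40_000, 1]))
```

Saying “the model decided” hides the feature selection, the training data, and the threshold, each of which has an identifiable author.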

Another common place this language appears is in discussions of explainability and transparency.

Consider the following:

  1. “The AI system is not transparent/explainable.”
  2. “We don’t understand how the AI system works.”

The first sentence again implies agency on the part of the AI system, but in both sentences we are blind to the decision to use AI as an integral part of the system in the first place. This makes it easy to forget that the use of AI technologies is not an inevitability but a design choice, subject to constraints and forces that we could legitimately want to hold accountable.
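The same point applies to opacity itself. Whether a system is explainable is often downstream of a human choice about which model family to deploy. A hedged sketch, with hypothetical names, of what that choice can look like in code:

```python
# Hypothetical sketch: opacity is often a design choice, not an
# intrinsic property of "the AI". Someone picks the model family.

from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

USE_OPAQUE_MODEL = True  # a human decision, made under constraints
                         # (accuracy targets, deadlines, incentives)

def build_model():
    if USE_OPAQUE_MODEL:
        # A large ensemble is harder to inspect; "the AI is not
        # transparent" starts at this line, not inside the model.
        return GradientBoostingClassifier()
    # Coefficients are directly readable; same task, different choice.
    return LogisticRegression()
```

Neither branch is forced on us. Blaming “the AI” for being opaque obscures the actor who set the flag, and the pressures under which they set it.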

The language of autonomy and agency is not just a figure of speech. How we think about algorithms matters because it affects how we regulate them. If we conceptualize algorithms as agents, then it follows that they should be treated like people for the purposes of the law. This linguistic choice matters because treating an algorithm as an autonomous agent naturalizes its power and forecloses critical analysis of why the algorithm was created and whose interests it furthers. It also elides the fact that algorithms are created by people to achieve specific goals.

The first step in critically evaluating the problem of many hands is to avoid language which implies agency on the part of AI systems. In Part 2, I will discuss AI systems as pieces of software and make no claims about their agency. We’ll explore the implications of this perspective for the governance and development of AI.
