Society-in-the-Loop

Programming the Algorithmic Social Contract

Iyad Rahwan
Published in MIT MEDIA LAB
Aug 12, 2016


Update (July 24, 2017): An expanded version of this article will appear in a special issue of “Ethics and Information Technology” on “AI & Ethics”. A free pre-print can be found here.

Joi Ito, the MIT Media Lab's director, recently published a thoughtful essay titled “Society-in-the-Loop Artificial Intelligence,” and has kindly credited me with coining the term. Now that it is out there, I wanted to elaborate a little on what I mean by “society in the loop,” and to highlight the gap that it bridges between the humanities and computing.

Human-in-the-Loop

What I call “society in the loop” is a scaled-up version of an old idea that puts the “human in the loop” (HITL) of automated systems. In HITL systems, a human operator is a crucial component of a control system, handling challenging tasks of supervision, exception control, optimization and maintenance. Entire fields of study focus on how best to engineer HITL systems to optimize performance, making the most of human judgment while avoiding human limitations, such as susceptibility to information overload or systematic cognitive bias.

Recently, a number of articles have been written about the importance of applying HITL thinking to Artificial Intelligence (AI) and machine learning systems (e.g. see this O’Reilly report for a few examples). HITL AI has been going on for a while. For example, many apps learn from your behavior in order to serve you better (e.g. by predicting the next word you’re going to type). Similarly, when you mark an email as “spam” in Gmail, you are one of many humans in the loop of a complex machine learning algorithm (specifically, an active learning system), helping it in its continuous quest to improve the classification of email as spam or non-spam.
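
To make the mechanics concrete, here is a minimal sketch of such an uncertainty-based active learning loop in Python, using scikit-learn. The emails, labels, and query strategy are all illustrative; this is not a description of Gmail’s actual system.

```python
# A toy human-in-the-loop active learning cycle for spam filtering.
# All data and model choices below are illustrative, not Gmail's.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# A tiny labeled seed set and a pool of unlabeled emails (invented data).
seed_texts = ["win a free prize now", "meeting rescheduled to 3pm"]
seed_labels = [1, 0]  # 1 = spam, 0 = not spam
pool = ["claim your free reward", "lunch tomorrow?", "urgent prize waiting"]

vectorizer = TfidfVectorizer()
model = LogisticRegression()
model.fit(vectorizer.fit_transform(seed_texts), seed_labels)

# Query step: ask the human to label the email the model is least sure about.
probs = model.predict_proba(vectorizer.transform(pool))[:, 1]
query_idx = int(np.argmin(np.abs(probs - 0.5)))  # prob near 0.5 = most uncertain
print(f"Please label: {pool[query_idx]!r}")

# The human's answer (e.g. clicking "spam") becomes a new training label,
# and the model is retrained on the enlarged set, closing the loop.
```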

Of course, the importance of HITL is not limited to improving email spam filtering or other classification tasks. Various forms of HITL can play a crucial role in domains ranging from medical diagnosis to robotic warfare. And engineering those systems is a non-trivial task.

Society-in-the-Loop of Governance Algorithms

What happens when an AI system does not serve a narrow, well-defined function, but a broad function with wide societal implications? Consider an AI algorithm that controls billions of self-driving cars; or a set of news filtering algorithms that influence the political beliefs and preferences of billions of citizens; or algorithms that mediate the allocation of resources and labor in an entire economy. What is the HITL equivalent of these governance algorithms? This is where we make the qualitative shift from HITL to society in the loop (SITL).

While HITL AI is about embedding the judgment of individual humans or groups in the optimization of narrowly defined AI systems, SITL is about embedding the judgment of society, as a whole, in the algorithmic governance of societal outcomes. In other words, SITL is more akin to the interaction between a government and a governed citizenry. Modern government is the outcome of an implicit agreement — or social contract — between the ruled and their rulers, aimed at fulfilling the general will of citizens. Similarly, SITL can be conceived as an attempt to embed the general will into an algorithmic social contract.

In human-based government, citizens use various channels — e.g. democratic voting, opinion polls, civil society institutions, social media — to articulate their expectations to the government. Meanwhile, the government, through its bureaucracy and various branches, undertakes the function of governing, and is ultimately evaluated by the citizenry. Modern societies are (in theory) SITL human-based governance machines. And some of those machines are better programmed than others.

Similarly, as more and more governance functions get encoded into AI algorithms, we need to create channels between human values and governance algorithms.

The Algorithmic Social Contract

To implement SITL, we need to know what types of behaviors people expect from AI, and to enable policy-makers and the public to articulate these expectations (goals, ethics, norms, social contract) to machines. To close the loop, we also need new metrics and methods to evaluate AI behavior against quantifiable human values. In other words: We need to build new tools to enable society to program, debug, and monitor the algorithmic social contract between humans and governance algorithms.
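
As a toy illustration of what “programming” and “monitoring” such a contract could look like, the Python sketch below encodes two hypothetical societal norms as machine-checkable predicates and evaluates them against an invented log of algorithmic decisions. Everything here (the norms, the log format, the thresholds) is an assumption for illustration, not a proposal for how the contract must be encoded.

```python
# A toy "algorithmic social contract": societal expectations written as
# machine-checkable predicates, evaluated against a decision log.
# The norms, log format, and thresholds are all hypothetical.

decision_log = [
    {"speed_kmh": 48, "zone_limit": 50, "near_school": False},
    {"speed_kmh": 55, "zone_limit": 50, "near_school": True},
]

# "Program" the contract: each norm is a predicate over a single decision.
norms = {
    "respect_speed_limit": lambda d: d["speed_kmh"] <= d["zone_limit"],
    "slow_near_schools": lambda d: not d["near_school"] or d["speed_kmh"] <= 30,
}

# "Monitor" the contract: evaluate logged behavior against each expectation.
for name, holds in norms.items():
    violations = [d for d in decision_log if not holds(d)]
    print(f"{name}: {len(violations)} violation(s) out of {len(decision_log)} decisions")
```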

Implementing SITL control in governance algorithms poses a number of difficulties. First, some of these algorithms generate what economists refer to as negative externalities — costs incurred by third parties not involved in the decision. For example, if autonomous vehicle algorithms over-prioritize the safety of passengers — who own them or pay to use them — they may disproportionately increase the risk borne by pedestrians. Quantifying these kinds of externalities is not always straightforward, especially when they occur as a consequence of long, indirect causal chains.
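
A toy model makes the externality visible. In the sketch below, a hypothetical vehicle controller chooses the maneuver that minimizes a weighted sum of expected harms; all of the numbers are invented, but as the weight on passenger safety grows, the chosen maneuver shifts risk onto the pedestrian, who had no say in setting the weight.

```python
# Toy externality model: a controller picks the maneuver minimizing a
# weighted sum of expected harms. All harm values are invented.

maneuvers = {
    # maneuver: (expected passenger harm, expected pedestrian harm)
    "brake_in_lane": (0.30, 0.05),
    "swerve": (0.05, 0.40),
}

def choose(passenger_weight):
    """Return the maneuver with the lowest weighted expected harm."""
    return min(
        maneuvers,
        key=lambda m: passenger_weight * maneuvers[m][0]
        + (1 - passenger_weight) * maneuvers[m][1],
    )

for w in (0.3, 0.5, 0.8):
    m = choose(w)
    print(f"passenger weight {w:.1f}: {m}, pedestrian harm {maneuvers[m][1]:.2f}")
```

At low weights the controller brakes in its lane; past a certain weight it swerves, and the pedestrian’s expected harm jumps from 0.05 to 0.40 even though the pedestrian was never party to the decision.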

Another difficulty with implementing SITL is that governance algorithms often implement implicit tradeoffs. Human expert-based governance already implements tradeoffs. For example, reducing the speed limit on a road reduces the utility of drivers who want to get home quickly, while increasing the overall safety of drivers and pedestrians. It is possible to completely eliminate accidents — by reducing the speed limit to zero and banning cars — but this would also eliminate the utility of driving. Through a constant learning process, regulators attempt to strike a balance that society is comfortable with. Citizens need means to articulate their expectations to governance algorithms, just as they do with human regulators.
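
The same kind of tradeoff can be written down explicitly. In the sketch below, a regulator, human or algorithmic, picks the speed limit that maximizes a toy welfare function; the functional forms and the safety weight, which is where society’s preferences would enter, are entirely invented.

```python
# Toy tradeoff: pick the speed limit maximizing welfare = travel utility
# minus weighted accident harm. Functional forms and weights are invented.

def welfare(speed_limit, safety_weight):
    travel_utility = speed_limit             # faster trips, more utility
    accident_harm = 0.01 * speed_limit ** 2  # harm grows faster than speed
    return travel_utility - safety_weight * accident_harm

# A zero limit eliminates accidents but also all utility; the optimum
# depends on how much weight society places on safety.
for safety_weight in (0.5, 1.0, 2.0):
    best = max(range(0, 131, 10), key=lambda s: welfare(s, safety_weight))
    print(f"safety weight {safety_weight}: optimal limit {best} km/h")
```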

The SITL Formula

To summarize, building SITL systems requires implementing two distinct processes. First, just like HITL, it requires human oversight over decisions made by algorithmic and data-driven systems. Second, unlike HITL, SITL also requires negotiation and enforcement of tradeoffs between the goals of different stakeholders in society. That is:

Society-in-the-Loop = Human-in-the-Loop + Social Contract

or in short:

SITL = HITL + SC

I find this high-level description attractive because it abstracts away from the discussion of which values should be universal, and which should be culture-specific. It simply states that governance algorithms must be managed in the same way we manage our relationship with government — which means that all of the conceptual tools from social contract theory and political philosophy can come in handy.

The SITL Gap

Why are we not there yet? There has been a flurry of thoughtful treatises on the social and legal challenges posed by the opaque algorithms that permeate and govern our lives. The most prominent of those include Frank Pasquale’s The Black Box Society, and Eli Pariser’s The Filter Bubble. While these writings help illuminate many of the challenges, they often fall short on solutions. This is because we still lack mechanisms for articulating societal expectations (e.g. ethics, norms, legal principles) in ways that machines can understand. We also lack a comprehensive set of mechanisms for scrutinizing the behavior of governing algorithms against precise expectations. Putting society in the loop requires us to bridge this gap between the humanities and computing.

An important component of this picture is that both human values and AI are in constant co-evolution — something that Danny Hillis alerted me to. Thus, the evolution of technical capability can irreversibly alter what society considers acceptable — think of how privacy norms have changed because of the utility provided by smartphones and the Internet.

The Way Ahead

An increasing number of researchers from both the humanities and computer science have recognized the SITL gap, and are undertaking concerted efforts to bridge it. These include novel methods for quantifying algorithmic discrimination, approaches to quantify bias in news filtering algorithms, surveys that elicit the public’s moral expectations from machines, means for specifying acceptable privacy-utility tradeoffs, and so on.
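
To give one concrete example, a common family of discrimination metrics compares a classifier’s favorable-decision rates across groups. The sketch below computes a demographic parity gap on invented data; real audits use richer metrics and actual decision logs.

```python
# Toy discrimination metric: the demographic parity gap between two
# groups' favorable-decision rates. Decisions and groups are invented.

decisions = [1, 0, 1, 1, 0, 1, 0, 1]  # 1 = favorable outcome
groups = ["A", "A", "A", "B", "B", "B", "B", "A"]

def positive_rate(group):
    outcomes = [d for d, g in zip(decisions, groups) if g == group]
    return sum(outcomes) / len(outcomes)

gap = abs(positive_rate("A") - positive_rate("B"))
print(f"demographic parity gap: {gap:.2f}")  # 0.00 would mean parity
```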

The Age of Enlightenment marked humanity’s transition towards the modern social contract. Narrowing the SITL gap may bring humanity closer to realizing a new, algorithmic social contract.
