On Paying Attention: How to Think about Bots as Social Actors

Is a speed bump a bot?

m.c. elish
Data & Society: Points
6 min read · Feb 26, 2016


CC BY-NC-ND 2.0-licensed photo by Steve Hanna.

Every hour, @censusAmericans inserts a snapshot of an American into the Twitter stream. Based on anonymized census data from 2009 to 2013, the bot creates a mini-biography and provides a way to picture individual people within the immense dataset. As the bot’s creator, Jia Zhang, writes, “Just a few descriptors — how much they work, whom they take care of, where they were born — can give us a sense of the people around us.” The bot will keep producing these snapshots automatically until it exhausts the dataset, in about 1,760 years.
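The mechanic Zhang describes is strikingly small: pick an anonymized record, format a few descriptors into a sentence, post it. A minimal sketch of that loop (the sample records and the posting hook below are invented here for illustration; this is not the bot’s actual code or data):

```python
import random

# Hypothetical anonymized records, in the spirit of @censusAmericans;
# the real bot draws on U.S. Census microdata from 2009-2013.
RECORDS = [
    {"hours": 40, "cares_for": "two children", "born_in": "Ohio"},
    {"hours": 55, "cares_for": "an aging parent", "born_in": "Texas"},
    {"hours": 20, "cares_for": "no one else", "born_in": "Vermont"},
]

def mini_biography(record):
    """Turn one anonymized record into a one-sentence snapshot."""
    return (f"Works {record['hours']} hours a week, "
            f"cares for {record['cares_for']}, "
            f"born in {record['born_in']}.")

def post_snapshot(post=print):
    """Pick a record and hand its snapshot to a posting function.

    A real bot would pass the Twitter API's status-update call as
    `post` and run this once an hour; here it just prints.
    """
    text = mini_biography(random.choice(RECORDS))
    post(text)
    return text

post_snapshot()
```

That a bot of real public interest can reduce to a few dozen lines is part of the point the rest of this essay takes up.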

CensusAmericans is a simple and evocative bot. It makes us think about the strangers we pass on the street, the neighbors we know and don’t know, or the people we fly above as we travel from city to city. The hope is that we take notice of what’s around us in a new way.

There are other examples of such Twitter bots: @Congressedits tweets anonymous Wikipedia edits made from IP addresses inside the US Congress. Anti-abortion activists created an ill-fated bot to generate and tweet names of aborted fetuses. Al Jazeera America created a set of bots to track candidates’ positions on certain issues during the 2014 election. Of course, not all bots are benign. Bots can be used to silence dissent, exert undue influence on elections, and carry out cyber-attacks. According to one study, more than half of all Internet traffic comes from bots.

Although just a series of automated tasks, bots can be a powerful force in shaping behavior or changing a state of affairs in the world.

During this workshop our focus was to interrogate the nature of this force. What are bots actually doing? How do they do it? How do we understand the relationships between bots and their creators? As an anthropologist, I would argue that if we want to take seriously the social implications of bots, we must think about why they are unique social actors — and how they are versions of more familiar actors.

Consider, for instance, a yellow speed bump. Decades of living with increasing numbers of motor vehicles have taught us that people need to be reminded to drive carefully, and sometimes the law isn’t enough. While a stop sign is embedded within our traffic laws (and is thus a different kind of actor), a speed bump is not a legal actor but rather a manipulative actor: “Slow down or you’ll damage your car!” We can’t have crossing guards or even stop signs everywhere. So we have speed bumps to change our behavior. Like the bots described above, speed bumps, also known as “sleeping policemen,” perform a task that is simple, repetitive, and more resource-efficient than a human.

But speed bumps, as social actors, are exhibitions of condensed networks of culture and power. They are physical objects as well as instantiations of specific social processes, like wrangling financial resources and political capital. Speed bumps don’t just sprout out of the ground. They require capital — financial and social — to be brought into existence. In this way, as philosopher Bruno Latour has pointed out, speed bumps are not just concrete and paint, but complex (and often contested) manifestations of human will, from the engineers to the regulators to the residents.

The construction of speed bumps is frequently a site of contested power. Affluent communities tend to get more speed bumps built, and built more quickly, than less affluent neighborhoods. Speed bumps are political, not just because they involve local municipalities, but also because they involve behavior in the public sphere. When we think about a speed bump as a social actor, we can see all the ways in which non-human actors become significant extensions of human intention.

CC BY-SA 2.0-licensed photo by Andrew Bowden.

Is a bot a speed bump?

How does our perspective change if we think about bots as a kind of speed bump?

Speed bumps are social actors in service to a public good, keeping people safe. They change people’s behavior. Bots, too, can shape behavior and influence how people make decisions, and the manipulation of behavior is not necessarily a bad thing. Bots, like bumps, are generally created to act upon someone or something else, to intervene in or change a state of affairs.

While both act in public — the speed bump on shared roads, the bot on the Internet — the processes of creating bots and bumps differ significantly. Only certain institutions are authorized to build speed bumps: a city government, or a housing board if the road is private. Ordinary citizens are not allowed to build speed bumps on their roads; even if they do have the means to set the cement, the bump can be removed or, without continued maintenance, will deteriorate and wear away.

In contrast, a bot can theoretically be made by anyone. The means to produce a bot are available to anyone with a moderate amount of coding knowledge. This opens exciting possibilities for the democratization of action and speech on the Internet. And yet, in practice, we have seen that not all bots are created equal. Bots may not require institutional authorization, but they are subject to institutional power dynamics. And bots backed by big money or powerful governments will have more resources and wider impact.

There is also the matter of deception to consider. Speed bumps announce themselves as speed bumps with bright yellow paint. Advertisements, also intended to manipulate behavior, are in fact required to announce themselves: by law, an advertisement is not allowed to pass itself off as some other kind of content. Even so, advertisers and their clients evidently still believe that advertising works.

What is unique about bots is that they do not necessarily announce themselves as bots, and they also seem to mask human will and intention. Even if the creator behind a bot cannot predict everything it will do, it is important to keep in mind that, in the bot’s inception, a human decided what kind of data to use and how that data could be used in the bot’s program. Still, like speed bumps, bots operate in the world apart from their creator, and their actions are likely a compounding of multiple human intentions.

The days of the Terminator or R2-D2, let alone Replicants, are not yet upon us. When we look at new forms of automated or “intelligent” actors, we need to pay attention to how the humans who used to be in the picture have been moved just outside the frame. At the same time, we also need to take seriously the agency of these non-human actors. Especially for bots created to manipulate human behavior, to change a current state of affairs, we need to weigh carefully the consequences of allowing bots to hide in plain sight.

Special thanks to Robyn Caplan, Sam Woolley, and the participants of the Data & Society bots workshop for contributing to the development of the ideas presented here. — MCE

Points/talking bots: “On Paying Attention: How to Think about Bots as Social Actors” is an output of a weeklong workshop at Data & Society that was led by “Provocateur-in-Residence” Sam Woolley and brought together a group of experts to get a better grip on the questions that bots raise. More posts from workshop participants talking bots:
