Back Stage at the Machine Theater

A look at the theatrics guiding user interaction


By Karen Levy and Tim Hwang

Here’s a speculation from science fiction that is rapidly becoming a real nuts-and-bolts design debate with wide-ranging implications: should self-driving cars have steering wheels?

The corporate battle lines are already being drawn on this particular issue. Google announced its autonomous car prototype last year, drawing much attention for its complete absence of a steering wheel. The reason for this radical departure? The car simply “didn’t need them.”

That is not an opinion shared by everyone racing to build commercially available self-driving automobiles. At CES this past January, Mercedes showcased its F015 autonomous prototype with a steering wheel, observing that it was still needed for “occasions when the driver wants to drive.”

But the luxury (and boredom) of manual driving isn’t the only reason that Mercedes — alongside traditional car manufacturers like Audi and GM — is keeping the steering wheel in its self-driving cars. These companies have followed a gradualist approach toward autonomy for years, replacing the human driver in piecemeal steps rather than in a single radical change. This began with technologies like adaptive cruise control and, more recently, has included systems like parking assist and automatic braking. The result has been a campaign that “slowly but surely break[s] down…potential consumer concerns while slowly but surely vetting the technology.”

For now, regulators have taken action to resolve this specific debate. By decision of the California DMV, companies will have to include a wheel when testing their vehicles on public roads to allow for “immediate manual control.” Other states are expected to follow suit.

Take a step back: a steering wheel implies a need to steer, something that the autonomous car is designed specifically to eliminate. In a near future of safe autonomous driving technologies, the purpose of the steering wheel is largely talismanic. More than actually serving any practical function, the steering wheel seems bound to become a mere comfort blanket to assuage the fears of the driver.

This is a classic problem. Consumers refuse to adopt a new technology if it visibly disempowers them or departs radically from trusted patterns of practice. This holds even when the system outperforms a human operator — as with the self-driving car, which is safer than a human driver.

The solution? Allow the interface of your technology to engage in a form of design theater.

An object can assume the form of something familiar and accepted, in line with the status quo, even as the real capabilities of the technology undergo radical change. After all, the truly functionless steering wheel in the autonomous car would simply replace a steering wheel that already sustains a myth: that it is connected to physical mechanisms within the vehicle, when it in fact provides digital input to a series of computers.

These design theaters abound in technologies both new and old. Sometimes, theaters help guide us about how to use a (physical or digital) system. Skeuomorphs are design features that evoke past iterations of the same artifact to help us figure out how to use something, or to help us to understand its function — an e-reader with “pages” that “turn,” the shutter click on a digital camera, a cigarette filter printed to look like it’s made from cork. These features can give a new system temporal continuity by retaining previously functional design characteristics as ornaments; they’re a nod to past design and the cultural legacies of technology use.

Other design theaters are aimed not at providing direct usability cues, but at smoothing technologies’ entry into social life by increasing their acceptability. An early example is the Horsey Horseless, an 1899 vehicle design intended to coexist with horse-drawn carriages. Horses were spooked by the strange new cars on the road; the Horsey Horseless was, essentially, “a car with a big wooden horse head stuck on the front of it,” which doubled as a fuel tank. It’s not clear that the Horsey Horseless was ever produced, nor that it would have worked as planned, but its intentions were clear — to present a misleading social cue (to horses!) that would help make this new contraption less scary and easier to live with.

“Placebo buttons” — buttons that look functional but actually control nothing — are a great example of another flavor of design theater: the theater of volition, which leads the user to believe that she has more control over a system than she actually does. Theaters of volition are everywhere: at crosswalks, in elevators, and even in your office thermostat.

What’s the point of these nonfunctional design features? They make users more comfortable with a new technology by giving them a sense of control, even if that sense is ultimately illusory.

The oft-cited urban legend that instant cake-mix manufacturers had to add an unnecessary step — the addition of an egg — in order for housewives of the 1950s to feel like they were still “cooking” speaks to exactly this type of theater in the design of objects. Similarly, in neurological experiments, animals show weaker stress responses to an uncomfortable situation if they’re given a lever to push so that they feel in control — even if that lever isn’t actually connected to anything. Sometimes, we need the illusion of being in command of something to feel comfortable with it — and new technologies are no exception.

Theaters abound in the digital realm as much as they do in the physical. While Facebook and Google exert a powerful influence in mediating the content you receive, they engage in the theater of the passive. By design, these systems appear as compliant agents, merely responding to your queries and browsing, without revealing the data they collect or the extent to which an algorithm shapes your experience of a site. The fact that these technologies create “filter bubbles” that mask as much as they reveal is left hidden from the user.
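
As a minimal sketch of this theater of the passive (in Python, with every author name, score, and weight invented for illustration), imagine a feed that presents itself as a neutral list but is quietly reordered by a hidden affinity score:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    recency: float  # 0.0 (oldest) through 1.0 (newest)

# Back stage: per-author affinity scores the system has learned but never shows.
AFFINITY = {"alice": 0.9, "bob": 0.2}

def rank_feed(posts):
    # Front stage: the page simply presents "your feed."
    # Back stage: posts are ordered by hidden affinity times recency,
    # not by time alone.
    return sorted(posts,
                  key=lambda p: AFFINITY.get(p.author, 0.1) * p.recency,
                  reverse=True)

feed = [
    Post("bob", "a newer post you might disagree with", recency=1.0),
    Post("alice", "an older post you already agree with", recency=0.7),
]
for post in rank_feed(feed):
    print(post.author, "-", post.text)
# alice (0.9 * 0.7 = 0.63) outranks bob (0.2 * 1.0 = 0.20);
# the reordering never surfaces in the interface.
```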

How should we think about the various forms of design theater? The answer might lie not in the contemporary world of machine learning algorithms and robotics — but in mid-century sociological theory. Erving Goffman was one of the preeminent social theorists of the modern era; his work made huge contributions to our understanding of how people communicate with one another in everyday life. In 1959, Goffman published his best-known book, The Presentation of Self in Everyday Life, which introduced the concept of dramaturgical analysis of social exchange. Goffman suggested that our social interactions can be thought of in the same terms as theatrical performances, in which there’s both a performance on the “front stage,” geared toward a particular audience, as well as a “back stage” that isn’t readily observable. Very often, front stage performances involve misrepresentations or false impressions — some of which stem from malicious intent, while others are “white lies” intended to save others’ feelings. It’s bad social form (and illegal) to pretend to be a police officer; it’s considered polite to pretend to enjoy a friend’s band.

Goffman’s theory was concerned with human-to-human interactions — how people behave and portray themselves to one another in everyday life. But Goffman’s dramaturgy holds some lessons for how we think about and understand machines, too. Just as people do, autonomous systems present themselves in certain ways, for socially strategic purposes. For machines, we might think of the back stage as the “guts” of the system — the physical and algorithmic structures that determine its underlying capabilities. The front stage is the machine’s dramaturgical performance, aimed at smoothing its social interactions with humans.

A machine’s front stage performance gets enacted through design. Just as a human provides front stage cues through her appearance and behavior (for instance, by talking with a certain degree of formality, or wearing a uniform), design provides signals for how the people around a machine should understand and interact with it. Sometimes these cues are relatively forthright: press this button to start, plug me in here. But just as humans can provide social cues that mislead others about their “true” nature, the design of a system or artifact can involve deception: a machine, like a person, can lie, omit, or mislead.

“Design lies” can serve a number of purposes, and aren’t necessarily nefarious — sometimes they’re just about making social life possible.

How should we think about the ethics of design theater? Our initial reaction might be that misleading consumers about the nature of a technology is always wrong. In lots of areas, we enforce the idea that people have a right to know what they’re buying (consider rules about honest packaging and labeling, from knowing what ingredients are in our food to being informed about the possible health consequences of exposure to certain substances). But just as humans’ front stage performances are necessary for social life to function, it’s important for technologies to integrate into social life in ways that make them usable and understandable. Though some designers find skeuomorphism ugly or aesthetically inauthentic, it’s tough to find a serious ethical problem with a design feature that’s genuinely intended to guide usability.

There also doesn’t seem to be a tremendous ethical problem with theaters designed for certain laudable social purposes, like safety and protection. Nothing makes this clearer than artificial engine noise. Because modern electric cars are so much quieter than their internal-combustion predecessors, it’s much harder for pedestrians to hear them approaching. Since we’re used to listening for engine noise as a safety cue, a silent vehicle can more readily “sneak up” on us and cause accidents. Over time, if all vehicles become silent, many of us would no doubt lose this subconscious reliance — but the consequences of losing the cue altogether can be very dangerous in the shorter term, especially for pedestrians with visual impairments. (In fact, the National Highway Traffic Safety Administration considered requiring hybrid and electric vehicles to play back a recording based on the noise made by an internal combustion engine; regulations about exactly how these quiet cars should audibly alert pedestrians about their presence are currently pending.) Law professor Ryan Calo describes these design theaters as a form of “visceral notice” intended to protect and empower people. We might understand these theaters as forms of socially benevolent deception — design lies that ultimately serve positive social ends.

But there are cultural aspects to design theaters like these, too, which can lead some users to feel unduly deceived. For instance, some muscle cars layer synthetic engine noises onto (quieter) real ones, in a nod to the nostalgic preferences of aficionados; Ford had Mustang fan clubs help it select the right aural mix for its newer models. But some die-hard fans are offended by what they see as a “lip sync” by car manufacturers: “The bellowing roar of a tire-roasting fastback on the brink is the stuff of which kids dream…Sound is part of the appeal of motoring, and when it’s not entirely real or honest, something’s lost.” Says another: “For a car guy, it’s literally music to hear that thing rumble…It’s a mind-trick. It’s something it’s not. And no one wants to be deceived.”

Theaters of volition pose the thorniest ethical questions. On one hand, we could posit that there are social benefits to the proliferation of certain technologies in society, and without certain misleading front-stage presentations, these systems wouldn’t get the social traction they need. We might all be safer and more productive if self-driving cars become the norm, and if we need a little design “help” to get there, so be it. But of course, this kind of justification runs a strong risk of paternalism and opacity.

Volitional theaters can be attractive opportunities for system creators to manipulate behavior, as well. As Natasha Dow Schüll describes in her ethnography of the gambling industry, Addiction by Design, designers of gambling machines have actively toyed with visual representations of slot reels to make the probabilities of winning seem larger than they actually are. Sometimes, a digital slot machine will present a high number of “near misses” to encourage players to keep playing. By presenting an interface that merely appears to behave like a traditional mechanical slot machine, the system engages in a theater that assures gamblers that they have more control over the odds of the game than they actually do.
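
To make the mechanics concrete, here is a minimal, hypothetical sketch of the kind of weighted “virtual reel” Schüll describes; every symbol name and weight below is invented for illustration, not drawn from any real machine:

```python
import random

# Hypothetical virtual-reel weights. The player sees six symbols that look
# equally likely; the hidden weights say otherwise. The two "blank" stops
# display the jackpot symbol sitting just above or below the payline.
VIRTUAL_WEIGHTS = {
    "JACKPOT": 1,        # the actual winning stop is rare
    "blank_above": 30,   # jackpot shown just above the payline: a near miss
    "blank_below": 30,   # jackpot shown just below the payline: a near miss
    "cherry": 10,
    "bar": 10,
    "lemon": 19,
}

def spin(rng):
    """Choose a display stop according to the hidden virtual-reel weights."""
    stops = list(VIRTUAL_WEIGHTS)
    weights = list(VIRTUAL_WEIGHTS.values())
    return rng.choices(stops, weights=weights, k=1)[0]

rng = random.Random(0)
results = [spin(rng) for _ in range(10_000)]
print("jackpots:", results.count("JACKPOT"))
print("near misses:", sum(r.startswith("blank_") for r in results))
# The reel looks like a fair mechanical wheel, but the hidden weight table
# makes near misses roughly sixty times as common as actual jackpots.
```

The spinning reel is the front stage; the weight table is the back stage the gambler never sees.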

Objects and algorithms are inescapably political, and their designs represent policy choices that can be very difficult to identify and interrogate. They are, sometimes literally, black boxes — so we have no choice but to rely on front-stage cues to figure out “who they are.” When these cues affirmatively misrepresent the machine to us, it can be impossible to have informed conversations about how these machines fit into social life, and how we should build policies around them.

Machines are likely to become increasingly dramaturgical, and the theater of volition will be perhaps the most popular performance in the near future.

The reason for this is simple: our technologies are increasingly intelligent and proactive. From the algorithmic outputs of a search engine or social network to the physical robotics that eliminate the need for a human operator, systems can and will outpace human proficiency.

Because these systems are commercial products, companies will be forced to find ways of driving adoption even as the systems claim ever larger shares of control and knowledge in the relationship between user and object. More often than not, the solution will be to ensure that these technologies perform the rituals of less proactive, less complex, and less powerful machines. They will reassure users of their understanding and influence even as they quietly strip both away.

Given these financial incentives, the upshot is a world in which we must think critically not only about the back-stage mechanics of the technology itself, but also about the types of theater that are permissible. In the case of the autonomous vehicle, perfecting the algorithm to be safe and effective for the passenger is only the first challenge. Goffman presents the second: to distinguish benevolent deceptions from maladaptive and manipulative ones, and to decide which to permit. As these back-stage technical challenges are resolved in systems across a range of arenas, the battles will increasingly rage around their presentation in everyday life.
