
If you plug “A.I.” or “artificial intelligence” into Google Image Search, here’s what you get: electric brains firing blue with neurons; translucent robots and robot heads; code dribbling down their foreheads and noses. A monkey becomes a smartphone user, then a figure in wearable tech, with a prosthesis and a network coming out of its head. “What is A.I.?” asks one cyborg, but never receives an answer from the robot heads, brains, and bodies. A.I. means that we and our robot friends are flying through a world of ones and zeroes, against a black screen of outer space.

The future of… jQuery? Credit: Max Pixel/Public Domain.

You get an image like this: the uncanny-valley woman with Photoshop-perfect features. She’s a vision of a cyborg: part female and part circuit board, with jQuery running in the background. Google’s best guess for the image? “Artificial intelligence operator.” This cyborg illustrates the future in numerous articles, as you’ll see with a Reverse Image Search, including “Rise Of The Machines: BlackRock Turns To Robots To Pick Stocks” and “Vatican cardinal on a quest for the soul inside the machine.” She even illustrates a ZDNet article about my university’s new AI major that includes courses I teach.

Some Google Image Search options for AI

Google Image Search offers a row of words to help pare down results. After words like “robot,” “alien,” “computer science,” and “brain,” the terms “female” and “father” sit back to back, followed by “power system,” “human,” and “god.” “Father” offers images of Alan Turing and John McCarthy. “Female” serves up images of fembots.


“Sometimes in order to see the light, you have to risk the dark.”

Some of this aesthetic was popularized by the movie Minority Report, which presented gestural interfaces on the big screen. Today, intelligent environments are a reality. But why do we depict them as layered and ghosted? “These cultural clichés/touchstones are popular for another reason: It’s really, really hard to talk about digital-reality tech otherwise,” Eric Johnson writes. “These fields are full of jargon, inconsistent in practice and difficult to grok if you haven’t seen all the latest demos; pop culture is a shortcut to a common ideal, a shared vision.”

Sweeping gestural interfaces in Minority Report, with Tom Cruise, designed by John Underkoffler and Oblong

Minority Report’s science advisor was John Underkoffler, founder and CEO of Oblong, which builds immersive human-machine interface (HMI) platforms combining screens of different scales and different modes of interaction. It’s something he’s worked on for nearly 30 years, starting with his 1991 master’s thesis at the MIT Media Lab on holograms and photographic reality, in which he investigated “the development of new techniques for calculating holographic interference patterns of objects and scenes with realistic visual characteristics.” This research was part of the MIT Holographic Video project — an agenda set more than a decade earlier, in the late 1970s, when Nicholas Negroponte and researchers at MIT’s Architecture Machine Group (the predecessor to the Media Lab) developed simulation environments that were intended to be indistinguishable from reality. In 1978, Negroponte and colleagues wrote in a proposal, “We are reminded of the prompt from Bell. It is the next best thing to being there. This proposal is about being there.”


They thought we’d be there in 1978. Forty years later, we are there.

Or there’s this vision. Theodore Twombly (Joaquin Phoenix) moves through a rose-tinted world in the movie Her. By day, he works for a company that writes letters for people who can’t emotionally muster it — himself a kind of A.I. for other people’s emotional worlds. He is dating Samantha, his intelligent operating system. Theodore moves through an urban world, having deeply connected conversations with Samantha yet remaining disconnected from the people around him. He is attuned to a voice that only he can hear, but one that can be the same voice for thousands of others at the same time. Is he affected by the strange virtuality of his love, or avoiding the difficulties of relating to a real person, or both? The movie world of Her is color-corrected like gauzy Instagram photos of Coachella fans, an interface of image in which nothing is not a computer.

How do you communicate what you don’t understand?

The problem is that it’s hard to communicate clearly about A.I. — in part because communicating about it means understanding it. And most of us don’t have a clear understanding of what A.I. actually is. The term “artificial intelligence” has been around since 1955, when A.I. pioneer John McCarthy wrote that A.I. was a matter of “making machines do things that would require intelligence if done by man.” That idea hasn’t changed much today — Wikipedia contrasts artificial intelligence (or machine intelligence) with the natural intelligence of humans and animals, and the Oxford English Dictionary defines it as “the capacity of computers or other machines to exhibit or simulate intelligent behaviour; the field of study concerned with this. Abbreviated A.I.” But what does that actually mean to an everyday person? Does it mean chatbots passing a Turing Test? A robot uprising? Or is it simply interactivity where the processing happens just out of sight?

A.I. is a black box, “a device which performs intricate functions but whose internal mechanism may not readily be inspected or understood” (OED)—something we understand because of the inputs and the outputs. We can’t see what happens inside and we’re not meant to have access to it. The black box is opaque.
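To make the black box concrete, here’s a minimal sketch in Python. Everything in it is invented for illustration (the loan_model function, its features, its threshold); the point is only that, from the outside, inputs and outputs are all we get.

```python
# A hypothetical "black box": callers see inputs and outputs, nothing else.
def loan_model(income: float, debt: float) -> bool:
    """Stand-in for a proprietary model; imagine the body is off-limits."""
    # In a real system these internals would sit behind an API:
    # thousands of learned weights instead of two made-up numbers.
    score = 0.6 * income - 1.3 * debt
    return score > 20_000

# Probing from the outside: we can map inputs to outputs, and that's all.
print(loan_model(income=90_000, debt=20_000))  # True
print(loan_model(income=40_000, debt=30_000))  # False
```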

Today, there are three reasons for algorithmic opacity, as Jenna Burrell writes: the need to protect algorithms that are state or corporate secrets; the fact that A.I.-related coding is still the territory of specialists; and a “mismatch” between the mathematical ways algorithms process information and the way humans think. This last one is the hardest to sort out: how humans think is different from how the machine thinks, and how we reason is different from how the machine reasons (or doesn’t reason, depending on your definition of “reason”).
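A small sketch of that mismatch, using synthetic data invented for illustration: even with the internals fully exposed, a trained model’s parameters are precise numbers, not reasons a person can follow.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic data: 500 rows, four anonymous features, labels generated
# from a noisy linear rule. All of it is made up for this example.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X @ np.array([1.5, -2.0, 0.3, 0.0]) + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Full transparency, little explanation: the coefficients are exact,
# but they describe how the math weighs inputs, not why any single
# decision was justified in human terms.
print(model.coef_)  # four precise, mute numbers
```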

The European Union is requiring that A.I. be made explainable through a “right to explanation” as part of the GDPR (General Data Protection Regulation) that just took effect. EU citizens have a right to an explanation of the work of algorithms, and they have a right to request human intervention. The regulations support the idea that “people are owed agency and understanding when they’re faced by machine-made decisions,” Cliff Kuang wrote in the New York Times. David Weinberger argues that the focus should be optimization rather than explanation: use debate in public policy to make visible and clear to all what trade-offs are being made, rather than potentially hobbling A.I. by demanding that it explain itself.

The US government has never been known for well-designed PowerPoint decks

In the meantime, DARPA has introduced the Explainable Artificial Intelligence program, which seeks to make the models behind machine learning and A.I. more explainable. It’s an important move toward understanding what we mean when we talk about A.I., and yet I wonder what it will actually achieve for everyday people. The DARPA page for the project serves up this image showing what a user might be asking, but not in any way that’s quickly going to affect how A.I. shows up in the world. Perhaps not surprisingly, DARPA returns to the same clichés as the Google Image Searches I mentioned above.
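For a taste of what “explainable” can look like in code, here is one generic post-hoc technique, permutation importance, sketched on invented synthetic data (an illustration of the genre, not DARPA’s method): shuffle one feature at a time and see how much the model’s accuracy drops.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented data: the label depends on features 0 and 1; feature 2 is noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = (X[:, 0] - 2 * X[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X, y)
baseline = model.score(X, y)

# Shuffle one column to break its link to the label, then measure the
# accuracy drop. A big drop means the model was leaning on that feature.
for j in range(X.shape[1]):
    X_shuffled = X.copy()
    X_shuffled[:, j] = rng.permutation(X_shuffled[:, j])
    print(f"feature {j}: accuracy drop {baseline - model.score(X_shuffled, y):.3f}")
```

An answer like “the model leaned hardest on feature 1” is crude, but at least it is phrased in terms a person can interrogate.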


We need new clichés.

We encounter A.I. in the world around us. We see and read about A.I.’s applications in many different spheres. In my Pittsburgh neighborhood, where Argo and Uber ATG have their headquarters, autonomous vehicles pass me by so frequently they no longer register as out of the ordinary. Near the running trail by the river, I pass autonomous excavators and bulldozers. These are visible applications we could draw understanding from. And then we need to do the hard work of showing what the less visible applications of A.I. look like. I’m excited about the work of this group convening a workshop on visualizations for AI explainability. Many of the examples they point to in their workshop announcement are about teaching A.I. (here’s a beautiful example from a few years back). What might we design for everyday people?

To be sure, we don’t want to lose the elegance, simplicity, and even magic of intelligent interaction — we value these attributes in good design. When we’re interacting with something intelligent, we want it to open up ways for us to see and experience what technology can do. We want to experience the magic. And in developing that magic, it’s easy to fall into the Hollywood clichés I’ve mentioned. It’s hard not to. How do you make visible something that happens out of sight?

Our pop culture visions of A.I. are not helping us. In fact, they’re hurting us. They’re decades out of date. And to make matters worse, we keep using the old clichés in order to talk about emerging technologies today. They make it harder for us to understand A.I. — what it is, what it isn’t, and what impact it will have on our lives. When we don’t understand A.I., then we don’t understand the power differentials at play. We won’t learn to ask questions that could lead to better A.I. in the future—and better clichés today. Let’s lay the ghosts and cyborgs to rest and find a real way to communicate about A.I.