Don’t Call AI “Magic”

m.c. elish
Data & Society: Points
5 min read · Jan 17, 2018

[Portions of this post are drawn from a longer article by M.C. Elish and danah boyd, “Situating Methods in the Magic of Big Data and AI,” available at SSRN (open pre-print) or Communication Monographs. — Ed.]

Image via Flickr

AI was the star of the show at last week’s CES convention in Las Vegas. From personal assistants like Alexa, to smart appliances, to driverless cars, and even to self-driving suitcases, AI-driven technologies captured the spotlight. According to the New York Times’s coverage of CES, AI is “the magic that is making hardware evolve.” In the future, we were promised, we’ll have shopping experiences that feel like magic and meetings “scheduled like magic” by virtual assistants.

Don’t believe in magic.

Sparkling, spotless, and new, AI technologies — like some of those that were presented last week — promise a future that is scientifically perfectible and controllable. But perceptions and expectations of AI and systems driven by machine learning have become unmoored from reality. Those who have lived through previous hype cycles cannot help but echo the mantra that “winter is coming.”

The uncritical embrace of AI technologies has troubling implications for established forms of accountability, and for the protection of our most vulnerable populations. AI is increasingly being positioned as the answer to every question, in part because AI seems to promise not only efficiency and insight, but also neutrality and fairness — ideals that are often viewed as impossible to achieve through individual human or organizational decision-making processes. The fantasies and promises of AI often obscure the limitations of the field and the complicated trade-offs of technical work done under the rubric of “AI.”

Image via Flickr

Distinguishing AI and Magic

“Working like magic” is a familiar refrain in the marketing materials of new technologies, especially those involving AI. From one perspective, this makes sense: working like magic implies impressive, seamless functionality, in which the means by which the effect is achieved are hidden from view or even irrelevant. Yet, from another perspective, implying that something works like magic fixes attention on the end result while denying an accounting of the means by which that result was reached.

As an anthropologist, I take magic very seriously, and I have felt uneasy about the familiar equation between technology and magic for some time. Anthropologist Alfred Gell proposed that a defining feature of magic, as an orientating framework of actions and consequences in the world, is that it is “‘costless’ in terms of the kind of drudgery, hazards, and investments that actual technical activity inevitably requires. Production ‘by magic’ is production minus the disadvantageous side-effects, such as struggle, effort, etc.” (Gell, 1988, p. 9).

To evoke magic is not only to provide an alternative regime of causal relations, but also to minimize attention to the methods and resources required to carry out a particular effect. Magic denies an accounting of what went into making something work, or that it required work at all.

Screen still from Justin Timberlake’s Filthy (Official Video) on YouTube

Many are beginning to worry about the methods behind the meteoric rise of machine learning, the technique that creates the “intelligence” of most contemporary AI technologies. (It’s hard to deny that machine learning is approaching cult status when the latest Justin Timberlake music video takes place at a deep learning conference.) A much-talked-about presentation by Ali Rahimi last month at NIPS, one of the largest machine learning conferences, provocatively suggested that “machine learning has become alchemy.” While the ancient art of alchemy led to some important discoveries, the theories it was built upon were fundamentally incorrect.

The parallel to deep learning is that deep learning models are currently poorly understood and under-theorized; they seem to work, and that is enough. Rahimi concluded: “I would like to live in a society whose systems are built on top of verifiable, rigorous, thorough knowledge, and not on alchemy.” There were dissenters, of course, most prominent among them Yann LeCun, Facebook’s Director of AI Research. But the theme of Rahimi’s talk seems to have rung true. Beyond the fraught relationship between cause and effect, a recently published paper by Gary Marcus raised concerns about the fundamental limits of deep learning and whether it can achieve all that its proponents suggest is possible.

Alchemic Symbols via Wikimedia

For these technological advancements to endure, it is imperative to ground both the practice and the rhetoric of AI. Doing so requires developing methodological frameworks that reflexively account for the strengths and weaknesses of both the technical practices and the claims that can be produced through machine learning-based systems.

In the long run, the biggest challenge for a hype-driven ecosystem, in which countless public and private sector actors feel the need to implement AI systems, is the plethora of poorly constructed models produced through methodologically unsound practices. As long as those models are over-confidently produced and viewed as infallible, there is limited space for interrogating how cultural logics get baked into the very practice of machine learning.

For now, moving past the current hype-saturated environment will require work on many fronts. One way is to interrogate and push back on the claims made about AI technologies. This fall, several papers came out that provided critical histories and contextualizations of the recent AI hype. Within the journalism community, there has been an emerging conversation about how to critically cover technology, specifically health technology, in the wake of what appears to be the sensational over-hyping of IBM Watson Health.

Some important questions to always ask include: What and who informed the development of this technology? What are the methods used? What are the strengths of this method? And what are the weaknesses? What are the claims being made about the relations between cause and effect? And: Are these relations sufficient and fair in all the contexts in which a technology is being used?

Asking these questions is in everyone’s interest. When proponents of AI mobilize imaginaries of AI as working like magic and gloss over the limitations of technological systems, they run the risk of undermining the power and potential of the very systems they are building.

M.C. Elish is the lead researcher for the Data & Society Intelligence & Autonomy initiative, which develops grounded, qualitative research to inform the design, evaluation, and regulation of AI-driven systems.

Later this year, the team will release work from the Mapping Human Infrastructures of AI project: a series of ethnographically informed studies of intelligent systems in which human labor plays an integral part, exploring how and why the constitutive human elements of artificial intelligence are often obscured or rendered invisible. — Ed.
