Neil Turkewitz
6 min read · Jul 29, 2021
Photo ©2021 Neil Turkewitz

Anthony Bourdain, Voice Cloning & The Precarious State of Humanity

by Neil Turkewitz

Every once in a while, the clouds of vagueness and ambiguity dissipate and there’s a moment of clarity in which truth, or justice, or morality, emerges in a form visible to all. One such moment occurred when Kashmir Hill published her brilliant NY Times piece entitled “The Secretive Company That Might End Privacy As We Know It,” which exposed the facial recognition company Clearview, both for the fact that it scraped Facebook and other sites for images to populate its database, and for the likelihood that the resulting information would reinforce existing injustice, to say nothing of ending privacy. As she noted in a subsequent Twitter thread, “After I reported the existence of Clearview AI in January 2020, the company’s world exploded: lawsuits, international investigations, letters from senators.”

More recently, another moment of clarity has emerged, thanks to Helen Rosner’s article in the New Yorker entitled “The Ethics of a Deepfake Anthony Bourdain Voice,” in which she reports on the use of voice cloning in a documentary about Bourdain, and director Morgan Neville’s offhand comment to her that “we can have a documentary-ethics panel about it later.” Interestingly, even though Neville’s use of AI to produce a voice clone may have been authorized (although Bourdain’s widow denies having given any assent), and only resulted in Bourdain appearing to speak words which were actually his (but written, not spoken), it generated a tsunami of negative responses — at least partially triggered by Neville’s callous, if unintended, recitation of a Silicon Valley fealty to the idea of moving fast and breaking things. Of choosing to ask for forgiveness rather than permission. Of believing that the perceived needs of “innovation” should bend our normative expectations.

Here is a sampling of some of the reactions:

Artificial intelligence is one of the hottest (and most divisive) topics in entertainment right now. The filmmakers behind the upcoming Anthony Bourdain documentary “Roadrunner” want to make the case for the technology’s ability to bring our favorite celebrities back to life. However, not everybody’s sold on that idea, and some Twitter users are describing the doc’s use of AI as “unsettling” and “grotesque.”

***

“We can have a documentary ethics panel about it later.” Well, that’s today’s most sinister sentence.

***

Deepfaking Bourdain to make him say some lines in your documentary is super fucked up, bro. “We can have a documentary ethics panel about it later” — Are you joking??

***

In response to mounting criticism, Neville told Variety, “There were a few sentences that Tony wrote that he never spoke aloud. With the blessing of his estate and literary agent we used AI technology. It was a modern storytelling technique that I used in a few places where I thought it was important to make Tony’s words come alive.” But some who were close with Bourdain disagree. The late chef’s ex-wife, Ottavia, retweeted the Variety article, adding, “I certainly was NOT the one who said Tony would be cool with that.”

***

In a brilliant interview with Sam Gregory, Project Director of WITNESS, which I recommend everyone read in full, Justin Hendrix quotes Gregory as follows:

“I haven’t seen the film, but it’s described quite well, I think, in the New Yorker piece. And it basically notes that he is voicing over some emails that Anthony Bourdain sent to a friend. And, he has what sounds like the voice of Anthony Bourdain saying some of the lines in the email. And as it turns out in this article, the director reveals to the writer that he used one of the proliferating number of ways you can generate audio that sounds like someone. Sort of deep fake audio to recreate those words.”

“And I think that’s shocked people, right? What annoys people is he then goes on to say, ‘Oh, maybe someday we should have a documentary ethics panel on this.’ This is one of those discussions we need to be having about when it is okay to synthesize someone’s face or audio or body and use it. And I think it’s bringing up all these questions for folks around consent and disclosure and appropriateness.”

Finally, the folks over at Forbes had a chilling observation:

“With the increasing availability, and persuasiveness, of deepfake technology, expect to see more digital necromancy on the horizon. Despite Twitter backlashes and public discomfort, digitally resurrecting the dead for entertainment and profit seems to be continuing, unchecked.”

I fear that the Forbes piece may be right, but would offer this: allowing the “unchecked” development of AI-created cloning and fakery is not a foregone conclusion. It is up to us to demand that we engage in the ethical (and legal) discussion before questionable uses are made, not afterwards. It is up to us to build the normative and legal guardrails to ensure that technology develops in a way that advances the human condition rather than dooming it.

Moments of clarity must not be wasted. So let’s not waste this one. This particular moment has clarified for so many the importance of consent in determining the use of technologies. Sadly, this is not the trajectory of our current path. In particular, governments around the globe are being lobbied to expand exceptions to copyright to allow unpermissioned text and data mining, and are being told that economic competition and technological development demand relaxation of the rules of permission-based commerce. That innovation requires free access to the constitutive parts of our humanity. That competition with China doesn’t afford us the luxury of standing by our principles. I humbly suggest that our humanity is too dear to us to permit this taking, and that we must resist abandoning our values to construct a new world. Indeed, what’s the point if that new world doesn’t reflect how we see ourselves, and who we want to be?

The precise place to draw the line about when or where to permit unauthorized use of images, voice, words and other manifestations of our individuality may be somewhat unclear, but we must always be clear that we are engaged in such a delineation. Proponents of expanded copyright exceptions put forward a vision that such exceptions are the default requirement of the digital age. I urge everyone to demand greater respect for human agency, and to reject the notion that there are, in fact, any defaults. Life may, in some respects, contain elements of chance. To be like a box of chocolates. But life is not a box of chocolates in which our provisioning is purely random and out of our control.

The momentary clarity of the Bourdain incident reminds us that the very act of ingesting personal data for the purposes of informing AI is a moment that challenges fundamental aspects of our humanity and our normative expectations of conduct.

The issue is not only how AI is used, but the extent to which we are prepared to allow technology to capture who we are without our consent. To force us to be unwilling servants in the creation of a world outside of our control. I say we don’t let them. Some things are that simple.

Note: For some insights on the copyright implications of copying materials to train AI, I refer readers to a recent decision by Judge Leonard Stark of the US District Court for the District of Delaware. In Thomson Reuters Enterprise Centre GmbH and West Publishing Corp. v. ROSS Intelligence Inc., the court found that Reuters’ assertion that the defendant had engaged in “mass, illicit downloading of copyrighted Westlaw material through LegalEase, which material was then used to develop the ROSS platform” stated a cause of action for copyright infringement. Interestingly, there were no findings that the defendant’s output was substantially similar to plaintiff’s works; in other words, the claim concerned copying simply to train the system. This interim decision rejecting a motion to dismiss is encouraging, and the case bears watching.