Deepfake News with Russell Brandom

Episode #04: Shownotes

The Deepfake Podcast

--

| Listen on Apple Podcasts | Listen on Spotify | Listen on Google Podcasts |

In this episode of the Deepfake, we talk to journalist Russell Brandom about deepfakes and online influence campaigns, truth in media, what synthetic media’s product cycle looks like, and whether AI can “understand” as framed by analytic philosophy.

Russell Brandom: “When you falsify media, you’re crossing this line into something that is verifiably untrue. Whereas, a lot of these [influence] campaigns aren’t about making claims that are true or false, because then you can be proven wrong.”

About Russell

Russell Brandom is a policy editor for The Verge. He has written extensively about the internet, law, culture and media, and in 2019 wrote a widely circulated article titled “Deepfake Propaganda is Not a Real Problem” (The Verge) that critically reframed the question of deepfakes with respect to political influence campaigns and election interference. He is also the author of the underrated tweet: Who decided to call it “deepfake regulation” instead of “GAN control”

We are thankful for the opportunity to speak to Russell about how to understand the impact and propagation of manipulated media on the internet.

[0:00-4:09] Introductions: Tech & Politics

[4:10-9:50] Deepfakes: a Problem?

“We have a problem with truth in our society… There’s this real crisis of trust. You don’t know who to listen to, you don’t know who to trust, because institutions are failing us all.”

Deepfakes can worsen this crisis of trust, though we haven’t seen deepfakes used in political influence campaigns because they can be detected as manipulated media:

“When you falsify media, you’re crossing this line into something that is verifiably untrue. Whereas, a lot of these [influence] campaigns aren’t about making claims that are true or false, because then you can be proven wrong.”

Verifiably false narratives can be challenged and disproven:

“You could be found out as having faked it.”

[9:51-13:22] Does Truth Need a Context?

“It’s kinda hard to know what’s true.”

How do we trust authority figures when we cannot ascertain whether what they say is true? False narratives within the government’s justification for the Iraq War eroded trust in public institutions.

Context matters: we engage with media that reinforces our views of reality, but that media can also heighten contradictions (like cognitive dissonance) that were already present.

[13:23-20:33] Synthetic Media’s Product Cycle

Technology, like the AI systems that generate synthetic media, improves over time. Product cycles, however, sometimes require more fundamental changes to the underlying technology.

Deepfakes are sometimes used to generate profile images for sockpuppet accounts on social networks, but under-trained algorithms can produce human faces with noticeably erroneous features, like malformed ears or “an odd number of teeth.”

AI is increasingly capable of performing specialized human work.

[20:34-26:35] Neural Networks Can “Understand”

Thanks for tuning in!

You can find us on Twitter @AudioDeepfake, and you can listen to more episodes on our website: www.deepfakepodcast.ml

--
