The Deep Fake Fake-Out

Taylor Barkley · TheUpload · Nov 16, 2020

Photo by Arthur Reeder on Unsplash

By: Taylor Barkley, Program Officer for Tech and Innovation at Stand Together and the Charles Koch Institute.

All things considered, Election Day in the United States went remarkably well, especially for a year marked by chaos. Despite the many predictions and commentaries about how deep fake technology would alter the course of the election, nothing of the sort occurred.

Technology policy commentary typically involves a heavy dose of prognostication about the future ramifications of technologies. This is where the utopian-versus-dystopian and optimist-versus-pessimist dichotomies come into play. Often the predictions are too vague, or their projected timing too distant, to be measurable.

Deep fake commentary presented a different case.

Article after article claimed that deep fakes would “wreak havoc on society” and destroy democracy, and pegged those negative effects to a specific date: November 3, 2020, the day of the election.

That date has since come and gone without any significant video proven to be a deep fake coming to light, let alone one that altered the election. The nightmare scenario, in which a fabricated video released the night before the election swings its outcome in the final hours, never materialized.

These hyperbolic panics were perhaps helpful in that they raised awareness, probably inoculating society against some of the negative effects. But there is a better way. One can raise awareness about a technology’s potential harms without declaring that society will fall apart or that truth will “die.” Hyping the negative can limit the development and adoption of positive use cases, and that cost must be weighed too.

Any concerns should instead be presented alongside the practical challenges of deceiving people at scale. Jeffrey Westling, Resident Technology and Innovation Fellow at the R Street Institute, and I have separately enumerated these challenges while acknowledging that some risk remains. Westling reviews the long history of fabricated media and how society has adapted after each development to mitigate concerns about deception. I argue that for deep fakes to be truly effective, they must be limited and targeted, a highly unlikely outcome in a world of ubiquitous, cheap digital connection. Finally, we both push back on the assumption that culture is static. We adapt.

There are also several technical approaches to flagging manipulated media. Facebook, for example, led an industry “Deepfake Detection Challenge” from 2019 to 2020 that attracted “more than 2,000 participants” who tested their detection models, an effort to jumpstart research into systems that can automatically detect deep fakes. Microsoft’s Video Authenticator, released in 2020, analyzes a video or still image and returns a confidence score indicating how likely the media is to have been artificially manipulated.
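Neither company has published a drop-in API for these tools, but the basic shape of frame-level detection is simple. The sketch below is a minimal illustration of that idea, not Microsoft’s or Facebook’s actual system: it assumes a hypothetical pretrained binary classifier (the “deepfake_detector.pt” weights file is invented for this example) and averages per-frame “manipulated” probabilities into a single confidence score.

```python
# Minimal sketch of frame-level deep fake scoring, in the spirit of tools
# like Video Authenticator. The weights file ("deepfake_detector.pt") is
# hypothetical; any binary real-vs-manipulated classifier would do.
import cv2          # pip install opencv-python
import torch
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.ToPILImage(),          # BGR ndarray -> PIL (after RGB conversion)
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def score_video(path: str, model: torch.nn.Module, stride: int = 30) -> float:
    """Return the mean per-frame probability that the video is manipulated."""
    cap = cv2.VideoCapture(path)
    scores, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % stride == 0:       # sample one frame every `stride` frames
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            batch = preprocess(rgb).unsqueeze(0)   # shape (1, 3, 224, 224)
            with torch.no_grad():
                logit = model(batch)               # single "manipulated" logit
            scores.append(torch.sigmoid(logit).item())
        index += 1
    cap.release()
    return sum(scores) / len(scores) if scores else 0.0

# Usage (hypothetical weights file):
# model = torch.load("deepfake_detector.pt"); model.eval()
# print(f"Manipulation confidence: {score_video('clip.mp4', model):.2%}")
```

Real detectors are far more sophisticated, tracking faces, temporal artifacts, and blending boundaries, but the output is the same kind of confidence score these industry tools report.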

Needless fear and panic are not helpful. They consume public attention and distract from more pressing concerns. Indeed, anxiety about new and emerging technologies is nearly as old as humanity; Pessimists Archive documents the history of negative reactions to technologies we now take for granted. But we live in a world of trade-offs: time and energy spent worrying about worst-case scenarios is time not spent on positive use cases and on solutions to real problems.

We should acknowledge, gratefully, that society, democracy, and the information ecosystem are more robust than they are often given credit for. Culture adapts, and solutions, both technical and non-technical, are out there. We also benefit from this technology, whether through seeing and hearing recreations of historical events that could never have been recorded or through high-quality video calls over low-bandwidth connections. It is possible to be both realistic and optimistic about the impact of future technologies, deep fakes included.
