Deepfakes and the Future of Digital Distrust

Deepfakes: seemingly real videos that depict someone doing or saying something they’ve never done or said. The mere existence of this video doctoring technique poses a threat to the credibility of any and all video.

The term can be traced back to a Reddit user with the username “deepfakes”, who started posting digitally altered pornography in 2017. The “deep” is commonly thought to come from “deep learning,” the artificial intelligence technique used to create these fake videos.

The concept received mainstream recognition in April 2018, when BuzzFeed posted a PSA that depicted Barack Obama delivering an out-of-character address. The video was fake, and Obama’s computer-generated speech became a symbolic warning against taking videos at face value.

Jordan Peele voices a fake Obama speech in BuzzFeed’s PSA

The technology is nothing new. The movie industry has gone so far as to digitally recreate deceased actors in recent films like Rogue One: A Star Wars Story. But this is the first time that such advanced video manipulation software has been available for anyone to use. The widespread availability of these tools raises a host of ethical and legal questions, especially when actors have their likeness reproduced without consent.

Fortunately, the tech isn’t perfect yet; it’s not as easy as it might seem to create a convincing fake. There are also still a few ways to spot a face-swapped video, like strange, inhuman blinking or small visual artifacts where the algorithm has failed.
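
To make the blinking heuristic concrete, here is a minimal sketch of how one might count blinks in a clip using the common eye-aspect-ratio trick (dlib facial landmarks plus OpenCV). The model file path, video file name, and thresholds are assumptions for illustration only, not part of any real detection product.

```python
# Rough illustrative sketch (not a production detector) of one giveaway the
# article mentions: face-swapped clips that blink unnaturally rarely.
# Assumes dlib's 68-point landmark model is saved locally as
# "shape_predictor_68_face_landmarks.dat"; thresholds are rough guesses.

import cv2
import dlib
from scipy.spatial import distance


def eye_aspect_ratio(eye):
    """Ratio of eye height to width; it drops sharply when the eye closes."""
    a = distance.euclidean(eye[1], eye[5])
    b = distance.euclidean(eye[2], eye[4])
    c = distance.euclidean(eye[0], eye[3])
    return (a + b) / (2.0 * c)


def count_blinks(video_path, ear_threshold=0.2):
    """Count blinks by watching the eye aspect ratio dip below a threshold."""
    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    cap = cv2.VideoCapture(video_path)
    blinks, eye_closed, frames = 0, False, 0

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for face in detector(gray):
            shape = predictor(gray, face)
            pts = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
            # Landmarks 36-41 and 42-47 outline the left and right eyes.
            ear = (eye_aspect_ratio(pts[36:42]) + eye_aspect_ratio(pts[42:48])) / 2.0
            if ear < ear_threshold and not eye_closed:
                blinks += 1
                eye_closed = True
            elif ear >= ear_threshold:
                eye_closed = False

    cap.release()
    return blinks, frames


if __name__ == "__main__":
    blinks, frames = count_blinks("suspect_clip.mp4")  # hypothetical file
    minutes = frames / 30 / 60  # assumed 30 fps
    # People typically blink roughly 15-20 times per minute; a talking head
    # that barely blinks at all is at least worth a closer look.
    print(f"{blinks} blinks over ~{minutes:.1f} minutes of video")
```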

These problems will inevitably be fixed. What will it mean for the populace when the technology improves? What happens when anyone with a few minutes to spare can create an incriminating, career-ruining video of an opponent?

The future of deepfakes is a topic I explored in a presentation for my Foundations of Digital Media class, an introductory course in Ryerson’s Master of Digital Media program. Our assignment was to present a technology ten years into the future.

I imagined that a user-friendly mobile deepfake app became available and led to the “Deepfake crisis of 2022”: once everyone had the power to create a convincing doctored video, no footage could be trusted. I presumed this would usher in an era of complete digital distrust, in which nobody would believe any video at all. As an imagined response, I invented “Alius,” a fictional, futuristic alibi service.

The fictional service involved a body camera constantly filming its user and storing that footage in a secure database. That way, users could refer back to the footage to refute any incriminating deepfake.

The tagline was “take back control of your data by letting go”. It was meant to be a sort of dystopian parody, but the more time I spent preparing for the presentation, the less abstract it began to feel.

Since the presentation, I’ve come across a service called Amber Authenticate, which uses a blockchain to regularly confirm that a video’s content has not been altered. Amber isn’t the only company in this emerging space: Factom, another blockchain company, has been working with the US Department of Homeland Security to build a system capable of confirming a video’s authenticity.
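
The underlying mechanism is easier to picture with a toy example. The sketch below is my own simplification, not Amber’s or Factom’s actual protocol: it fingerprints a video by chaining hashes of its chunks, and anchoring that final digest on a public ledger at capture time would let anyone re-hash the file later and check that nothing was altered. The file name and chunk size are arbitrary.

```python
# Simplified sketch of the general idea behind blockchain video
# authentication: hash the video in fixed-size chunks, chain the hashes,
# and record the final digest somewhere tamper-evident (e.g. a blockchain).
# Illustration of the concept only; real services will differ.

import hashlib


def fingerprint_video(path, chunk_size=1024 * 1024):
    """Return a chained SHA-256 digest over the file's chunks."""
    running = hashlib.sha256(b"genesis").hexdigest()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            # Each link commits to the previous digest plus the new chunk,
            # so changing any byte anywhere changes the final fingerprint.
            running = hashlib.sha256(running.encode() + chunk).hexdigest()
    return running


if __name__ == "__main__":
    digest = fingerprint_video("bodycam_clip.mp4")  # hypothetical file
    # In practice this digest would be written to a public ledger at capture
    # time; later, anyone can re-run the function and compare the result.
    print("fingerprint:", digest)
```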

AI-driven video manipulation software is here to stay, and the technology will only become more convincing and accessible in the future. While there are already companies building services to detect video tampering, it will take time for their solutions to be widely adopted.

Until then, how skeptical should we be of incriminating videos? What happens when people start dismissing genuinely incriminating footage as fake? As truth-defying technologies become more widespread and easier to use, we will need new mindsets about how much to trust what we see.
