Dawn of the Deepfakes
We’ve been living in a post-truth world for some time now. A world where the standard for ‘truth’ has disintegrated and the line between fact and fiction is wasting away under piles of opinions, alt-facts, fake news and, quite frankly, a hell of a lot of bullshit. And one of the factors contributing to this rather dystopian world is deepfakes.
Deceptively authentic, deepfakes are the product of advances in the field of Deep Learning, a form of Artificial Intelligence. Deepfakes emerge from a specific type of deep learning in which pairs of algorithms are pitted against each other in “generative adversarial networks,” or GANs. Without getting too technical: one network (the generator) produces fakes, while the other (the discriminator) tries to spot them. The two train against each other autonomously, so the better the discriminator gets at catching fakes, the better the generator must become at producing them, and the more realistic a deepfake can be.
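To make that adversarial loop concrete, here’s a minimal sketch in PyTorch, trained on toy one-dimensional data rather than faces. Every name in it is illustrative, and real deepfake models are vastly larger and work on video frames; this only shows the generator-versus-discriminator mechanic.

```python
# Minimal GAN sketch: a generator learns to mimic "real" data
# (a Gaussian here) by trying to fool a discriminator.
import torch
import torch.nn as nn

# Generator: turns random noise into fake samples.
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores how "real" a sample looks (0 = fake, 1 = real).
discriminator = nn.Sequential(
    nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    # "Real" data the generator must learn to imitate.
    real = torch.randn(64, 1) * 0.5 + 3.0
    fake = generator(torch.randn(64, 8))

    # Train the discriminator to tell real from fake.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) \
           + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Train the generator to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()
```

Run long enough, the generator’s output becomes statistically indistinguishable from the real data, which is exactly why deepfakes keep getting harder to spot.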
But where did this come from? Like a lot of technology, it has origins in two worlds: internet communities and the halls of academic research.
Academically, deepfakes came to light in 1997. A project called Video Rewrite modified existing footage of a person speaking, changing the words coming from the actor’s mouth in an advanced form of dubbing, a technique common in movies. While dubbing normally makes no attempt to match the movement of the actor’s mouth to the voiceover, the Video Rewrite project used computer-vision techniques to track points on the speaker’s mouth so it could make it appear as though they were speaking the new words.
The term we know today, however, apparently originates from a Reddit user called ‘deepfakes’, circa 2017. A subreddit called r/deepfakes soon followed, and quickly filled with doctored images of famous people. Then, as you can imagine, the porn arrived, with celebrities’ heads superimposed onto pornographic videos. And, for some reason, there was an awful lot of Nicolas Cage showing up in movies…
Creating Deepfakes
While you would need a pretty strong grasp of computer science, and plenty of time, to create a convincing deepfake from scratch, there are now off-the-shelf apps such as Zao, a deepfake face-swapping app which has already gone viral in China. The app lets you upload a photograph and insert yourself into hundreds of movies and TV shows, apparently within seconds. One Twitter user demonstrated a deepfake of himself generated from just a single image.
So the arrival of Zao, while only available in China, is just an example of how close this is to becoming mainstream. The spread of such services will lower the barriers to entry (note: Zao is free). Soon the only constraint on producing a deepfake will be your imagination. And while superimposing yourself into a movie is fairly harmless, this can lead to some pretty serious issues (even aside from the privacy concerns around the data Zao collects).
We’re taught to believe what we see. Yet without a diligent approach to visual digital literacy, the likelihood of falsehoods being perpetuated by such content is incredibly high. It’s what artists Bill Posters and Daniel Howe, the duo who created the Kim Kardashian and Mark Zuckerberg deepfakes of 2019, call “computational propaganda”.
Fighting Fakes
So recently, Facebook announced, in conjunction with Microsoft and a handful of universities, that it’s launching an initiative to develop technology for detecting deepfakes. The Deepfake Detection Challenge (DFDC) intends “to produce technology that everyone can use to better detect when AI has been used to alter a video in order to mislead the viewer.” The DFDC will give technologists and developers a sample data set to work with, alongside grants and rewards to incentivise participation.
But instead of merely firefighting this threat of manipulation, how about a Network that makes the provenance of content clear from the start?
The SAFE Network has a unique proposition: Perpetual Data. Like the Internet Archive, the web lives forever on the Network, open to all to dig around the history of a page like a digital archaeologist. Plus the Network keeps track of the updates to every site automatically — so every site edit is stacked on top with the sequence of events recorded in the site’s history (not simply removed and forgotten). It’s a permanent digital archive, available to all humanity.
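As an illustration only (the SAFE Network’s actual data types aren’t shown here), the idea of perpetual, versioned data can be sketched in a few lines of Python: edits are appended, never overwritten, and any historical version stays retrievable.

```python
# Conceptual sketch of append-only versioned site data. Purely
# illustrative: it shows the "edits are stacked, never removed" idea,
# not the SAFE Network's real storage model.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SiteHistory:
    versions: list = field(default_factory=list)  # every edit, in order

    def publish(self, content: str) -> int:
        """Append a new version; earlier versions are never overwritten."""
        self.versions.append((datetime.now(timezone.utc), content))
        return len(self.versions) - 1  # index of the new version

    def at(self, index: int) -> str:
        """Fetch any historical version, like browsing the Internet Archive."""
        return self.versions[index][1]

site = SiteHistory()
site.publish("v1: original article")
site.publish("v2: corrected a typo")
print(site.at(0))  # the first version is still there, unchanged
```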
So how would this work in this case?
Say you publish a video of yourself speaking at a public event. You would be the verifiable publisher of that content, using an ID against which others can validate it. Now imagine some doctored deepfake versions start appearing. While that data can’t be taken off the Network, you have evidence that these copies are not from you: you simply highlight the publisher ID of each copy and compare the publication dates, which will clearly show the fakes appeared after your original.
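Here’s a hedged sketch of that publisher-ID idea using plain Ed25519 signatures via Python’s cryptography package (the SAFE Network’s actual identity scheme is not shown here): content signed with a key only the publisher holds can be verified by anyone, and a doctored copy fails the check.

```python
# Illustrative publisher-ID check with Ed25519 signatures.
# Assumption: the publisher's public key stands in for their Network ID.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

publisher_key = Ed25519PrivateKey.generate()
public_id = publisher_key.public_key()  # shared openly as the publisher's ID

video_bytes = b"...original video bytes..."
signature = publisher_key.sign(video_bytes)

def is_from_publisher(data: bytes, sig: bytes) -> bool:
    """Anyone can check a copy against the publisher's public ID."""
    try:
        public_id.verify(sig, data)
        return True
    except InvalidSignature:
        return False

print(is_from_publisher(video_bytes, signature))                # True
print(is_from_publisher(b"doctored deepfake copy", signature))  # False
```

Combined with the Network’s permanent version history, a signature like this plus the original’s earlier timestamp is all the evidence needed to show which copy came first.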
The SAFE Network can’t stop computational propaganda or ensure all content on the Network is truthful. Nor would it try to. But we can — and are — building a Network of truth that removes gatekeepers and allows anyone to join. And because the Network is autonomous, it self-manages, self-governs and, fundamentally, is free from manipulation by individuals or groups. The Network can’t be shut down, nor censored, nor controlled. So join the community and be part of the new future.