DeepFake: Trivial or Not?

Debankita Basu
Deep Learning Digest
7 min read · Jan 27, 2021
An example of deepfake technology: in a scene from Man of Steel, actress Amy Adams in the original (left) is modified to have the face of actor Nicolas Cage (right)

Summary

Deepfakes, a technique for fabricating images and videos from real ones, were long dismissed as harmless entertainment. We shoved the issue aside, considering it too trivial. But with smartphones and cameras now accessible to everyone, images and online videos have become the focal point of news and trends, and a single image or video can spark huge movements. Deepfakes are now widely misused because the tools are available to anyone, whether through phone apps or open-source code that invites hands-on experimentation. This has caused serious problems for ordinary people and famous individuals alike. Although algorithms exist to detect deepfakes, their makers keep getting ahead of the game. If not taken seriously, the technology could even provoke political conflict. This report covers the main facts: how deepfakes are made, how they have been misused so far, what methods exist to detect them, and how much our future is at stake. It gives readers the information they need to educate themselves on this topic and stay aware of it.

How are deepfakes made?

Deepfakes are fabricated audio and video clips of real people, produced with artificial neural networks, a machine learning technique. Deepfake pipelines combine six kinds of network architecture: Encoder-Decoder networks (ED), Convolutional Neural Networks (CNN), Generative Adversarial Networks (GAN), image-to-image translation (pix2pix), CycleGAN, and Recurrent Neural Networks (RNN) [1]. These networks take vast amounts of data as input, say, videos and photos of people, use them as a training set, and learn to generate new videos or photos that look like exact replicas of the originals. Once trained, the changes made to a face or body look so realistic that such programs can easily put one person's face on another's body, or make it convincingly seem that someone said something they never actually did.
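The shared-encoder idea behind the classic face-swap pipeline can be sketched in a few lines. The following is a toy illustration only, with plain linear maps in NumPy standing in for the deep convolutional networks a real system would use, and all dimensions invented for the example. It shows how one shared encoder paired with two person-specific decoders enables the swap:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a "face" is a flattened 8x8 grayscale patch.
FACE_DIM, LATENT_DIM = 64, 16

# One shared encoder, plus a separate decoder per identity (A and B).
# Plain linear maps here purely to illustrate the wiring.
W_enc = rng.normal(scale=0.1, size=(LATENT_DIM, FACE_DIM))
W_dec_a = rng.normal(scale=0.1, size=(FACE_DIM, LATENT_DIM))
W_dec_b = rng.normal(scale=0.1, size=(FACE_DIM, LATENT_DIM))

def encode(face):
    return W_enc @ face          # shared latent code (pose, expression)

def decode_as_a(latent):
    return W_dec_a @ latent      # renders the code in person A's likeness

def decode_as_b(latent):
    return W_dec_b @ latent      # renders the code in person B's likeness

# Training reconstructs A's faces through decoder A and B's through
# decoder B; the swap happens only at inference time:
face_of_b = rng.normal(size=FACE_DIM)
swapped = decode_as_a(encode(face_of_b))  # B's pose/expression, A's face
print(swapped.shape)  # (64,)
```

Training would minimize reconstruction error for each identity through its own decoder; the "swap" is simply routing one person's encoding through the other person's decoder.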

For example, an original video of Facebook founder Mark Zuckerberg was manipulated so that he appeared to give a sinister speech about Facebook being a hub of power, and about exploiting that power by controlling the world's data and secrets. Numerous other videos have been altered with this AI technology, which advances day by day; a Chinese app has already made faking easier, and before long the results may be indistinguishable from reality [2].

Misuse of deep fake now

People have long suspected that deepfake technology could be misused [3]. Deepfakes have already appeared in several different scenarios, all of them damaging: the more sophisticated the software, the harder the result is to distinguish from the original, and the more harm it can cause. Applications that produce deepfakes already exist. Although output from such an app is usually easy to spot, there are ways to make it pass as the original. Moreover, public GitHub repositories contain code for creating deepfakes, which means anyone can make them, and as research continues and the software becomes more accessible, creating deepfakes will only get easier.

Deep Fake detection

Deepfakes used to be easy to detect, but increasingly complex algorithms are making it harder and harder to tell whether a video is fake. Still, there are methods. One is to look for inconsistencies in the physical features of the person in the video: the audio in a deepfake is often poorly synchronized with the footage, so odd-looking lip movement is a good indicator that a video is a deepfake.
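The lip-sync cue described above can be turned into a crude numeric check. This is a hypothetical sketch, not an established detector: it assumes we already have a per-frame mouth-openness signal (such as one produced by a facial landmark tracker) and an audio loudness envelope at the same frame rate, both simulated here with synthetic data:

```python
import numpy as np

def sync_score(mouth_openness, audio_envelope):
    """Pearson correlation between per-frame mouth openness and the
    audio loudness envelope. Genuine footage tends to correlate;
    poorly synchronized deepfake audio often does not."""
    m = mouth_openness - mouth_openness.mean()
    a = audio_envelope - audio_envelope.mean()
    return float((m @ a) / (np.linalg.norm(m) * np.linalg.norm(a)))

rng = np.random.default_rng(1)
t = np.linspace(0, 4 * np.pi, 200)
# Synthetic loudness envelope for a stretch of speech.
speech = np.abs(np.sin(t)) + 0.05 * rng.normal(size=t.size)

# Genuine clip: mouth movement tracks the audio (plus tracker noise).
genuine = speech + 0.05 * rng.normal(size=t.size)
# Faked clip: mouth movement is out of phase with the audio.
faked = np.abs(np.sin(t + 1.5)) + 0.05 * rng.normal(size=t.size)

print(sync_score(genuine, speech) > sync_score(faked, speech))  # True
```

A real system would extract these signals from the video and audio tracks and learn a decision threshold rather than using a raw correlation, but the underlying cue is the same.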

There are two major approaches to detecting deepfake videos: examining physical aspects of the footage, as discussed above, and looking for signal-level artifacts that appear during the synthesis process [4]. Detection algorithms work well on benchmark tests, but they struggle to differentiate real-world videos. Social media is especially difficult, because platforms strip a video's metadata to accommodate huge volumes of uploads, and that metadata can sometimes help establish whether a video has been altered.
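As a toy illustration of a signal-level artifact check, the sketch below measures how much of an image's spectral energy sits at high frequencies, where periodic "checkerboard" artifacts from GAN upsampling tend to appear. The images, the low-frequency radius, and the artifact model are all invented for the example; this is a simplified caricature of spectrum-based detectors, not a production method:

```python
import numpy as np

def high_freq_ratio(img):
    """Fraction of spectral energy outside a low-frequency disc
    at the center of the shifted 2D power spectrum."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    cy, cx = h // 2, w // 2
    r = min(h, w) // 8                      # "low frequency" radius
    y, x = np.ogrid[:h, :w]
    low = (y - cy) ** 2 + (x - cx) ** 2 <= r ** 2
    return float(spec[~low].sum() / spec.sum())

size = 64
# A smooth gradient stands in for a natural image patch.
clean = np.add.outer(np.linspace(0, 1, size), np.linspace(0, 1, size))
# The same patch with a fine checkerboard overlaid, mimicking a
# GAN upsampling artifact.
cb = np.indices((size, size)).sum(axis=0) % 2
tampered = clean + 0.3 * cb

print(high_freq_ratio(tampered) > high_freq_ratio(clean))  # True
```

The checkerboard concentrates energy near the Nyquist frequency, so the tampered patch shows a visibly larger high-frequency ratio; real detectors learn these spectral signatures from data instead of using a fixed threshold.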

Future implications of deep fakes

The old saying "seeing is believing" has become questionable. Whether it is fake news, fake pictures, or fake videos, people tend to believe what they see, and that creates chaos. Deepfakes were originally meant for funny videos and a good laugh, but with advances in AI they can be misused to cause political disruption. Falsified videos can be made to turn the public against their own country and politicians. Members of Congress are concerned that this nascent technology could let bad actors spread propaganda against the United States or other democracies by tampering with electoral procedures [5]. Deepfake technology is also being sponsored by the military as part of a wider investment in cyberwar tactics; where guns and weapons once induced war among countries, new technologies now bring their own terror. Samsung, one of the leading and most influential digital media enterprises, has been researching deepfakes, and a report it released in May 2019 shows how fast the technology is advancing: Samsung could produce "talking head" videos of a person from a single picture of that person's face, and with 32 pictures of the person, the videos approach full realism [6]. Eventually, deepfakes will blur the line between true and false.

https://www.youtube.com/watch?v=EfREntgxmDs&ab_channel=CBSThisMorning

Conclusion

Deepfakes are not trivial, and everyone should be aware of them. Even if political deepfakes do not outright mislead people, they sow ambiguity in people's minds; with uncertainty come trust issues, and people may lose faith in social media and in the news reported there. The solutions are to use technology to detect fake videos, to stay aware of what is happening in the country and around us, and to enhance media literacy. Educating the masses about how this technology can go wrong would help people who are less fluent in technology and computing. But questions remain: even once a video is detected as fake, what should the next step be? On the less ugly side, the feigned nature of deepfakes may at least breed a wider, healthy skepticism about the events we see and read about on social media. For what it's worth, people should realize the stakes before it is too late.

Works cited

[1] Mirsky, Yisroel, and Wenke Lee. "The Creation and Detection of Deepfakes: A Survey." arXiv preprint arXiv:2004.11138 (2020).

[2] “How easy is it to Deepfake? How do they work and just how harmful can they be? Deepfakes — or Deep Faking — are being a real problem in the fake news era, but what exactly are these troublesome videos?” Daily Mirror [London, England], 28 Sept. 2019, p. NA. Gale General OneFile, https://link-gale-com.silk.library.umass.edu/apps/doc/A601080665/ITOF?u=mlin_w_umassamh&sid=ITOF&xid=bc22694d. Accessed 14 June 2020.

[3] Houde, Stephanie, et al. “Business (mis) Use Cases of Generative AI.” arXiv preprint arXiv:2003.07679 (2020).

[4] Lyu, Siwei. "DeepFake Detection: Current Challenges and Next Steps." arXiv preprint arXiv:2003.09234 (2020).

[5] D. Reichmann, Lawmakers Want to Know How ‘Deepfake’ Videos of Real People Could Threaten National Security, Washington, DC, USA, Sep. 2018.

[6] M. Wilson, Oh No Samsung’s AI Lab Can Create a Video of You From a Single Still Photo, New York, NY, USA, May 2019.



New graduate with a Bachelor's degree in Computer Science, passionate about Data Science and writing.