Seeing Is No Longer Believing

Madeline Panther
Published in The Public Ear · 4 min read · Apr 1, 2019

The Danger of Deep Fake Identities

“We’re entering into an era in which our enemies can make it look like anyone is saying anything at any point in time.” These words were spoken by Barack Obama; or so it seems. A video posted by the digital media company BuzzFeed in April 2018 showed the former president giving an address that was in fact a so-called “Deep Fake”, with Obama’s words and mannerisms impersonated by comedian Jordan Peele.

The video is uncomfortably realistic and is one of the first Deep Fake videos to involve a politician. BuzzFeed produced it to educate the public about the advancing dangers of artificial intelligence, showing how Deep Fakes threaten our inclination to trust the evidence we see with our own eyes.

You may think this form of video manipulation seems new and advanced, but have you ever used the Face Swap filter on Snapchat? Face Swap was released back in 2016, with the ability to switch two faces over in seconds. A few years later, the power to Face Swap isn’t all fun and games.

Since the rise of social media, the relationship between a content distributor and their imagined audience has been one of invisible viewers. Say a video posted to YouTube has 10,000 views: who are those viewers? No idea. Audiences have always had the ability to remain anonymous, but Deep Fake identities could reverse these roles entirely.

The use of Deep Fake identities began within the porn industry. Celebrities such as Gal Gadot, Selena Gomez and Taylor Swift have all fallen victim to this form of video manipulation. As seen in the images below, these videos can be highly convincing (that is, if you believed Taylor Swift had a sleeve). Not only does this jeopardise the celebrities’ images, but it also completely discredits the original actors. Many adult entertainers are having their content stolen through Deep Fake identities, with no laws to protect them.

With this technology becoming more readily available to the average internet user, the use of Deep Fake identities has the potential to turn sinister. What if a “deep fake” President Trump appeared in a political address declaring a missile strike on Korea? That news could spread to millions within minutes (thanks, social media) and cause absolute chaos. Anything is possible if this technology falls into the wrong hands.

Detecting a deep fake video is becoming increasingly difficult, especially with the naked eye. In June 2018, a paper titled “In Ictu Oculi: Exposing AI Created Fake Videos by Detecting Eye Blinking” focused on spotting deep fakes by looking closely at physical human signals such as blinking, breathing and eye movement. A Deep Fake algorithm is trained on facial images collected from the internet. At the time the paper was written, however, these training sets were built mainly from still photos rather than videos, and obviously, photos don’t blink; the generated faces therefore blinked far less often than real ones.

Now that I’ve infused some fear and unease into your minds about the potential jeopardies of this technology, I’d like to conclude this article with some uplifting news regarding what is being done to stop these Deep Fakes.

For a Deep Fake video to appear genuine, a computer must be powerful enough to run an algorithm called a Generative Adversarial Network, which only high-performing machines can handle. Creating a single deep fake video also takes around 10 hours at a resolution of 256 × 256 pixels, and 42 hours at 512 × 512 pixels (HFS Research, 2018). During that 10–42-hour period, the algorithm must compile and ingest enough data (videos, photos and sounds) to output a realistic Deep Fake.

For example, it is much easier to create a credible video of Barack Obama, given the amount of video content of him readily available on the internet, than of someone who never posts videos online.

An article published by Nieman Lab discussed the risks of these Deep Fakes, along with ways to spot one:

1. Examine the source: is it credible? Who posted it?

2. Search for older versions of the footage.

3. Slow the video down: by going frame by frame, a viewer can identify glitches.

It is possible to spot a Deep Fake, but being aware that they exist is the first step in revealing the truth. It is important that Deep Fake identities become public knowledge before the technology advances to the point where fabricated, threatening actions are presented as apparent fact.
