Seeing is not believing: how deepfakes are about to transform our reality

Noémie Kempf
La Nouvelle Frontiere
5 min read · Oct 23, 2019

“If you can change visual images, you can change History.”

This is how Hany Farid, Professor at UC Berkeley, sums up one of the most pressing challenges facing our society. In a world where anyone can create and broadcast their own content at scale, it's becoming increasingly difficult to tell what's true and what isn't. AI certainly isn't helping: today, pieces of synthetic multimedia content generated by neural networks, called deepfakes, are becoming strikingly realistic and nearly impossible to unmask. Could they alter the way we perceive and interpret reality?

Photo by Elijah O’Donnell on Unsplash

First things first: what’s a deepfake?

Let’s start by clarifying a common mistake when it comes to this concept: deepfake and fake news are not the same thing.

🗞️ Fake news is a form of propaganda that consists in deliberately publishing false information, disinformation campaigns or hoaxes, which are usually picked up and broadcast by traditional media and social networks. In a way, you could say that fake news is the modern, digital version of traditional political propaganda.

🤖 A deepfake is a technique of audiovisual content synthesis, generated by an artificial intelligence (AI) algorithm. It generally consists in superimposing existing images or videos on top of each other to create a unique piece of (fake) content. One of the most popular deepfakes, for example, is the fake speech by Barack Obama, who stoically says in front of the camera: “President Trump is a total and complete dipshit”.

Deepfakes can thus be considered a sub-category of fake news, specific to audiovisual content (image, video and audio), and could theoretically be used as a weapon of mass reputation destruction, aimed at any public figure.

In this deepfake example, the above video is the “source” video, performed by a professional actor, onto which a video of Vladimir Putin is later superimposed, in order to blend the physical appearance of the Russian president with the facial expressions and speech of the anonymous actor.

But the use of deepfakes doesn't stop at twisting the truth and manipulating the words of international leaders and public figures. Back in 2017, the porn industry, always ahead of its time, seized the opportunity. Fake sex tapes featuring popular actresses such as Emma Watson, Taylor Swift and Angelina Jolie went viral and shocked everyone before being identified as fakes.

From FaceApp to Canny AI: deepfakes are already ingrained in our lives.

The power of deepfakes illustrated on a toddler 😱 Copyright: KnowYourMeme.com

If you have a smartphone and friends (still with me?), you cannot have missed the bazillion selfies this summer of people looking like their older selves.

FaceApp, created by Russian developers, went viral across the world in a matter of days. The application basically allows users to check whether they'll still be somewhat dateable in 20 years or so (as for myself, it looks like I'll have the seduction power of an Egyptian mummy).

Behind the impressive visual results hides the same technology as the one used for deepfake videos: a class of machine learning algorithms that goes by the sweet name of Generative Adversarial Networks (GANs).
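To give a feel for the adversarial idea without the math, here is a minimal toy sketch of my own (not FaceApp's actual model): a tiny “generator” learns to imitate a 1-D Gaussian by fooling a logistic-regression “discriminator”. Real GANs pit two deep networks against each other over images, but the two-player training loop is the same in spirit.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # Clamp to avoid overflow in math.exp for extreme inputs.
    x = max(-60.0, min(60.0, x))
    return 1.0 / (1.0 + math.exp(-x))

# "Real" data: samples from N(4, 1), the distribution the generator must imitate.
def real_sample():
    return random.gauss(4.0, 1.0)

# Generator: G(z) = a*z + b, with noise z ~ N(0, 1). Parameters a, b are learned.
a, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w*x + c), its "probability that x is real".
w, c = 0.1, 0.0

lr = 0.02
for step in range(5000):
    # --- Discriminator update: push D(real) toward 1 and D(fake) toward 0 ---
    x_real = real_sample()
    z = random.gauss(0.0, 1.0)
    x_fake = a * z + b
    # Gradients of -log D(real) - log(1 - D(fake)) w.r.t. the pre-activations:
    g_real = sigmoid(w * x_real + c) - 1.0
    g_fake = sigmoid(w * x_fake + c)
    w -= lr * (g_real * x_real + g_fake * x_fake)
    c -= lr * (g_real + g_fake)

    # --- Generator update: push D(fake) toward 1 (non-saturating loss) ---
    z = random.gauss(0.0, 1.0)
    x_fake = a * z + b
    g = (sigmoid(w * x_fake + c) - 1.0) * w  # d(-log D(fake)) / d(x_fake)
    a -= lr * g * z
    b -= lr * g

print(f"learned generator: G(z) = {a:.2f}*z + {b:.2f}")
```

After training, the generator's output mean (the parameter `b`) drifts toward the real data's mean of 4: the generator never sees the real data directly, only the discriminator's verdicts, which is exactly what makes the resulting fakes so convincing at scale.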

Let's not dig deep into the technology itself, but notice that it didn't bother many of us to alter personal content and share it on our social networks, for anyone to see and seize, despite a controversy quickly arising around FaceApp's motivations. Indeed, for a time, the app was suspected by the FBI of hijacking its users' data.

Besides creating synthesized content for leisure and entertainment applications, the technology has also been integrated by various startups, which turned it into tools promising outstanding efficiency:

🎤 Modulate allows its users to take on the voice of anyone they choose.

🎥 Canny AI uses AI to translate advertising spots into any language, while keeping the same actor or actress (think of Brad Pitt speaking impeccable Swahili).

Synthesized content is far more integrated into our daily lives than most of us realize, for two reasons:

1/ Deepfakes are getting easier and easier to create, and can now be produced by people with little technical knowledge. Depending on where the technology goes, it could be only a matter of years until anyone can do it!

2/ Before creating alternative realities, we have been creating augmented realities for years, by applying filters and enhancements to our social lives online.

From filter to fake: our reality has already started to change.

Have you ever used a Snapchat filter? Have you ever wondered what tech lies behind the notorious rabbit ears or flower crown that have become Instagram influencers' new favorites?

At the beginning of the 2010s, we unconsciously opened the door to deepfakes by applying more and more filters to our lives, from our holiday pictures published on Unsplash to our daily Instagram Stories, without ever wondering about the long-term impact this permanent tweaking of reality might have on ourselves and others. Deepfakes are simply the next level of this continuous lie.

And this next level is neither fun, nor glorious. On the contrary, so far, it’s been mainly used for nefarious purposes:

A few months ago, the CEO of a UK energy company was tricked by a deepfake: he received a call from his boss (which was in fact a deepfake), asking him to wire €200,000 to the bank account of a “Hungarian supplier”, which of course turned out to be that of malicious hackers.

With deepfakes constantly improving in quality, we all run the risk of being tricked at some point: in theory, nothing is stopping an ill-intentioned person from creating a fake video for the simple purpose of damaging your reputation, making you lose your job, or shattering your relationships.

Of course, part of the deepfake issue calls for better, worldwide data policies and stronger involvement from policymakers, but the topic seems to go far beyond that.

No matter what happens, it's too late for us to stop the acceleration of deepfake production and the manipulation of information at scale.

However, there's still time to sort out the sources of information and content we are ready to follow and believe. But that makes us merely passive followers, not actors. Which raises the question: will the world of tomorrow be fragmented into closed, restricted “information networks” or “certified communities”? How will we make sure we can still believe what we see?

BONUS: I had the chance to discuss deepfakes during the amazing seminar on Artificial Intelligence organized by the talented NOVA team, and to give a brief recap of the conversation (2 minutes tops! ⏱️)

If you liked this article, share the love and subscribe to this publication! 💘

Learn More

The Wall Street Journal published an interesting investigation on the topic, where you learn that even the Pentagon is on it! 😱


Noémie Kempf
La Nouvelle Frontiere

Storyteller & Brand Strategist. Also meme addict 💎, travel enthusiast 🌏, and part-time nerd 🤓.