Deepfakes: Who Can We Trust?
Artificial Intelligence (AI) has massive potential to transform the world in which we live. This technology can add significant value to businesses and society, with applications across many sectors such as manufacturing, healthcare, financial services, cybersecurity, and national security.
AI is exciting. However, it also exposes us to new risks, and Deepfakes are a prime example. Deepfakes let malicious actors create fake images or videos that are difficult to distinguish from genuine ones. They can create havoc in society, potentially affecting all forms of media, whether video or still images. Read on as we explain the risks arising from Deepfakes.
Deepfakes: What Are They And How Do They Work?
The name “Deepfakes” comes from combining “deep learning” and “fake”. Deep learning is an advanced branch of machine learning (ML), itself a subfield of AI. Deep learning uses algorithms called Artificial Neural Networks (ANNs); specifically, Deepfakes use a machine learning framework known as the generative adversarial network, developed by Ian Goodfellow. ANNs are loosely modeled on the functions of the brain, and deep learning extends this idea. With deep learning, you can “train” large neural networks, given plenty of data and significant computing power.
Deepfake is an application of deep learning that creates fake images, audio, and video. Using a Deepfake, a malicious actor can superimpose one person’s face onto an image or video of another person. While morphing photographs, audio, and video isn’t new, Deepfakes make fake content much harder to spot.
How Deepfakes Work:
- Deepfakes use an autoencoder, which is a deep neural network.
- The autoencoder “learns” to process inputs concerning one individual.
- It compresses each input into a compact latent encoding.
- Training teaches the autoencoder to recreate the input from that encoding, rather than simply passing it through.
- The neural network uses a decoder to recreate the content.
- A Deepfake program uses the same encoder to process inputs concerning another person.
- It uses another decoder to recreate the input concerning the second person.
- The Deepfake program then uses the second decoder to create content combining two sets of inputs.
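The steps above can be sketched in miniature. This is a hedged toy illustration of the shared-encoder / two-decoder idea, not a real Deepfake implementation: the linear “encoder”, the closed-form decoder fit, and all array sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "faces": flattened 8x8 grayscale images for persons A and B.
faces_a = rng.random((100, 64))
faces_b = rng.random((100, 64))

latent_dim = 16

# Shared encoder: a single linear projection (a stand-in for the deep
# convolutional encoder a real Deepfake model would learn).
encoder = rng.standard_normal((64, latent_dim)) * 0.1

# One decoder per person, each fitted to reconstruct that person's faces
# from the shared latent encoding (closed-form least squares here, purely
# to keep the sketch short; a real system would train with SGD).
def fit_decoder(faces):
    codes = faces @ encoder  # compress inputs into latent encodings
    decoder, *_ = np.linalg.lstsq(codes, faces, rcond=None)
    return decoder

decoder_a = fit_decoder(faces_a)
decoder_b = fit_decoder(faces_b)

# The "face swap": encode person A's face with the shared encoder, then
# decode it with person B's decoder.
swapped = (faces_a[:1] @ encoder) @ decoder_b
print(swapped.shape)  # one fake "B-style" face, same size as the input
```

The key design point mirrored here is that the encoder is shared between both people while each decoder is person-specific; that is what lets a single latent encoding be “translated” from one face to the other.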
Effectively, the resulting content shows the second person doing or saying things they never did in real life. There’s more to Deepfakes, though. This is just a basic overview; you can read this paper for a detailed explanation, and this GitHub repo demonstrates an implementation of Deepfakes for a range of scenarios.
How Do Deepfakes Make It Difficult To See The Truth?
On its own, the process described above would create fake content that isn’t hard to spot. Trained observers would find plenty of mismatches between the two individuals in facial features, style, body language, voice, and so on.
This is where the “Generative Adversarial Network” (GAN) comes in. A GAN pits two networks against each other: a generator that creates content and a discriminator that verifies whether the images and actions of the second person in the generated content look genuine enough.
To do this, GANs use a large number of images, audio clips, and videos of both individuals. Deepfake programs “train” their GANs with large datasets concerning both individuals.
In turn, the discriminator gives feedback to the generating network in the Deepfake program: it examines the content produced and rejects it if it doesn’t look genuine enough.
The generating network in the Deepfake program “learns” once more, possibly with an even larger dataset of images, audio, and video. It then creates better fake content, which the discriminator examines again. This process continues until the Deepfake program creates content that’s hard to identify as fake.
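The adversarial feedback loop described above can be sketched on a toy problem. This is a hedged, minimal example, not production GAN code: the “content” is a single number drawn from a target distribution, the generator and discriminator are each a two-parameter model, and all hyperparameters are arbitrary assumptions.

```python
import math
import random

random.seed(0)

REAL_MEAN = 4.0  # the "genuine" data: samples from N(4, 1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Generator: turns noise z into a fake sample, y = w*z + b.
w, b = 0.1, 0.0
# Discriminator: scores how "real" a sample looks, d(x) = sigmoid(a*x + c).
a, c = 0.1, 0.0

lr = 0.05
for step in range(2000):
    x = random.gauss(REAL_MEAN, 1.0)  # a genuine sample
    z = random.gauss(0.0, 1.0)        # noise fed to the generator
    y = w * z + b                     # a fake sample

    # Discriminator update: push d(real) toward 1 and d(fake) toward 0.
    p_real, p_fake = sigmoid(a * x + c), sigmoid(a * y + c)
    a -= lr * (-(1 - p_real) * x + p_fake * y)
    c -= lr * (-(1 - p_real) + p_fake)

    # Generator update: adjust (w, b) to fool the discriminator,
    # i.e. minimize -log d(fake).
    p_fake = sigmoid(a * y + c)
    dL_dy = -(1 - p_fake) * a
    w -= lr * dL_dy * z
    b -= lr * dL_dy

print(f"generator offset b = {b:.2f} (real data mean is {REAL_MEAN})")
```

The point of the sketch is the alternation: the discriminator’s rejections become the gradient signal that pulls the generator’s output distribution toward the genuine one, exactly the back-and-forth described in the paragraphs above.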
Commonly Discussed Dangers From Deepfakes
Although Deepfakes have emerged relatively recently, many people are already talking about the dangers they pose. Most observers comment on the following risks:
Political Disinformation Campaigns
By using Deepfakes, unethical political campaigners can launch highly effective disinformation campaigns against their opponents. Discussions around this risk are gathering momentum as the US presidential election approaches. Many observers have already debated whether the 2016 US presidential campaign was manipulated using modern technologies, and quite a few observers and politicians are wary of the effects that Deepfakes could have on future elections.
To illustrate this point, let’s look at an example from Belgium. Socialistische Partij Anders (sp.a), a Belgian political party, posted a Deepfake video on Facebook about Donald Trump, the US president. While sp.a clarified that the video was fake, the subject matter of the footage invited polarized reactions from viewers.
Political disinformation campaigns might not provide such clarifications in the future. Sophisticated Deepfake videos can mislead voters during an election campaign, and malicious actors can access Deepfake tools easily, which heightens the risk.
It takes time to detect whether a well-made Deepfake video is indeed fake, and elections are time-bound events. The longer it takes to conclusively expose a Deepfake video as fake, the greater the disadvantage to the targeted politician.
Celebrities Embroiled In Porn Related Scandals
A malicious actor can find thousands of videos showing a celebrity on the internet. These videos show their faces from many different angles. Moreover, one can find many samples of their body language and style. On the other hand, the internet is swarming with pornographic videos.
The sheer number of available videos provides an enormous amount of “training” data for Deepfake algorithms. Add the fact that Deepfake code is readily available on popular repositories like GitHub, and you are potentially looking at a surfeit of fake celebrity videos involving pornography.
Celebrities will likely find it hard to protect their reputations, which opens the door to crimes like extortion. Deepfakes also make crimes like “revenge porn” easier to pull off.
How Can Deepfakes Impact The Animation Industry?
While the risks mentioned above attract a lot of attention, they aren’t the only risks from Deepfakes. Deepfakes can significantly impact the global animation industry.
According to a market research report, the global animation industry is witnessing a healthy growth rate. From $305.75 billion in 2017, the global animation market will likely grow to $404.83 billion by 2023, a compound annual growth rate of 4.79% over the 2018–2023 period.
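For what it’s worth, the quoted figures are internally consistent; a quick check of the implied compound annual growth rate:

```python
# Sanity-check the quoted growth figures: CAGR over 2017-2023 (6 years).
start, end, years = 305.75, 404.83, 6  # market size in $ billions
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.2%}")  # ~4.79%, matching the reported figure
```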
Creativity and skills are at a premium as far as the animation industry is concerned. In recent years, this industry has seen a significant infusion of advanced tools which spurred the growth further. Animation designers have created a wide range of popular content.
With Deepfake programs lurking around, the animation industry faces significant risks. With so much animation content available easily on the internet, malicious actors can efficiently “train” their Deepfake programs.
These programs can create seemingly genuine animation content and undercut actual animation designers who rely on their creativity and skills. The more data available for training, the harder it becomes to differentiate fakes from original content.
Combating Deepfakes: The Way Forward
Many discussions about combating Deepfakes are already taking place. Two main strategies have emerged, which are as follows:
- Legislative and regulatory measures: This involves actions from governments and lawmakers. They need to study the risks from Deepfakes and take legislative measures. Countries that already have legislation in place to combat fake news and disinformation campaigns need to strengthen their regulatory frameworks to combat Deepfakes.
- Technical measures: Deepfake programs use deep learning to progressively improve the fake content that they create; however, deep learning can work the other way round too. By training deep learning algorithms with large datasets containing genuine and fake content, one can better spot Deepfake content.
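As a hedged sketch of the detection idea above, the toy example below trains a logistic-regression “detector” on a made-up one-dimensional “artifact score” feature. The feature, the class distributions, and all hyperparameters are illustrative assumptions, not a real forensic pipeline, but the principle is the one described: learn from labeled genuine and fake examples.

```python
import math
import random

random.seed(1)

# Toy feature: genuine content tends to have a lower "artifact score",
# Deepfaked content a higher one (a stand-in for the statistical traces,
# such as blending artifacts, that real detectors learn to pick up).
real = [(random.gauss(0.3, 0.1), 0) for _ in range(200)]  # label 0: genuine
fake = [(random.gauss(0.7, 0.1), 1) for _ in range(200)]  # label 1: fake
data = real + fake
random.shuffle(data)

# Logistic-regression detector trained by stochastic gradient descent.
w, b = 0.0, 0.0
lr = 0.1
for _ in range(200):
    for x, label in data:
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))
        w -= lr * (p - label) * x
        b -= lr * (p - label)

def is_deepfake(score):
    """Classify a new artifact score as fake (True) or genuine (False)."""
    return 1.0 / (1.0 + math.exp(-(w * score + b))) > 0.5

print(is_deepfake(0.25), is_deepfake(0.75))
```

Real detection systems replace the hand-picked scalar feature with deep networks trained on large corpora of genuine and fabricated media, but the supervised learn-from-both-classes structure is the same.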
While artificial intelligence has a lot of potential, Deepfakes use the same technology to expose us to risks. Not only can Deepfakes fuel disinformation campaigns, extortion, revenge porn, and more, they can also adversely impact the animation industry. Easily accessible Deepfake programs can create fake animations, which can seriously undercut animation designers and producers. Legislative and regulatory responses are essential to combat Deepfakes, but technical measures are crucial too. It’s time for the world to take note of the risks from Deepfakes and mitigate them adequately.
I do not have a political agenda in publishing this article. I simply wish to demonstrate how technology can be used for positive purposes for the enhancement of humanity, or not. — Shantesh Mani
Shantesh Mani is a data scientist and machine learning engineer.
Shantesh has founded several companies; some were successes, some were not. He has developed a formula over years of hard work, education, training, and overcoming challenges. He is now focused on his life mission, which has taken 10 years to form and execute.
Sharing, serving, empowering & growing, these values are ingrained as his core philosophy on how to live life.
He is building an organisation, unlike any other, a Human First, Negentropic Enterprise.
Shantesh has one mission…but it’s a big one!
To execute a hypothesis which he believes should be the future that this world deserves.
Want to know the hypothesis? Reach out to Shantesh, or wait until he shares the entire framework with the world, open-source and accessible for all.
He hopes to attract the greatest minds, to take it and improve it through their innovative efforts and input through agile iterations.
Everyone is talking about the next big tech disruption: AI changing the world. And yes, this is true.
Before we change the world, we must change ourselves.
Our core architecture is founded on putting humans first, without any exceptions, ever. An abundance mindset leads to successful and fulfilled humans who, from within, create vital and contagious energies vibrating at frequencies throughout the universe.
This results in deep insights into clients’ needs and market demands, a profound understanding of customers, and a superior ability to serve and work smarter, achieving greater efficiency, output, and earning capability.
Most importantly, empower people and build a community. Through listening, having empathy, awareness, commitment for growth, foresight, stewardship and excellence, Shantesh will achieve his mission.
- Develop competitive advantages.
- Build solid foundations.
- Structure Robust business models.
- Dominate industries.
- Educate people on the future of the world in the AI evolution.
- Embrace philosophies and have a personal constitution for life.
How will the AI evolution impact society?
How we work, function, live + play…
Are you ready for it?
His mission is to give value to as many people as he can, from business owners and developers right through to students and graduates trying to understand the recruitment process and the importance of having a digital footprint and building a personal brand. Please get in touch, he wants to hear your story!
We just launched a new Quora Space, please check it out :)
Author’s Final Note:
If you have read this article all the way to this point, I would personally like to thank you and sincerely express my gratitude and appreciation for taking the time to do so. I would love to hear your feedback and comments, regardless of the sentiment, it’s the only way I can learn, grow and provide greater value :) I am humbled by the amazing community on here and I am feeling blessed to be given the chance to connect with such wonderful people. — Shantesh Mani