How Deepfakes and AI are Changing the Real World

Brad Langdon
May 21, 2023
Watch the video version of this article

The number of deepfake videos found online has increased by roughly 900% per year since 2019. More importantly, the quality of these videos, and the ease of creating them, is ramping up.

The best examples of this so far fall into the category of novelty or entertainment. Results this good take real time and expertise to create.

A still from a music video by Kendrick Lamar

Now, at the other end of the spectrum we have videos like this one of Volodymyr Zelensky. It is clearly fake, and most people find it pretty laughable.

A deepfake of Volodymyr Zelensky that made it to primetime television

A slightly better example is this video of Joe Biden announcing that the US is joining the war in Ukraine, in which he is quoted as saying …

Remember, you’re not sending your sons and daughters to war, you’re sending them to freedom. God bless our troops and God bless Ukraine.

A deepfake of Joe Biden announcing that the U.S. will be joining the war in Ukraine

This video is far from perfect, and its narrative is pretty audacious too, which makes it less believable.

But before we dismiss the threat of fake videos like this, here are four points worth noting.

1. Expertise will spread

As with any new technology, the knowledge and expertise in this community will only increase. More experts mean more and better content.

2. The barrier to entry will decrease

The barrier to entry will only get lower as the technology becomes easier to use. A video that currently takes a week or so to make will eventually take hours or minutes.

3. The vulnerable are already at risk

Not everyone is equipped to identify even today’s poorly made fake content; the elderly are one example.

4. Deepfake strategy will mature

The tactics behind this fake content will mature. I think we’ll see fewer headline-grabbing videos like the Joe Biden deepfake and instead see a surge of more insidious, low-key fake content.

We’ve been through this before

When social media became mainstream back in 2010, the door was opened for any group with modest resources to influence public opinion at scale, though we didn’t know it at the time. AI-generated content will take this to a whole new level.

Fake news is as old as news itself

Fake news is nothing new, but thanks to AI the ability to generate and share it is easier than ever. As long as the right training data and instructions are in place, AI-driven content (pictures, videos, even research papers) can be created and distributed with very little human input. This means more content and greater apparent credibility.

For instance, take this fake post from 2019 that was viewed around 24 million times.

A fake post (with real photography) from 2019 that was viewed around 24 million times

Now imagine multiple versions of the same fake story being released at once, each with different titles, different images, and different videos, posted from different sources. This would create the illusion of widespread coverage, making it appear far more credible than the fake news of the past.

An example of a fake post generated with Midjourney (image) and ChatGPT (text)

And the side effect of good fake content becoming mainstream is that all content loses credibility: anything can be labeled as fake, and this, in my eyes, is the real threat.

Okay, enough about the media. Let’s look at how this technology can be used against everyday folk like you and me.

How else can this be used?

It’s not just public figures who can be faked. This reporter created a model of his own voice and can be seen here using it to access his online banking.

Deepfake audio scams are already happening today. In 2020, a bank manager in Hong Kong was tricked into initiating wire transfers worth 35 million dollars. Deepfake audio was used to impersonate one of the bank’s directors over the phone, combined with faked emails from that director backing up what had been said.

Okay, let’s rapid-fire through some other use cases.

Manipulating financial markets

On February 3, 2022, Meta lost $232 billion in market value. Why? A lackluster earnings report sent investors reeling. This type of announcement could easily be faked to influence stock prices in the near future.

In 2022, Meta lost $232 billion in market value due to a lackluster earnings report

Evidence tampering

What happens when fake content reaches a court of law? This has already happened in the UK, where a client of Expatriate Law was accused of threatening his wife over the phone and a fake audio clip was submitted as evidence during a family court trial.

She’d manufactured an audio file which appeared to sound as if the father of this child had said something that he never said. This is the really interesting bit, so she did it, she’s not an expert at all in these things but she managed through an online company with their assistance to edit an audio file. And the interesting bit is the data that she used to cause the edits was that she had hundreds of hours on her phone of footage of him speaking and so …

Scapegoating

Back in 2009, this audio clip of Christian Bale having an on-set meltdown went viral.

If this were to happen today and someone claimed it was a deepfake, it would be entirely plausible. The cry of “deepfake” might become a common scapegoat moving forward.

Coordinating violence

It’s not clear whether what Donald Trump said caused a mob to storm the U.S. Capitol in January 2021, but one thing is clear: being able to mimic influential figures at times of high tension could have real effects on human behaviour, especially when large groups of people are in the same place at the same time.

A scene from the Capitol riot in January 2021

Deepfake porn

Deepfake porn is already a huge problem. It accounts for the majority of deepfake video found online today and will be a growing concern for any woman with a presence online.

Fringe groups

Fringe groups like anti-vaxxers are no strangers to using misleading posts to push an agenda. These posts will become much more potent when subject matter experts or grieving parents can be faked.

Some recorded examples of misleading posts from the anti-vax movement

So, what can we do about this?

Surely there’s a way to detect and suppress fake content. Well, it turns out we have a good case study for this type of challenge: all we need to do is look to the world of malware.

Introducing, Malware

Brain, released in 1986, was the first in-the-wild computer virus; then came antivirus software such as McAfee.

A screenshot of the boot screen of a computer infected with the now famous “Brain” virus of 1986

This led to an arms race between virus creators and security researchers, each side trying to outsmart the other. This back and forth has played out over the last few decades: as malware evolves, anti-malware software evolves with it.

The catch is that virus creators always have the advantage: they make the first move. All security researchers can do is wait until a new virus has been identified and then account for it in their systems.

This is likely how the deepfake battle will begin. First, deepfakes become good enough to trick our eyes and ears; from there, it’s a back and forth between deepfake creators and deepfake detectors.

A deepfake showing Burt Reynolds starring in James Bond (right), with the original actor Sean Connery (left).

Detecting deepfakes with an algorithm

Facebook held the Deepfake Detection Challenge in 2019 and awarded a total of US$1 million to the top five entries. Participants built detector models, training them on a dataset of 100,000 deepfake videos. More than 35,000 models were submitted, and the winning model achieved an accuracy of 82% on the training set of videos but only 65% on a dataset it had not been exposed to.

The difference between these two numbers highlights the problem with training data, we can only train our detection models with videos that we know are fake. In other words, we don’t know what we don’t know.

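This limitation can be illustrated with a toy detector that only recognizes fakes it has already seen. The file names and data below are invented for illustration; real detectors learn visual artifacts rather than memorizing files, but the generalization problem is the same: they perform well on the kinds of fakes in their training data and much worse on techniques they have never encountered.

```python
# Toy illustration of the training-data problem: a "detector" that can
# only flag fakes it was trained on. All file names here are invented.

known_fakes = {"fake_a.mp4", "fake_b.mp4", "fake_c.mp4"}  # training set

def detect(video: str) -> bool:
    """Return True if the video is flagged as fake."""
    return video in known_fakes

# On fakes it has seen, the detector is perfect...
seen = ["fake_a.mp4", "fake_b.mp4"]
print(sum(detect(v) for v in seen) / len(seen))   # 1.0

# ...but a fake made with a novel technique slips straight through.
unseen = ["new_method_fake.mp4"]
print(sum(detect(v) for v in unseen) / len(unseen))  # 0.0
```

The gap between the two numbers mirrors the drop from 82% to 65% in the challenge results: a detector is only as good as the fakes it has been shown.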

Now what?

So if algorithm-based detection is not the whole answer, then what is? The ability to verify media files via independent sources, record and confirm file origins, and even apply digital watermarks could prove effective, and all of this could be baked into our media platforms. We’ll also likely see increased demand for reliable, independent journalism, with news organizations investing more in verifying content and debunking false information.
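One simple building block for this kind of file-origin verification is a cryptographic fingerprint: if a publisher releases the hash of an original clip, anyone can check that a copy is bit-for-bit identical. This is a minimal sketch using Python’s standard library (the byte strings stand in for real video files), not a full provenance system like those proposed for media platforms:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return the SHA-256 hex digest of a media file's bytes."""
    return hashlib.sha256(data).hexdigest()

# Stand-in for the raw bytes of a published video file.
original = b"...raw bytes of the published video..."
published_hash = fingerprint(original)

# An untouched copy matches the published fingerprint...
assert fingerprint(original) == published_hash

# ...while even a tiny edit produces a completely different digest.
tampered = original.replace(b"video", b"vides")
assert fingerprint(tampered) != published_hash
```

A hash only proves a file is unchanged since the fingerprint was published; pairing it with signed origin metadata is what would let platforms confirm who published it and when.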

We’re living through a digital revolution: so many things that were once science fiction are now real. Whether you find this exciting or unnerving, one thing’s for sure, the speed of change is only increasing.

I want to leave you now with a deepfake of Richard Nixon reading the speech that was prepared in the event of a failed Apollo 11 moon mission.

This speech was never read.

If you enjoyed this read, check out the video version below; it’s full of visual examples that don’t translate well to the written format.


Brad Langdon

Hi, I'm Brad. I make videos about the intersection of technology and culture. Take a look for yourself https://www.youtube.com/@bit_culture