Deepfakes in 2021 — How Worried Should We Be?

In a world so seemingly susceptible to false information, this is a threat we could probably do without.

Andrew Thirlwell
Predict
8 min read · Jun 9, 2021


Woman looking into a screen. Photo by Yoal Desurmont from Unsplash.

What is a Deepfake?

Before I go any further, it’s probably worth establishing what a deepfake is and isn’t. The term was added to the Collins Dictionary in 2019, which summarises it as:

A technique by which a digital image or video can be superimposed onto another, which maintains the appearance of an unedited image or video.

The term is often misinterpreted, and that’s potentially as a result of definitions like this.

Manipulating images and video in this way is certainly not a new idea. Visual effects artists working on Hollywood films back in the ‘90s would probably describe parts of their job in very similar terms. More recently, the Fast & Furious franchise achieved this exact result, allowing the late Paul Walker’s character to complete his story arc through a few body doubles, CGI, and an immense amount of skill and effort from the film’s visual effects team. Is that a deepfake? I’d argue that it’s at least not the phenomenon most of us are referring to or worried about; however, it is interesting to consider the differences.

Firstly, we know it’s fake. We go to the cinema to be knowingly deceived. We want to get caught up in a world that is not our own and believe the story we are being told.

More importantly, a vast amount of effort, skill, and money is invested to create such a believable effect. For an average film, visual effects can eat up anywhere from a fifth to half of the overall production budget. It is not unheard of for a single scene to rack up a bill of well over 50,000 dollars. Money is of course a key factor, but credit should be given to the incredibly talented people who have dedicated their lives to the magic of visual trickery.

Enter Reddit User /u/deepfakes

2017 saw a monumental shift in the landscape of fake media and marked the origin of the term and process we now call a “deepfake”. A Reddit user began publishing pornographic videos in which the performers’ faces had been swapped with those of popular actresses (allow me to set aside the highly questionable ethics of this for just a second). Although not perfect, the videos showed a staggering improvement in what we thought was possible for a single person to create. Alarmingly, this individual did not need to toil endlessly to stitch together each frame. Instead, they used artificial intelligence, specifically neural networks, to mimic facial expressions and positions.
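To make the mechanism a little more concrete: the original face-swap approach is widely described as an autoencoder with one shared encoder and a separate decoder per identity. The sketch below is a toy illustration of that idea only; every name and dimension here is invented for demonstration, and real systems use convolutional networks plus face detection, alignment, and blending.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    # Small random weight matrix standing in for a trained dense layer.
    return rng.standard_normal((n_in, n_out)) * 0.1

class FaceSwapAutoencoder:
    """Toy sketch: one shared encoder, one decoder per identity."""

    def __init__(self, pixels=64, latent=16):
        self.enc = layer(pixels, latent)            # shared encoder
        self.dec = {"A": layer(latent, pixels),     # decoder trained on person A
                    "B": layer(latent, pixels)}     # decoder trained on person B

    def encode(self, face):
        # Compress a face into an identity-agnostic latent code.
        return np.tanh(face @ self.enc)

    def reconstruct(self, face, identity):
        # Training minimises the reconstruction error per identity,
        # which forces the encoder to learn features shared by both.
        return self.encode(face) @ self.dec[identity]

    def swap(self, face_of_a):
        # The trick: encode A's face, then decode with B's decoder,
        # producing B's appearance with A's expression and pose.
        return self.encode(face_of_a) @ self.dec["B"]

model = FaceSwapAutoencoder()
face_a = rng.standard_normal(64)   # stand-in for a flattened face crop
swapped = model.swap(face_a)
```

This is also why a set of training images of both people is all an attacker needs: the same training loop, run on enough aligned face crops, learns both decoders automatically.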

It did not take long for these videos to gain traction, and with the release of the software behind them, new videos were quickly finding their way into our newsfeeds. Crucially, these programs did not require any real expertise and were openly accessible to anyone with a computer and a set of training images. The flurry of these early videos caused an outpouring of chaotic and frantic headlines predicting a doomsday for truth, along with the imminent global uprising and destabilization it would cause.

“You thought fake news was bad? Deep fakes are where truth goes to die” — Oscar Schwartz, The Guardian

“BE AFRAID, FAKE NEWS IS ABOUT TO GET EVEN SCARIER THAN YOU EVER DREAMED” — Nick Bilton, Vanity Fair

“Deepfakes Are Going To Wreak Havoc On Society. We Are Not Prepared” — Rob Toews, Forbes

Deepfakes in 2021

Sitting here in 2021… I repeat, I am indeed sitting here in 2021. The predicted deepfake-enabled chaos has not (as of yet) caused global destabilization and unrest. Most of us scarcely hear the term muttered anymore, certainly not anywhere near the scale some predicted. So what has happened so far, and were the headlines warranted?

Deepfake Pornography

Unsurprisingly, malevolent deepfake usage similar to that of its origins is rife. New applications of the technology have also emerged, with yet another disturbing glimpse into our potential future provided in 2019 by an anonymous developer who created the “DeepNude” app. It allowed users to take photos of clothed women and then generate images of them with the clothes removed. It’s incredibly difficult to imagine this app being used for anything but non-consensual pornography, and it was thankfully removed within a day of its release.

It is estimated that around 95% of all deepfake videos are pornographic. Its prevalence is evident, as are the ever-growing ethical and legal debates surrounding its usage in this way.

Unfortunately, legislation still seems to lag behind this form of malicious and particularly non-consensual use. As is the case in much of the world, here in the UK there are currently no specific laws tackling image and video manipulation of this kind; affected individuals must rely on more general image rights and copyright laws to seek compensation and justice. We are, however, seeing a somewhat more positive response from large social media and sharing platforms, with deepfake pornographic content banned on Gfycat, Twitter, and Reddit amongst others. Still, for the powerless victims of deepfake pornography of any kind, it is clear that more action is urgently required on both sides.

The Deepfake Which Sparked an Attempted Coup

When deepfakes first surfaced, a major fear concerned not the believability of any particular video but the doubt left behind when you cannot prove whether what you are seeing is the truth. This fear had its first dose of reality in the African country of Gabon.

Reports of President Ali Bongo being hospitalized began circulating in 2018. After several months of radio silence and growing speculation and unrest over the controversial president’s health, a video was released which appeared to show Bongo addressing the nation. Some saw this video as proof that the rumours of ill health were unfounded. However, an ever-growing number began to question the authenticity of the video, suspecting AI or deepfake technology was masking the truth.

Adding fuel to an already unstable situation on the ground, the video seemed to play a significant part in a failed coup attempt just a week later. Only later would several experts judge the video to be genuine, with little to no evidence that it was anything other than an awkwardly scripted message from a man in ill health.

“The great enemy of the truth is very often not the lie, deliberate, contrived and dishonest, but the myth, persistent, persuasive and unrealistic” — John F. Kennedy

Nancy Pelosi’s Slurred Speech

A video began to circulate in 2019 seemingly showing the Speaker of the House, Nancy Pelosi, slurring her words. Compared to other such videos, this was an incredibly low-effort and shallow fake, simply edited to exaggerate pauses and stutters in her actual speech. Regardless of its crudeness, however, the video was circulated widely by high-profile social media users, including Donald Trump. Although it was quickly debunked, alarm bells have rightfully been sounded at the thought of such a low-quality video gaining so much traction.

Tom Cruise on TikTok?

There are two sides to every coin, and the good people of the internet have provided countless examples of comedic, entertaining, and mostly harmless uses of the technology. It has provided answers to age-old questions such as “What would FRIENDS have been like if every character was played by Nicolas Cage?”. These videos are incredibly popular, and it’s no surprise that we are now seeing an explosion of parody deepfakes on TikTok, most notably an account dedicated to a fake Tom Cruise with over 1.4 million followers (see the video below for an interesting dive into the behind-the-scenes).

Whether even the well-intentioned use of deepfakes is a step too far in the eyes of the public, and in particular of the celebrities involved, is up for debate. What we do know is that without it, one smart YouTuber would never have put Sylvester Stallone’s face on Macaulay Culkin’s body, and we could never have witnessed the masterpiece that is “Home Stallone”.

Comparison to 2017

It’s incredibly easy to get caught up in the hype which seems to surround eye-catching breakthroughs of any kind, and that has been especially true of anything AI-based. Sudden leaps forward are common, but they are often short-lived; afterwards, progress builds on top of them at a more mundane and predictable pace until the next breakthrough, often many years down the line. The term “AI winter” was coined to describe this kind of cooling-off, and we’ve seen countless examples in the past which match the trajectory of the past four years of deepfakes.

In 2021 the deepfakes which an average person can create are noticeably better than what was possible in 2017. Images are often clearer, with fewer artefacts and blending issues. However, they are still far from perfect, with a vast amount of training data and effort required to create something which could trick the human eye.

We have also seen a similarly steady influx of researchers developing their own AI systems to detect manipulated images and videos. These systems have been shown to work incredibly well on even the most advanced manipulations available today. There is also no doubt that these detection systems will continue to advance as more and more researchers begin to tackle the issue.
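To give a flavour of how such detectors can work: one family of published approaches looks for the unusual frequency-domain fingerprints that generative upsampling layers tend to leave behind. The sketch below is a deliberately minimal illustration of that idea; the function names, the fixed core size, and the 0.5 threshold are all invented for demonstration, whereas a real detector would learn its decision rule from large labelled datasets.

```python
import numpy as np

def highfreq_energy_ratio(img):
    """Fraction of an image's spectral energy lying outside a
    low-frequency core. Natural photographs concentrate energy at
    low frequencies; some synthesis pipelines distort this balance."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spec.shape
    ch, cw = h // 4, w // 4  # central half of each axis = the "core"
    core = spec[h // 2 - ch: h // 2 + ch, w // 2 - cw: w // 2 + cw]
    return 1.0 - core.sum() / spec.sum()

def looks_synthetic(img, threshold=0.5):
    # Toy decision rule: flag images with unusually flat spectra.
    # The threshold here is arbitrary, purely for illustration.
    return highfreq_energy_ratio(img) > threshold
```

Running this on a smooth gradient versus pure noise shows the heuristic separating the two: the smooth image scores near zero, while noise spreads its energy across the whole spectrum. The cat-and-mouse dynamic the article describes follows directly, since generators can be trained to suppress exactly the statistics a detector measures.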

A Prediction for the Future

Even without sudden technological leaps, steady improvement is likely to continue for the foreseeable future. As a result, our confidence in telling real from fake by sight will continue to shrink; our eyes alone may no longer be sufficient to make an informed judgement about what is real.

My inner optimist hopes that this will trigger a long-overdue shift in the average individual: no longer mindlessly watching whatever pops into their feed, but questioning where content originated and whether it should be trusted. As a result, we could see more emphasis and value placed on being a reputable source of information, with additional support from social media giants, who could favour content from reputable origins.

If the average person does not adapt, our last line of defence will be the AI detection systems and the researchers working on them. It’s unreasonable to expect these systems to be perfect, and constant evolution will be required to keep pace with countermeasures on the other side. However, I remain hopeful that they will remain good enough to maintain a high barrier to entry for anyone nefarious enough to try to fool the world.
