Can We Stop Deepfakes?

While a lot of self-proclaimed tech experts are claiming that ChatGPT will rename itself to Skynet and start annihilating humanity, a problem much closer to home is already plaguing us: deepfakes. It seems that, once again, humanity's biggest threat is itself. In this article, I'll dive into what the problem is and what (if anything) can be done about it.

Deepfakes are a Problem that Affects Everyone

While it can be easy to roll your eyes at your grandpa's livid Facebook comment below an obviously AI-generated photo of his favorite political candidate, it's important to realize that deepfakes pose far more dire threats:

  • Voice cloning can allow bad actors to impersonate you. Maybe you were going to take on a project for a client, but then “you” called the next morning to cancel it.
  • Revenge porn reaches a whole other level when it's no longer necessary to have an unclothed photo of a person. Your whole reputation could be ruined by a kid armed with nothing but your LinkedIn profile picture and a text-to-image app.
  • “Puppet-mastering” is the ability to make someone's face match an unrelated block of speech. Imagine a recorded Zoom call where you were perfectly sober, but a doctored version now “proves” you were drunk (even if it's only a cheapfake). And since none of your Zoom meetings are truly private, that's a good reason to stop using Zoom.

Deepfakes in and of Themselves Are Not Evil

Now, before I go any further, I’d like to reiterate that deepfakes in and of themselves are not evil. There are many noble uses of deepfakes that do a world of good:

  • Recreating a deceased celebrity in a film or music video.
  • Keeping someone's identity hidden in online videos while still giving them a friendly, human face, especially in a Big Brother era.
  • …or just for entertainment's sake.

What's most damaging is the intention behind deepfakes. Obviously, with the image above, the intention wasn't to ruin Sony Pictures' reputation by convincing people Ron Swanson played Wednesday Addams; the point was amusement.

But not so in a lot of deepfake cases.

Damaging Intentions of A.I.

Damaging A.I. (deepfakes deployed with harmful intent, which I'll refer to as DAI) tends to fall into two camps: misinformation and disinformation, with disinformation being the far bigger concern (and yes, there is a difference!).

Misinformation would be someone seeing a video of Nancy Pelosi appearing drunk that was made for humor, then sending it to The Blaze as proof that Speaker Pelosi doesn't belong in office. While damaging, misinformation is (for the most part) unavoidable, because intent can be taken out of context at any time.

The bigger threat is disinformation. Even before the rise of AI, disinformation was already threatening to affect the US election.

I’m intentionally leaving this section unanswered, and you’ll see why in a second. Before we can understand how to stop deepfakes, we must first understand the reason behind them.

Detecting and Stopping DAI

As long as DAI has existed, there have been people trying to stop or eliminate it.

Khalid Malik and his team provide a dense summary of all the known methods of creating deepfakes, along with ways to detect them. These include:

  • Audio/visual face-swaps, detected either by using Speeded-Up Robust Features (SURF) or by feeding the frames through a neural network (a minimal sketch of the latter follows this list)
  • Lip-syncing, detected by identifying facial features or spotting inconsistencies between the audio and mouth movement
  • Puppet-mastering (e.g. making someone say something they wouldn't normally), detected by finding inconsistencies like eye color, or by employing deep learning to learn what constitutes a “puppet-mastered” video.
  • Face synthesis, detected by comparing against known synthesis samples.
  • Audio deepfakes, detected by comparing against known real samples.
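To make that neural-network route concrete, here's a minimal sketch in Python: pull frames from a clip with OpenCV, score each one with a binary classifier, and average the scores. The `FrameClassifier` below is a made-up toy for illustration only; a real detector, like the ones Malik's team surveys, would be a far larger model trained on labeled real/fake footage.

```python
# Hypothetical sketch of frame-level deepfake detection.
# The tiny CNN is a stand-in for a real, trained detector.
import cv2           # pip install opencv-python
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    """Toy binary classifier: outputs a per-frame fake probability."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        # x: (batch, 3, H, W) -> fake probability in [0, 1]
        return torch.sigmoid(self.head(self.features(x).flatten(1)))

def fake_score(video_path: str, model: nn.Module) -> float:
    """Average the per-frame fake probability across a whole clip."""
    cap = cv2.VideoCapture(video_path)
    scores = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.resize(frame, (224, 224))
        # HxWxC uint8 image -> 1xCxHxW float tensor scaled to [0, 1]
        tensor = torch.from_numpy(frame).permute(2, 0, 1).float() / 255.0
        with torch.no_grad():
            scores.append(model(tensor.unsqueeze(0)).item())
    cap.release()
    return sum(scores) / len(scores) if scores else 0.0

# Usage (the model would first need training on labeled real/fake frames):
# model = FrameClassifier().eval()
# print(fake_score("suspect_clip.mp4", model))
```

Averaging over the whole clip, rather than trusting any single frame, is a common aggregation choice because swap artifacts tend to show up intermittently rather than on every frame.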

So, now we have deepfake detection down. What about stopping them?

Well, this is a bit of a challenge (at least in the States) because of freedom of speech. Unless a deepfake is proven to be defamation or a genuine threat (the equivalent of shouting “fire” in a crowded theater), it's up to the platforms to decide how to handle it.

Law isn't the only issue; technology is also a limiting factor. The University of Kent published a paper on the various means of detecting deepfakes and, in the end, concluded that there are serious obstacles to detecting DAI, most notably speed and accuracy.

If 1,000,000 DAI images are generated per day and a system detects 99.9% of them, there are still 1,000 images spreading disinformation.

And if I can make a fake news image in Midjourney in 30 minutes and publish it to Twitter, while it takes 2 hours for the image to pass through the site's scanning queue, then taking down DAI is really an arms race.
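To put concrete numbers on that arms race, here's the back-of-the-envelope math. Every figure is an illustrative assumption carried over from the paragraphs above, not a measured statistic:

```python
# Illustrative numbers from the text above; none of these are measured.
generated_per_day = 1_000_000   # assumed daily volume of DAI images
detection_rate = 0.999          # assumed detector accuracy (99.9%)

missed_per_day = generated_per_day * (1 - detection_rate)
print(f"Fakes slipping past detection each day: {missed_per_day:,.0f}")  # 1,000

create_minutes = 30   # time to make a fake in Midjourney and post it
queue_minutes = 120   # assumed delay before the platform scans the post

# While one fake sits in the scanning queue, the same creator can post more.
posted_while_waiting = queue_minutes // create_minutes
print(f"New fakes posted while the first awaits scanning: {posted_while_waiting}")
```

However accurate the scanner, detection is reactive: every fake gets a free head start equal to the queue delay.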

But what if the solution isn't a technical one? And what if it's not as clean as we'd like?

A Deep-Dive into Deep Porn

What's often overlooked in discussions of deepfakes is pornography (I'm not sure if Medium flags these keywords, but I've seen it's the hip thing to do).

One of the most comprehensive studies on deepfake porn was done by Dilrukshi Gamage. In the paper, she analyzes the /r/deepfakes subreddit, which was taken down in 2018, though similar communities still flourish elsewhere.

In it, she dives into moral foundations data to identify what drives the /r/deepfakes community. Without getting into too much detail (it's a fascinating read, so I'd recommend downloading the paper!), she found that the strongest moral foundation people expressed was Loyalty/Betrayal, and the weakest was Authority/Subversion.

It does make sense if you think about it. Most of the comments cited in the paper (“Anyone want to deepfake nude your crush or anyone?”, “You can do deepfake using images, no need nudes of yours, this can be generated”) have a tone of “we're all looking out for each other here.”

This all sounds terrible considering it's porn, but keep in mind we're all human. We're all complex. We all do stupid, immoral things, and at the same time we all have virtues. Gamage's paper simply shines a light on where the ethical frameworks of deepfake creators lie.

A Path Forward

So, just like with school shootings or homelessness or police brutality, I don't think the answer to DAI is simple. But I don't think it's something we can brush aside and refuse to deal with, either.

As it stands, I think the most powerful solution is a no-tech solution.

This is why it's taken so long to write this article: the way people are handling this problem frustrates me. Instead of asking “why?” we're all asking “how?” Instead of asking “why are people generating deepfakes?” we immediately jump to “how can we stop them?” If we do that, we'll never win the war against DAI. The only paper that came close to answering the why was Gamage's study of the deepfake subreddit. If we can understand why deepfakes are spreading, we can solve the problem behind the problem and limit the power of DAI.

This begins by asking questions.

On one side, if you have a friend who's making deepfakes, seek to understand why he's doing it. If it's for political gain, dig into that. If it's deepfaked Emma Watson nudes, consider whether he's truly accepting himself as a beautiful human being. Whenever someone causes pain to others, it's usually because they carry unresolved hurt of their own. And if your friend is doing something illegal, do the right thing and report him.

Then there's the other side of the battle: the receiving end of DAI. This one may be trickier. Whenever you see a picture on social media, check the source, especially if it triggers an emotional response. If it doesn't pass the sniff test, say so in a comment underneath the post. And if you're part of a deepfake porn community, perhaps it's time to go elsewhere: maybe outside on a walk, or (while I don't condone porn in any fashion) to sites where actors willingly disrobe. It may not give the same thrill as subverting someone, but it's far more ethical.

Conclusion

This article does deviate from my usual coding and software articles, but I think it's important. As a tech lead, I know I'm here not only to build products, but to weigh the effects those products have on customers and clients.

These are things we must all consider as we go about our day. Taken too far, it can cause crippling anxiety (Oh no! I stepped on an ant!), but if it never crosses our mind, we're not truly serving society the way great men and women are called to.

While this article doesn’t provide any definitive answers, I hope it gave you something to ponder. These are big issues, and it’s not something any one person can solve.

But it is something we can all solve together.

👉‍‍ Share this article with 3 of your friends or colleagues on Twitter, LinkedIn, or Mastodon. Be sure to tag me in the post; it helps me know if my content is still relevant.

📢 Comment below: Have you heard of a solution to stop DAI? Is the only solution low-tech?

💓 Subscribe to DamnGoodTech on Ko-Fi for as little as $7/mo. Get articles 3 days early and get a shout-out on each article! That’s like hiring a team lead for your software organization for way less than minimum wage. 🙏🏻 Special thanks to James N, Lucy R, and Steve O for your support.

💼 Hire me as a Tech Creation Lead on your team. I have over 10 years development experience and would love to help your team reach their full potential. Head on over to https://damngood.tech/pages/schedule.html to schedule a free consultation.


Jordan H (Principal, Damn Good Tech)
Senior Full Stack Developer & Tech Lead #openforwork