What you see is what you get?

About Deepfake in 2024

Alvira Frish
NI Tech Blog
7 min read · Mar 12, 2024


Generated via Midjourney

“Falsehood flies, and the Truth comes limping after it.”

If you search for this quote, you will find a very similar, almost identical version attributed to Sir Winston Churchill. But search for the context and you will find no answer. Instead, you will find the quote attributed to Mark Twain. When did he say it, and why? Again, no answer. Churchill and Twain are both credited with something they never said.
The original quote, cited above, belongs to Swift. Jonathan Swift. The Anglo-Irish writer, best known for Gulliver’s Travels, described as early as 1710 one of the most significant threats to media, politics, and cyberspace: fakery, which in 2024 is reaching its peak with the deepfake.

Flash forward: another Swift, Taylor Swift, becomes the casualty of a deepfake scandal.
In January 2024, sexually explicit AI-generated images of Swift were spread across the platform X (formerly Twitter). Since the fastest animal in nature is the fake-news beast, one of the posts on X was seen 47 million times(!) before being removed.
The fact that Taylor Swift was the victim made it a significant incident, drawing responses from the White House (“alarming”), Microsoft CEO Satya Nadella (“alarming and terrible”), SAG-AFTRA (“harmful and deeply concerning”), and her fanbase (the #ProtectTaylorSwift flood).

Ready for It?

So what is this “deepfake” that is considered the new frontier of fraud?
According to the U.S. Government Accountability Office (GAO), a deepfake is “a video, photo, or audio recording that seems real but has been manipulated with AI. Deepfakes can depict someone appearing to say or do something that they in fact never said or did”.
We are talking about a not-so-new technology that evolved in the AI field, the same field that improves our lives.

Yes, deepfake can be beneficial. How?

  1. Marketing field: better, smarter visualizations with a new level of user experience.
  2. Legal and investigations: digitally generated images can enhance the forensics field.
  3. Art: creative effects in movies and video clips, already in use in Hollywood.
  4. Communication and education: need a museum instructor speaking Polish? Not a problem.
  5. Awareness: spreading important messages with well-known celebrities, in multiple languages, can drive engagement all over the world. For example: David Beckham’s “Malaria Must Die” petition.
  6. Accessibility solutions: the technology can help improve accessibility tools with individual voices and looks.
Generated via Midjourney

I Knew You Were Trouble

Similar to almost any other new technology in history, it was only a matter of time before the bad guys arrived.
They started using deepfakes, but instead of fighting malaria, their target is different: disinformation.
According to the Merriam-Webster Dictionary, Disinformation is “false information deliberately and often covertly spread in order to influence public opinion or obscure the truth”.
Disinformation can be spread at light speed, by bots as well as by humans.
After the rise of ChatGPT in 2023, it looks like 2024 belongs to AI image and photo generators.

So they are lying. Why is it (so, so) bad?

  1. Fake news: the number one challenge today in media and politics. Deepfakes can misrepresent people and, for example, influence elections. Only recently, a fake Joe Biden robocall urged New Hampshire voters not to vote.
  2. Psychological weapon: especially in wars and terror attacks.
  3. Undermining public trust: if we can’t know what is true and what is not, how can we handle anything we see or hear? This is a pathway to chaos, from the “earth is flat” theory to “Australia does not exist” (shared by 200k people on Meta).
  4. Humiliation: including fake pornography.

Today, deepfake tools are becoming increasingly accessible at not-so-high costs, and a lot of money is being invested in them.

Generated via Midjourney

Look What You Made Me Do

Before we step into future solutions, let’s think for a second: what would we require from our future protective anti-deepfake tools? It’s complicated.

Fighting deepfakes, first of all, should be fast: once a deepfake picture is uploaded to a social media platform, it is a matter of seconds until it spreads all over. The future solution should be capable of dealing with a deepfake immediately, not with a 2–3 day delay.

Let’s say the solution is immediate: we recognize the deepfake pictures and, with the collaboration of the social media platform, they get blocked. Is that a good idea? The post will be blocked, but it will still generate traffic, because users will be curious and everyone will talk about it. Another option under consideration is to limit the traffic instead.

We are also talking about huge amounts of data. Even if we decide to check each photo for deepfake signs, how much time will it take? Now think about hundreds of millions of pictures and videos. Our future solution should know how to handle data at that scale.
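
To make the scale problem concrete, here is a minimal sketch of one building block such a solution could use: perceptual hashing, which reduces every image to a tiny fingerprint so that re-uploads of an already-flagged deepfake can be matched in microseconds instead of re-running a heavy detection model on each copy. This is an illustrative assumption about how a platform might work, not a description of any real system; the average-hash algorithm below uses the Pillow library, and the file names are hypothetical.

```python
# A minimal sketch, not production code: average-hash ("aHash") fingerprints.
# Assumption: the platform keeps the hashes of already-flagged images and
# compares every new upload against them before any heavier analysis.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Reduce an image to a 64-bit fingerprint (for size=8)."""
    # Shrink to a tiny grayscale thumbnail so the hash survives resizing,
    # recompression, and small color tweaks.
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:  # one bit per pixel: brighter than the mean or not
        bits = (bits << 1) | (p > mean)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count differing bits; a small distance means a near-duplicate."""
    return bin(a ^ b).count("1")

# Usage sketch (file names are hypothetical):
# flagged = {average_hash("known_deepfake.jpg")}
# new_upload = average_hash("new_upload.jpg")
# if any(hamming_distance(new_upload, h) <= 5 for h in flagged):
#     print("Near-duplicate of a flagged image; route to review")
```

Matching a fingerprint is cheap enough to run on every upload, which addresses the speed requirement; deciding whether a brand-new image is a deepfake in the first place is the hard part and still needs heavier tools.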

What is the solution?

Deepfakes might increase in 2024, but we still don’t have a formal, validated solution.
These days we can already see the rise of the good guys, who want to fight the “bad” AI.
It might be an approval process for legitimate sources, such as a watermark or some kind of stamp.
It might be tools that scan for the truth; startups are already raising money and working hard.
At some point, we will need the efforts of the social media platforms, and academia will step in with its own findings in this field.
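
To illustrate the watermark-or-stamp idea, here is a minimal sketch under a big assumption: a legitimate publisher signs each image with a private key, and anyone can verify that stamp against the publisher’s public key. Real provenance standards (such as C2PA) embed richer signed metadata; the toy below, whose helper names are mine, only shows the core sign-and-verify mechanics using the Python cryptography library.

```python
# A toy provenance "stamp": the publisher signs the image bytes, and anyone
# can verify the signature. Not C2PA or any real standard, just the core idea.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_image(image_bytes: bytes, private_key: Ed25519PrivateKey) -> bytes:
    """The legitimate source stamps the exact bytes of the image."""
    return private_key.sign(image_bytes)

def is_authentic(image_bytes: bytes, signature: bytes, public_key) -> bool:
    """Anyone holding the publisher's public key can check the stamp."""
    try:
        public_key.verify(signature, image_bytes)
        return True
    except InvalidSignature:
        return False

# Usage sketch:
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
photo = b"...raw image bytes..."
stamp = sign_image(photo, private_key)
print(is_authentic(photo, stamp, public_key))              # True
print(is_authentic(photo + b"edited", stamp, public_key))  # False
```

Note the limitation this toy makes obvious: a signature over raw bytes breaks the moment the image is recompressed or resized, which is exactly why the real standards sign structured metadata rather than the pixels alone.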

Remember the flying falsehood from the start of this article? Well, if the truth is limping, regulation is just starting to mumble in its sleep. Swift’s case prompted the regulators to begin speaking. This is a necessary step, one that will create a significant headache for both the good guys and the bad guys.
This past February, the FTC proposed amendments to the Trade Regulation Rule on Impersonation of Government and Businesses. The amendments would add individuals (alongside government and business) to those who are forbidden to be impersonated. This comes following an increase in complaints about impersonation fraud, in parallel with the rise of deepfakes.
This update joins the “No AI FRAUD Act” (in short: used my face or voice for your AI-generated content without my permission? I have IP rights) and the “DEFIANCE Act” (in short: compensation for deepfake victims).

Chandler: Must be a virus. I think it erased your hard drive.

Ross: What? Oh, my God. What did you do?

Chandler: Someone I don’t know sent me an e-mail and I opened it.

Ross: Why? Why would you open it?

Chandler: Well, it didn’t say, “This is a virus.”

From Friends, “The One in Barbados, Part 2”, S9E23, 2003.

Is It Over Now?

What should we do? First: relax. Deepfakes did not start a month ago, when a finance employee transferred 25 million dollars after allegedly talking with the CFO on a video call. Back in 2017, it was the then US president, Donald Trump, who showed off his Mandarin skills in a deepfake video. And in 2022, the president of Ukraine, Volodymyr Zelenskyy, allegedly surrendered to Russia; the war is still going on, as you probably know.

Deepfake is not a new, mysterious virus; the cyber community is familiar with it. While well-planned deepfake attacks like those above are not common, we do have other, no less dangerous threats in our cyberspace that can impact us: phishing, vishing (who needs a deepfake when I can just pretend to be your newly recruited bank clerk), smishing, and so on.
In our private and working lives, we must find the balance between awareness and peace of mind. Being afraid to open every email we receive is not a way to handle the threat.

Instead, we should make sure we are familiar with the main red flags of social engineering attacks. For the most common attack vector, phishing emails, those red flags include: a suspicious domain name, an unknown or unfamiliar sender, urgent language, spelling and grammar mistakes, requests for information, requests to download files or click on links (remember to hover!), and poor graphics.
Most of these red flags can be adapted to other types of social engineering, including deepfakes. A Nigerian prince promised you his grandpa’s inheritance? Your CEO asked you, in bold capital letters, to buy them a $1,000 gift card ASAP? It doesn’t matter whether they emailed you, called you, texted you, or held a deepfake video meeting with you. Stop, take a breath, and double-check.
Be aware of these kinds of scams, and always check the source if something looks suspicious.
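
Just to show how mechanical some of these red flags are, here is a toy triage sketch. Everything in it is an assumption made up for the example (the trusted-domain list, the keyword list, the scoring threshold); real email security tools use machine learning and far richer signals.

```python
# A toy phishing triage sketch: score an email against a few red flags.
# All lists and thresholds below are invented for illustration only.
import re

URGENT_WORDS = {"urgent", "immediately", "asap", "blocked", "suspended"}
TRUSTED_DOMAINS = {"mycompany.com"}  # hypothetical allow-list

def red_flag_score(sender: str, subject: str, body: str) -> int:
    score = 0
    # Red flag: sender domain we don't recognize.
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain not in TRUSTED_DOMAINS:
        score += 1
    # Red flag: urgent language in the subject or body.
    text = f"{subject} {body}".lower()
    if any(word in text for word in URGENT_WORDS):
        score += 1
    # Red flag: a link we are being pushed to click.
    if re.search(r"https?://", body):
        score += 1
    # Red flag: SHOUTING in the subject line.
    if subject.isupper():
        score += 1
    return score

# Usage sketch, modeled on the "your user is blocked" email mentioned below:
score = red_flag_score(
    sender="it-support@mycompany-helpdesk.example",
    subject="IMPORTANT - YOUR USER IS BLOCKED",
    body="Click here immediately: http://totally-legit.example/reset",
)
print("Suspicious" if score >= 2 else "Looks OK", f"(score={score})")
```

The point is not the code but the habit: each red flag is a cheap, independent check, and a couple of them together are enough reason to stop and verify through another channel.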

Generated via Midjourney

Remember, sometimes the challenge is not in identifying the attack but in reacting appropriately (reporting to the security team, avoiding clicking links, not downloading files, not giving away sensitive information, and so on). Attackers usually try to target you in the middle of the day, when you are busy, or at the end of the day, when you just want to get out of the office and the “IMPORTANT: your user is blocked, please click here” email will catch you unfocused.
Protect yourself by, for example, using MFA (multi-factor authentication) on social media, and ensuring you have backups for your photos.

Last but not least: social responsibility. Do not take part in this cybercrime, and do not pass on or spread any fake news or deepfake photos. Received an allegedly intimate picture of a celebrity? A message about an identified casualty of war? Don’t read it, don’t watch it. Just report it.

In sum, deepfake is here to stay, for better and for worse. As with every new technology, we should use it wisely while protecting ourselves from the threats it might pose. As time passes, we will see more clearly the need to fight malicious deepfakes, and the fight will be fought with a combination of smart tools, social media platform cooperation, regulation, and our own responsibility. Since only the last one is in our hands, we can start doing our part immediately.
