Wait… Shouldn’t We All Be Investing in Deepfake Detection? An Anti-Generative AI Market Map

Elizabeth Peng
Charge VC
Apr 24, 2024

tl;dr: This article explores the deepfake detection / “anti-AI” security market landscape and 3 opportunities for builders and investors.

Deepfakes have been a problem for a while. Not to be all “old man yells at cloud,” but we at ChargeVC have been writing about this since 2020. Most recently, an employee wired $25 million to scammers impersonating his CFO over video conference. Video conference!

Chilling.

Deepfakes are about to explode in number and sophistication, especially because new generative AI video, audio, and image tools make it easier than ever before to generate and manipulate content.

What’s interesting is that most VCs don’t seem to be paying much attention to the deepfake detection and anti-AI security space. More than $2.7 billion has been invested in consumer generative AI content tools, but only $500 million in deepfake detection (PitchBook). That’s surprising, given that deepfakes can cost companies millions and, according to one study, fake news cost the global economy $78 billion in 2020. Are investors right?

Maybe deepfake detection tools simply can’t keep up, so we should just make creators and publishers embed provenance data and call it a day. That’s what C2PA, a joint effort among Adobe, Arm, Intel, Microsoft and Truepic, aims to do with its new technical standard.

To dig deeper, I looked at how startups and incumbents are fighting deepfakes (market map below):

There are three major ways players are addressing deepfakes:

Method #1: Detection tools use various techniques to determine whether an image or video has been manipulated or created by AI. Some of these companies, like BioID, Clarity, and Kroop, use AI models trained on real and fake images to spot the differences.
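
As a very rough illustration of that classifier-based approach, here’s a minimal sketch (in Python, assuming PyTorch/torchvision) that fine-tunes an off-the-shelf image model on a labeled folder of real vs. AI-generated images. The dataset path, backbone, and hyperparameters are placeholder assumptions, not any particular vendor’s pipeline.

```python
# Minimal sketch of the classifier-based detection approach described above:
# fine-tune a standard image model on labeled real vs. AI-generated images.
# Paths, model choice, and hyperparameters are illustrative only.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Expects data/train/real/*.jpg and data/train/fake/*.jpg (hypothetical layout).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=preprocess)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained backbone and swap in a two-class head:
# "real" vs. "fake".
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # a few epochs only; this is a sketch, not a benchmark
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# At inference time, the softmax score on the "fake" class becomes the
# detection score for a new image.
```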

Others identify specific signs that images, videos, and audio have been manipulated. For example, Intel’s FakeCatcher analyzes patterns of blood flow to detect fake videos. DARPA’s Semantic Forensics (SemaFor) program develops logic-based frameworks to find anomalies, like mismatched earrings. Startups working on this include Attestiv, DeepMedia, Duck Duck Goose, Illuminarty, Reality Defender, and Resemble AI.

Intel’s FakeCatcher technology (source: 80.lv)

ID verification tools are a subset of detection tools built to authenticate personal documents and user profiles. They often combine image analysis with liveness detection (e.g., when you’re asked to take a selfie or make a specific facial movement). AuthenticID, Hyperverge, Idenfy, iProov, Jumio, and Sensity are some of the companies in this space.

Of course, detection-based approaches are inherently reactive, so they have to constantly keep up with evolving generative AI models. Still, many of these tools report accuracy above 80%, compared with only about 60% for humans.

Method #2: Certification tools, on the other hand, proactively embed provenance data into image and video files. Truepic allows enterprises to add, verify, and view C2PA content credentials, including at the point of capture on smartphone cameras. Similarly, CertifiedTrue allows users to capture, store, and certify photos for legal proceedings. The resulting record is then stored on a blockchain, which makes it permanent, public, and unalterable.
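
To make the mechanics concrete, here’s a hedged sketch of what “hash it, describe it, sign it, anchor it” can look like. This is purely illustrative: the actual C2PA standard defines its own manifest format, signing rules, and embedding, and the file name, creator field, and key handling below are assumptions for the example.

```python
# Illustrative sketch of content certification: hash a file, wrap the hash in a
# small provenance manifest, and sign it. Not the real C2PA implementation;
# the blockchain anchoring step is only indicated by a comment.
import hashlib
import json
from datetime import datetime, timezone

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def certify(path: str, creator: str, signing_key: Ed25519PrivateKey) -> dict:
    """Build and sign a provenance record for the file at `path` (illustrative)."""
    digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
    manifest = {
        "content_sha256": digest,
        "creator": creator,
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    signature = signing_key.sign(payload).hex()
    # In a production system, this record would be embedded in the file's
    # metadata and/or anchored on a public ledger so it can't be altered later.
    return {"manifest": manifest, "signature": signature}


# Hypothetical usage: certify a photo with a freshly generated signing key.
key = Ed25519PrivateKey.generate()
record = certify("photo.jpg", "alice@example.com", key)
print(json.dumps(record, indent=2))
```

Verification is the mirror image: recompute the file’s hash, check the signature with the creator’s public key, and compare against the anchored record, so any later edit to the file breaks the match.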

The upside is that we’re beginning to establish a standard for content authenticity; the downside is that these programs are opt-in. Authenticating all or even most of the content that exists and will be generated will be a major challenge, though some camera makers, like Canon, are working on embedding authentication at the point of capture.

However, with the proliferation of deepfakes, the paradigm is shifting from “real until proven fake” to “fake until proven real”. Authentication at the hardware level will likely become the only way to prove humanity, since publisher- or social media-level authentication only proves where content first appeared, not whether a human made it.

Method #3: Lastly, narrative tracking platforms examine how fraud and disinformation spread across the web, keeping corporations and governments informed of high-risk narratives. This is a bigger-picture approach: rather than analyzing individual files, it tracks the spread of misinformation online and evaluates content in context.

Players include startups like Blackbird.AI and Buster.AI, as well as public-private partnerships like the EU-funded project WeVerify. For example, large companies use Blackbird.AI’s Constellation Dashboard to track online narratives, which are assigned risk scores, so they can respond to emerging misinformation.

All of us using deepfake detection tools.

No single tool or strategy can completely protect against the impact of deepfakes, so individuals, enterprises, and governments will have to rely on a mix of solutions. That leaves plenty of room for new entrants in the deepfake detection and anti-AI security space.

Here are some key opportunities for builders and investors:

  1. Selling content moderation models to internet companies, the way Hive does to social media platforms, dating apps, and online marketplaces, may be a white space for deepfake detection companies. Detection startups already have models that they’re packaging for enterprise and federal security teams, and social media companies in particular face challenges with manipulated media. The default for content moderation today is still outsourcing to low-paid human moderators.
  2. More blockchain-based certification tools, especially those that let consumers authenticate their own content. Large media companies (like Fox and the C2PA consortium) and hardware companies are working on authentication standards. But there are 27 million paid content creators in the US alone and 10 million creative freelancers worldwide who have an interest in protecting their images and work. There are also private-citizen use cases, like countering revenge porn. Currently, other than basic watermarking, there’s no simple way for consumers to authenticate their own content and record it on a blockchain.
  3. There’s an edtech angle. As AI-generated content becomes more realistic, workforces, government employees, and private citizens will need to supplement digital tools with education on how to spot fraudulent content. There’s great free educational content online, but I expect deepfake detection curricula will need to be integrated into HR training and other educational tools.

There’s no magic formula for defending against deepfakes. But with deepfakes causing financial and reputational harm to people, organizations, and governments, deepfake detection is an area to watch.

If you’re building in this space, please reach out! I’m at elizabeth@charge.vc or @eliza_berna on Twitter.

This post was written by Elizabeth Peng, MBA Candidate & Venture Fellow at Columbia Business School, and edited by Brett Martin, investor at Charge.vc. Elizabeth is a native Californian, fitness/wellness junkie, and professional contrarian, having started her investing career at hedge fund Elliott Management, where she served as board observer to Coveo (an enterprise AI search company), Gigamon, and Wrike.
