What Is C2PA and Why Does It Matter in an AI-Driven World?

DuoKey · Aug 27, 2024 · 3 min read


In a recent Q&A at Stanford, Eric Schmidt, former CEO and Chairman of Google, highlighted AI’s massive potential to reshape our world. Yet while Schmidt praised the extraordinary new capabilities emerging from this technology, we have only scratched the surface in understanding the social impact of AI being used to generate deceptive, fake or fraudulent media content.

The ability to generate highly realistic fake content at scale, commonly known as deepfakes, is blurring the line between what’s authentic and what’s not, posing a significant threat to all aspects of society.

The recent $25 million scam, in which AI was used to mimic the voice and appearance of a company executive on a video call, illustrates the serious risks and repercussions of AI-generated content.

Fighting deepfakes with advanced encryption

This trend becomes even more troubling as AI tools like deep-live-cam, which lets anyone swap faces with celebrities in real time, become mainstream. The ease of access to these tools points to a coming flood of misinformation, disinformation and deepfakes. The World Economic Forum (WEF) named misinformation one of the most pressing global risks over the next five years in its Global Risks Report 2024.

The Need for Authenticating AI-Generated Content

Against this gloomy backdrop, there is an urgent need for ways to verify the authenticity of digital content and expose fraudulent AI-generated media, particularly images and videos.

A promising initiative in this area is C2PA — the Coalition for Content Provenance and Authenticity.

C2PA is a collaborative effort led by leading tech companies such as Adobe, Microsoft, Intel and others. Its goal is to develop a unified technical standard for certifying and authenticating digital content, with the aim of building and maintaining trust in the digital realm.

The key objective of C2PA is to combat misinformation by providing a transparent and secure system that allows users to verify the authenticity of digital media. By embedding metadata directly into the media content, C2PA enables viewers to trace its provenance, giving them confidence that what they are seeing is legitimate.
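
To make the idea concrete, here is a minimal sketch of the signed-manifest concept in Python. It is an illustration only: it uses a simplified JSON manifest and the general-purpose cryptography library, whereas the real C2PA specification defines its own manifest format, certificate requirements and rules for embedding the manifest in the file.

```python
# Illustrative sketch of C2PA-style provenance: hash the media, wrap the hash
# and basic creation details in a manifest, then sign the manifest so a viewer
# can later confirm nothing was altered. The JSON manifest below is a
# simplification for this example, not the actual C2PA manifest format.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def build_manifest(media_bytes: bytes, creator: str, tool: str) -> bytes:
    """Bundle a content hash with basic provenance claims."""
    manifest = {
        "claim_generator": tool,
        "creator": creator,
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    return json.dumps(manifest, sort_keys=True).encode()

# In practice the signing key would live in a KMS or hardware module;
# it is generated inline here only to keep the example self-contained.
signing_key = ec.generate_private_key(ec.SECP256R1())

media = b"example image bytes"  # stands in for the real file contents
manifest = build_manifest(media, creator="Example Studio", tool="demo-signer/0.1")
signature = signing_key.sign(manifest, ec.ECDSA(hashes.SHA256()))

# A viewer holding the matching public key can verify that the manifest,
# and therefore the media hash it records, has not been tampered with.
try:
    signing_key.public_key().verify(signature, manifest, ec.ECDSA(hashes.SHA256()))
    print("provenance manifest verified:", json.loads(manifest))
except InvalidSignature:
    print("manifest or media has been altered")
```

In the real standard, the signed manifest travels embedded in the file itself alongside certificate information, so a viewer can check provenance wherever the file ends up.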

Securing Media Authentication

As C2PA continues to grow, cloud security companies like DuoKey are stepping in to provide technical support for these standards.

DuoKey offers a secure and scalable Key Management System (KMS) to enhance the integrity and security of digital signatures, which is critical for ensuring that digital media and its metadata (including its provenance and history among other data) can be trusted.

AI content verification enabled by DuoKey’s advanced encryption key management

DuoKey allows organisations to easily manage and scale the cryptographic keys used for digital signatures and to simplify the complex process of verifying content integrity, making it feasible to deploy such standards on a global scale.

“With DuoKey, digital media can be signed at creation in a secure and robust way, making it much harder for bad actors to manipulate or forge content,” explains Nagib Aouini, CEO at DuoKey.
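
The sketch below, again in Python, shows the general pattern behind signing at creation with a key management service: the private key never leaves the KMS, the application submits only a digest of the media and receives a signature back. The KmsClient class is a hypothetical stand-in written for this example; DuoKey’s actual API is not described in this article.

```python
# Hedged sketch of KMS-delegated signing: the private key is created and held
# inside the (simulated) KMS, the application sends only a SHA-256 digest of
# the media, and any consumer with the public key can verify the signature.
# KmsClient is a hypothetical stand-in, not DuoKey's real interface.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec, utils

class KmsClient:
    """Simulated key management service; private keys never leave this object."""

    def __init__(self) -> None:
        self._keys = {}

    def create_key(self, key_id: str):
        """Generate a signing key and hand back only its public half."""
        self._keys[key_id] = ec.generate_private_key(ec.SECP256R1())
        return self._keys[key_id].public_key()

    def sign_digest(self, key_id: str, digest: bytes) -> bytes:
        """Sign a precomputed digest; the media itself never reaches the KMS."""
        return self._keys[key_id].sign(
            digest, ec.ECDSA(utils.Prehashed(hashes.SHA256()))
        )

# At creation time the application hashes the media locally and asks the KMS
# for a signature over that digest.
kms = KmsClient()
public_key = kms.create_key("media-signing-key")

media = b"example image bytes"  # stands in for the real file contents
digest = hashlib.sha256(media).digest()
signature = kms.sign_digest("media-signing-key", digest)

# Anyone holding the public key can later check that the content is intact.
try:
    public_key.verify(signature, digest, ec.ECDSA(utils.Prehashed(hashes.SHA256())))
    print("media signature is valid")
except InvalidSignature:
    print("media has been altered or the signature is forged")
```

The important property illustrated here is that only digests and signatures cross the application boundary, so compromising the application does not expose the signing key itself.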

Conclusion

As AI continues to advance, so do the risks associated with deepfakes and misinformation. The line between what’s real and what’s fake is becoming increasingly blurred, posing serious challenges for society. Initiatives like C2PA offer a promising solution by providing a framework to authenticate digital media, ensuring that users can trust what they see. Supported by technologies like DuoKey, this initiative brings us closer to a world where AI-generated content can be safely and reliably managed.

