Navigating the Deepfake Dilemma through Regulation, Awareness, and Ethical AI Practices

Published in ReadyAI.org · 4 min read · Mar 1, 2024

By: Rooz Aliabadi, Ph.D.

Recent developments in the fight against deepfakes have brought positive momentum. The United States Federal Trade Commission recently finalized regulations prohibiting deepfakes that impersonate individuals. Prominent AI startups and major technology firms have also announced voluntary commitments to combat deceptive uses of AI in the 2024 elections. And just last week, a coalition of civil society organizations, including the Future of Life Institute, SAG-AFTRA, and Encode Justice, launched a new initiative advocating a ban on deepfakes.

I genuinely believe these efforts mark an excellent beginning and raise public awareness, yet the details will prove crucial. Laws in the UK and some US states already ban the production and distribution of deepfakes. The Federal Trade Commission aims to make it illegal for AI platforms to generate content that impersonates individuals, and to gain the authority to compel fraudsters to return the money earned from such scams.

However, a significant barrier remains: outright bans may not be technically practical. It is more complex than flipping an on-off switch.

Big Tech often faces criticism for the damage caused by deepfakes. Yet it is worth acknowledging that some of these companies do use their content moderation tools to identify and prevent the creation of, for instance, deepfake pornography. (This is not to suggest they are flawless: the deepfake pornography involving Taylor Swift was allegedly produced with a Microsoft system.)

A more critical issue is that many damaging deepfakes are produced with open-source tools or by state entities and are spread through encrypted messaging services like Telegram, making them hard to trace.

Regulation should address every participant in the deepfake production chain. This could mean holding entities of all sizes responsible not only for creating deepfakes but also for disseminating them. Platforms known as "model marketplaces," such as Hugging Face and GitHub, may therefore need to be part of regulatory discussions aimed at curbing the proliferation of deepfakes.

This matters because model marketplaces make open-source models like Stable Diffusion available to anyone, enabling individuals to build their own deepfake applications. These platforms are responding proactively. Hugging Face and GitHub have introduced measures that add friction to accessing tools and producing harmful content. Hugging Face is also an outspoken supporter of OpenRAIL licenses, which require users to agree to specific usage restrictions. The company also lets users integrate provenance data that meets strict technical specifications into their workflows.
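To make this concrete, here is a minimal sketch, assuming the `huggingface_hub` Python client, of how a downstream application might check a model's declared license tag on the Hub before loading it. The repo id and the allow-list are illustrative assumptions, not ReadyAI's or Hugging Face's recommended practice.

```python
# A minimal sketch, assuming the huggingface_hub client; the repo id and
# allow-list below are illustrative, not an official policy.
from huggingface_hub import HfApi

ALLOWED_LICENSES = {"creativeml-openrail-m", "openrail"}  # hypothetical policy

def declared_license(repo_id: str) -> str | None:
    """Return the model's self-declared license tag from the Hub, if any."""
    info = HfApi().model_info(repo_id)
    for tag in info.tags or []:
        if tag.startswith("license:"):
            return tag.removeprefix("license:")
    return None

repo = "stabilityai/stable-diffusion-2-1"  # illustrative repo id
lic = declared_license(repo)
if lic in ALLOWED_LICENSES:
    print(f"{repo}: license '{lic}' is on the allow-list")
else:
    print(f"{repo}: license '{lic}' needs manual review before use")
```

Note that license tags are self-declared metadata: a check like this supports responsible-use policies, but it cannot by itself stop a determined bad actor, which is exactly the enforcement gap discussed below.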

Complementary approaches, such as improved watermarking and content provenance techniques that help identify AI-generated material, should also be considered. However, these detection mechanisms are far from complete solutions.

Requiring that all AI-generated content be watermarked is unenforceable, and watermarks may even have the unintended effect of facilitating misuse rather than restricting it. In open-source systems in particular, malicious parties can strip away watermarking and provenance mechanisms: because the model's source code is openly accessible, users can simply remove whatever safeguards they prefer not to have.
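To see why, consider a toy least-significant-bit watermark, a deliberately simplified stand-in for real schemes; the image and payload below are synthetic. Embedding the mark takes a few lines, and anyone who can read those lines can erase it just as easily.

```python
# A toy least-significant-bit watermark: a simplified stand-in for
# real schemes, not any production watermarking system.
import numpy as np

def embed_mark(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide a bit pattern in the least significant bit of each pixel."""
    return (pixels & ~np.uint8(1)) | bits.astype(np.uint8)

def strip_mark(pixels: np.ndarray) -> np.ndarray:
    """Erase the mark by zeroing every least significant bit."""
    return pixels & ~np.uint8(1)

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image
mark = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)     # stand-in payload

marked = embed_mark(image, mark)
assert np.array_equal(marked & 1, mark)        # the mark is recoverable...
stripped = strip_mark(marked)
assert not np.array_equal(stripped & 1, mark)  # ...until one line removes it
```

Production watermarks are far more robust than this toy, but the structural issue is the same: when the embedding code ships with an open-source model, the knowledge needed to remove it effectively ships too.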

If only the largest corporations or the most widely used proprietary platforms watermark their AI-generated content, the absence of a watermark could inadvertently signal that a piece of content is not AI-generated.

In that case, watermarking the content we can control could inadvertently lend credibility to the most damaging material coming from systems beyond our control.

Deepfakes are yet another manifestation of our broader struggles with information and misinformation on social media. They might catalyze the regulation of these platforms and encourage a movement toward genuine public awareness and transparency.

In light of the complexities and challenges outlined above, it is imperative that we also address the broader implications of deepfake technology and its impact on our society. This conversation perfectly aligns with Big Idea #5 of the 5 Big Ideas in Artificial Intelligence as outlined by AI4K12.org, which focuses on the societal impact of AI. The emergence of deepfakes is a stark reminder of the dual-use nature of AI technologies — they can be utilized for both beneficial purposes and malicious intents. The advancements in AI that allow for the creation of highly realistic deepfakes pose significant ethical and societal challenges, including threats to privacy, security, and the integrity of information. As such, our response to the proliferation of deepfakes cannot be limited to prohibitions or technical fixes alone.

We must engage in a multifaceted discussion encompassing the legal and regulatory frameworks and the ethical considerations of AI technology’s impact on human life. This includes fostering public awareness and understanding of the issues, promoting transparency in developing and deploying AI technologies, and encouraging responsible AI practices across all sectors. By integrating these considerations into our approach, we aim to mitigate the negative impacts of deepfakes while harnessing the positive potential of AI for the betterment of society. In doing so, we reaffirm our commitment to navigating the complexities of the digital age with a balanced and informed perspective, ensuring that our technological advancements enhance, rather than undermine, the fabric of our society.

This article was written by Rooz Aliabadi, Ph.D. (rooz@readyai.org). Rooz is the CEO (Chief Troublemaker) at ReadyAI.org.

To learn more about ReadyAI, visit www.readyai.org or email us at info@readyai.org.

ReadyAI.org

ReadyAI is the first comprehensive K-12 AI education company to create a complete program to teach AI and empower students to use AI to change the world.