Teaching Digital Ethics in the Age of AI

Published in ReadyAI.org · Jan 30, 2024

Addressing Deepfake Dangers with Lessons from the Taylor Swift Incident

Taylor Swift and deepfakes

By: Rooz Aliabadi, Ph.D.

In an era where technology permeates every aspect of our lives, the rise of artificial intelligence (AI) brings profound societal implications, particularly for digital ethics and safety. As educators, we must address these issues head-on, especially in high school education. The fifth of the ‘5 Big Ideas in AI’ focuses on the Societal Impact of AI, and recent events have made it more relevant than ever. A striking example is the widespread circulation of sexually explicit deepfake images of Taylor Swift. The incident highlights a disturbing trend of using AI to create nonconsensual content, underscores the urgent need for awareness and education about the potential misuse of AI, and serves as a wake-up call for the imperative role education must play in preparing young minds to navigate and shape a world where technology and ethics intersect.

5 Big Ideas in AI

Recently, sexually explicit deepfake images of Taylor Swift, a prominent figure in the global pop music scene, circulated widely. These unauthorized and nonconsensual materials gained traction online, attracting millions of views on the social media platform X, previously known as Twitter. In response, X implemented stringent countermeasures, including blocking all searches related to Taylor Swift.

This is not a new phenomenon: deepfakes have existed for years. Nonetheless, the emergence of generative AI has made it significantly easier to produce deepfake pornography and to use AI-generated images and videos for sexual harassment.

Among the various harms associated with generative AI, nonconsensual deepfakes already affect a substantial number of people, and women make up the overwhelming majority of targets.

Fortunately, there is reason for optimism. Emerging tools and legislation may make it harder for perpetrators to exploit individuals’ images and help hold offenders accountable.

Below are some approaches we can employ to address nonconsensual deepfake pornography.

Watermarks

Social media platforms currently employ algorithms to scan for and remove content that violates their policies. However, this process is inconsistent and overlooks a significant amount of harmful material, as the Swift videos on X demonstrated. Distinguishing authentic content from AI-generated content remains a considerable challenge.

A potential technical solution involves watermarks. These embed an invisible signal within images, helping computers identify whether they are AI-generated. For instance, Google has introduced a system called SynthID, which uses neural networks to adjust pixels in images, incorporating a watermark that computers can detect but the human eye cannot. The mark is engineered to remain detectable even if the image is edited or captured via screenshot. In theory, such tools could help companies strengthen their content moderation, enabling faster detection of fabricated content, including nonconsensual deepfakes.

Identifying AI-generated images with SynthID
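SynthID’s actual mechanism is proprietary and neural-network-based, but the core idea, a signal machines can read and humans cannot see, can be illustrated with a deliberately simple sketch. The least-significant-bit scheme below is a hypothetical stand-in, far more fragile than SynthID (it would not survive edits or screenshots), yet it shows how a mark can be embedded without visibly changing an image.

```python
# A minimal sketch of invisible watermarking, assuming a toy least-significant-bit
# (LSB) scheme rather than SynthID's proprietary neural approach. SIGNATURE and
# both functions are hypothetical illustrations, not any real API.
import numpy as np

SIGNATURE = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # an 8-bit mark

def embed_watermark(pixels: np.ndarray) -> np.ndarray:
    """Hide SIGNATURE in the least significant bits of the first 8 pixel values."""
    marked = pixels.copy()
    flat = marked.reshape(-1)
    flat[:8] = (flat[:8] & 0xFE) | SIGNATURE  # clear each LSB, then write the mark
    return marked

def has_watermark(pixels: np.ndarray) -> bool:
    """Report whether the LSBs of the first 8 pixel values match SIGNATURE."""
    flat = pixels.reshape(-1)
    return bool(np.array_equal(flat[:8] & 1, SIGNATURE))

# Demo on a random grayscale "image": the mark is machine-readable,
# yet no pixel changes by more than one intensity level out of 256.
image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed_watermark(image)
print(has_watermark(marked))                                 # True
print(np.abs(marked.astype(int) - image.astype(int)).max())  # 0 or 1
```

Production systems such as SynthID spread the signal across the entire image and train detectors to recover it after cropping, compression, or re-encoding, which is precisely what makes them useful for content moderation at scale.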

Watermarks are a valuable tool: they facilitate the swift identification of AI-generated content and help flag harmful posts for removal. Embedding watermarks by default in all images would make it harder for perpetrators to create nonconsensual deepfakes.

These systems are still experimental and have yet to be widely adopted, and determined attackers can still tamper with them. Nor do companies apply the technology universally across all images: users of Google’s Imagen AI image generator, for instance, can decide whether their AI-generated photos include the watermark. These factors collectively restrict watermarks’ effectiveness in combating deepfake pornography.

Protective Shields

At the moment, every image we post online is fair game for anyone to use in creating a deepfake. And because the latest image-making AI systems are so sophisticated, it is growing harder to prove that a piece of content is AI-generated.

However, various new defensive tools enable individuals to safeguard their images from AI-driven exploitation by making those images appear distorted or warped to AI systems.

An example of such a tool is PhotoGuard, created by researchers at MIT. It functions as a protective shield by subtly altering an image’s pixels in ways that are imperceptible to the human eye. When an AI application such as the image generator Stable Diffusion is used to manipulate an image treated with PhotoGuard, the outcome appears unrealistic. A comparable tool is Fawkes, developed by researchers at the University of Chicago, which cloaks images in concealed signals that make it harder for facial recognition software to identify faces.

MIT’s “PhotoGuard” Breakthrough Safeguards Images from AI Manipulation
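Neither tool reduces to a few lines of code, so the sketch below is only a conceptual illustration of the kind of optimization such shields perform: search for a tiny, budget-bounded pixel change that pushes an image’s machine-readable features away from the original. The toy_encoder is a hypothetical stand-in; PhotoGuard actually targets the encoder of a real generator such as Stable Diffusion.

```python
# A conceptual sketch of a PhotoGuard-style protective perturbation, assuming a
# toy stand-in encoder. Projected gradient ascent finds an invisible pixel change
# that shifts the image's embedding away from its original value.
import torch

torch.manual_seed(0)
toy_encoder = torch.nn.Conv2d(3, 8, kernel_size=3)    # hypothetical feature extractor

image = torch.rand(1, 3, 64, 64)                      # the photo to protect, in [0, 1]
original = toy_encoder(image).detach()                # its unprotected embedding
delta = torch.zeros_like(image, requires_grad=True)   # the perturbation we optimize
eps, step = 4 / 255, 1 / 255                          # invisibility budget, step size

for _ in range(50):
    # Distance between the perturbed embedding and the original one
    loss = torch.nn.functional.mse_loss(toy_encoder(image + delta), original)
    loss.backward()
    with torch.no_grad():
        delta += step * delta.grad.sign()                 # ascend: push embedding away
        delta.clamp_(-eps, eps)                           # stay within invisible budget
        delta.copy_((image + delta).clamp(0, 1) - image)  # keep pixel values valid
        delta.grad.zero_()

protected = (image + delta).detach()  # looks identical to a person, embeds differently
```

The eps budget is what keeps the shield invisible: no pixel moves by more than a few intensity levels out of 255, yet the features that downstream editing tools rely on shift enough to derail them.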

Another innovative tool, Nightshade, offers individuals a means to push back against their inclusion in AI systems. Developed by researchers at the University of Chicago, Nightshade applies an invisible layer of “poison” to images. It was initially designed to protect artists from having their copyrighted images scraped without authorization by tech companies, but in principle it can be applied to any image its owner wishes to shield from AI scraping. When tech companies scrape these “poisoned” images as training material, the images disrupt the resulting AI model: images of cats might transform into dogs, and even images of Taylor Swift could undergo such a transformation.
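As a loose illustration of that “cats into dogs” effect, and again only a sketch with a hypothetical toy_encoder, the same projected-gradient recipe can be run with the objective flipped: instead of pushing an image’s features away from their original values, it pulls a cat photo’s features toward a dog photo’s. Nightshade’s published attack is considerably more sophisticated, but the intuition carries over: a model trained on many images whose pixels say “cat” while their features say “dog” begins to confuse the two concepts.

```python
# A toy sketch of Nightshade-style poisoning under the same assumptions as above:
# nudge a cat photo, within an invisible pixel budget, until its features match a
# dog photo's. The encoder and both "photos" are hypothetical placeholders.
import torch

torch.manual_seed(0)
toy_encoder = torch.nn.Conv2d(3, 8, kernel_size=3)

cat = torch.rand(1, 3, 64, 64)                                 # image to poison
dog_features = toy_encoder(torch.rand(1, 3, 64, 64)).detach()  # target concept
delta = torch.zeros_like(cat, requires_grad=True)
eps, step = 4 / 255, 1 / 255

for _ in range(50):
    # Distance to the *target* concept; minimizing it pulls the features over
    loss = torch.nn.functional.mse_loss(toy_encoder(cat + delta), dog_features)
    loss.backward()
    with torch.no_grad():
        delta -= step * delta.grad.sign()             # descend toward "dog" features
        delta.clamp_(-eps, eps)
        delta.copy_((cat + delta).clamp(0, 1) - cat)
        delta.grad.zero_()

poisoned = (cat + delta).detach()  # still looks like a cat; nudges models toward "dog"
```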

These tools increase the difficulty for attackers attempting to exploit our images for harmful purposes. They demonstrate the potential to protect private individuals against AI image misuse, particularly if dating apps and social media companies integrate them as default features.

Everyone should utilize Nightshade for every image uploaded to the internet.

These protective shields are effective against the most recent AI models. However, there is no assurance that future iterations won’t be capable of circumventing them. They are also ineffective for images already circulating online, and applying them to celebrity photos is challenging, since famous individuals have little control over which images of them are uploaded. This sets the stage for an ongoing cat-and-mouse game.

Regulation

While technical solutions have their limits, enduring change necessitates stringent regulation.

Taylor Swift’s widely circulated deepfakes have revitalized initiatives to curb deepfake pornography. The White House characterized the incident as “alarming” and called on Congress to enact legislative measures. The United States has so far taken a fragmented, state-by-state approach to regulating the technology: California and Virginia, for instance, have outlawed the creation of nonconsensual pornographic deepfakes, while New York and Virginia prohibit the distribution of such content.

Nonetheless, there is potential for federal intervention. A newly reintroduced bipartisan bill in the US Congress aims to criminalize the sharing of fabricated nude images at the federal level, and a deepfake pornography incident at a New Jersey high school has spurred lawmakers to introduce the Preventing Deepfakes of Intimate Images Act. The heightened awareness resulting from Swift’s case could garner further bipartisan backing.

Lawmakers worldwide are advocating stricter regulation of this technology. The UK’s Online Safety Act, enacted last year, prohibits the dissemination of deepfake pornography, though not its creation. Offenders can be sentenced to up to six months of imprisonment.

In the European Union, several new laws address the issue from different angles. The comprehensive AI Act mandates that creators of deepfakes disclose that the content was generated by AI, while the Digital Services Act compels tech companies to remove harmful content more quickly.

China’s deepfake legislation, implemented in 2023, is the most extensive. Deepfake creators in China must take measures to prevent their services from being misused for unlawful or harmful purposes, obtain consent before generating deepfakes from a person’s image, authenticate individuals’ identities, and clearly label AI-generated content.

Regulation of this kind gives victims avenues for recourse, holds creators of nonconsensual deepfake pornography accountable, and acts as a potent deterrent. It communicates unequivocally that producing nonconsensual deepfakes is unacceptable. Laws and public awareness campaigns that treat people who create such deepfake porn as sex offenders could have a tangible impact, challenging the casual attitude of those who dismiss this content as harmless or as not constituting genuine sexual abuse.

Enforcing these laws presents significant challenges. With existing methods, victims may find it arduous to identify their assailants and construct a case against them. Moreover, the creators of deepfakes might operate from different jurisdictions, further complicating prosecution efforts.

The case of Taylor Swift’s deepfakes and similar incidents worldwide underscore the critical need for a comprehensive approach to the societal impacts of AI, one that combines education, technological solutions, and regulatory measures. As educators, we have a pivotal role in shaping how high school students understand and ethically use AI. By integrating discussions of AI’s societal implications into our curricula, we can empower the next generation to use AI responsibly and to advocate for and contribute to the development of fair and ethical AI systems. Educating young minds about the potential misuse of AI, such as nonconsensual deepfake pornography, and about the importance of digital ethics is a significant step toward a safer and more conscientious digital world. This approach addresses the immediate concerns and lays the groundwork for a future where technology serves humanity in a manner that is respectful, ethical, and aligned with our collective values.

This article was written by Rooz Aliabadi, Ph.D. (rooz@readyai.org). Rooz is the CEO (Chief Troublemaker) at ReadyAI.org.

To learn more about ReadyAI, visit www.readyai.org or email us at info@readyai.org.


ReadyAI is the first comprehensive K-12 AI education company to create a complete program to teach AI and empower students to use AI to change the world.