Intersectionality of AI-Powered Deepfakes: Lessons from Bollywood

Published in Fidutam · 7 min read · Jan 23, 2024

Authored by: Maya Sherman, Editorial Writer, Fidutam
Edited by: Leher Gulati, Editorial Director, Fidutam


The intersection of Bollywood and AI technologies, particularly deepfake videos, raises an ethical controversy over the twofold role of digital innovation in the cinematic industry and the accumulated harm to vulnerable groups in society. Recent developments in generative AI have accelerated the spread of deepfake videos across industries. While the literature tends to focus on the impact of deepfakes on critical infrastructure, particularly electoral processes, this article discusses the role of AI in Bollywood movies and its varied uses across geographies.

50 Shades of Bollywood

Online evidence suggests the increasing spread of deepfake videos targeting Bollywood actresses. Priyanka Chopra Jonas was recently associated with a manipulated video of her likeness, including fake audio discussing yearly earnings and brand promotions. Another deepfake portrayed Anushka Sharma and Aishwarya Rai Bachchan discussing investment opportunities. Other evidence has included controversial content and fake visuals of the actress Kajol (which allegedly showed her changing clothes), a fake nude picture associated with Kriti Sanon, and another image of Katrina Kaif, falsely recreating her famous towel fight scene from the movie Tiger 3.

Amongst these incidents, the deepfake video of the actress Rashmika Mandanna, dressed in a black yoga bodysuit and smiling at the camera, has been heavily criticised by the actress herself. As she commented at a press conference in Hyderabad:

“Deepfakes have been around for a while and we’ve normalised them, but it isn’t ok […] Today, as a woman and as an actor, I am thankful for my family, friends and well-wishers who are my protection and support system. But if this happened to me when I was in school or college, I genuinely can’t imagine how I could ever tackle this. We need to address this as a community and with urgency before more of us are affected by such identity theft”

Crucially, the rise in deepfake videos targeting Bollywood actresses has drawn public scrutiny and concern, prompting further governmental discussions and declarations on anti-disinformation policies and regulations focused on mitigating deepfakes on social media platforms. Consequently, the Indian government required these platforms to remove fake content and deepfake videos within 36 hours of a complaint being submitted.

Nonetheless, these guidelines may serve not as a preventive measure but as a post-upload mechanism to limit further circulation of the content. It is essential to foster adequate preventive measures, such as supporting real-time fact-checking of visuals or faster user evaluation of online falsehoods.

Intersectionality of Deepfakes and Gender

While online falsehoods and disinformation are not novel to regulators, the accelerated pace of generative AI and video generation has reinvigorated the policy debate on deepfake mitigation. The term deepfake usually refers to digitally manipulated media aimed at mimicking an individual in order to misrepresent or impersonate their actions. The technique is powered by AI models trained on prior recordings of the target, making the mimicked voice and visuals increasingly accurate.

In the context of the Bollywood industry, one may suggest an association between the deepfake videos and the victims' gender, as most incidents involved fake visuals and audio of famous actresses, some featuring controversial nude content. The recent incidents have ignited public discourse on the appropriateness of AI-driven tools in cultural production and the policy measures needed to moderate their impact in contextual settings.

Digital Identity Protection or Diverse Visual Performance?

Interestingly, the Bollywood narrative among male actors has received a divided response: some actors have decided to fight for ownership of their digital identity, while others, particularly the older generation, have embraced AI-driven computer-generated imagery tools in movies.

A recent verdict of the Delhi High Court granted the actor Anil Kapoor the right to prohibit several entities from using his name and likeness through AI technologies for any commercial purpose. Some expect additional actors to follow Kapoor's lead and trademark their identity to deter online manipulation or exploitation, particularly actresses, who are more prone to non-consensual deepfakes. The case demonstrates possible ownership over biodata in its digital form and the inability of third parties to access it without consent. The question of ownership of digital biodata exceeds the scope of this article, but it raises core issues about one's rights in public digital mediums.

Notwithstanding, AI methods are revolutionising de-ageing technologies, enabling actors to continue starring in Bollywood movies at older ages. Online evidence suggests such technologies are used mainly by male actors, again reflecting the intersectional nature of tech tools in cinema. While this can benefit mature actors by opening up a greater range of roles and ages, it has also raised the prospect that Bollywood directors might use digital mimics of costly Bollywood actors in movies. This raises questions about the future of work in the cinematic industry and actors' rights to their own visuals and audio.

Recent online critics have alleged the use of deepfakes in the movie The Archies, positing that the debut performance of Suhana Khan was fabricated and engineered from the actual performances of her father, Shah Rukh Khan. Despite the director's insistence that Suhana's performance is authentic, the incident shows how readily such allegations now arise, driven by the emergence of deepfake videos.

Given the above, should regulations be uniform across the industry to prevent the use of non-consensual deepfake videos? Seemingly, there is no concrete online evidence of actresses in Bollywood using these technologies to modify their own visuals, leaving women disproportionately exposed as deepfake victims rather than users.

From an international perspective, one may mention the positive outputs of deepfake videos. For instance, a deepfake video swapped the faces of male soccer players with their female counterparts to undermine gender stereotypes. Therefore, the ethical scrutiny of deepfakes in cinema should also consider their potential for educating audiences on positive human scenarios that remain out of reach in male-dominated fields such as sports and cinematic roles.

Back to Reality

The creation of deepfakes, and its impact on vulnerable groups, exceeds the cinematic industry. The literature has discussed the alarming yet frequent intersection between deepfake videos, sextortion, and pornography, which mostly targets female users online. Despite the increased scholarly attention to the impact of deepfakes on electoral processes, evidence suggests that a significant portion of the deepfake videos tracked online consist of non-consensual pornography, mostly targeting women.

This has severe implications for women's safety and welfare in the digital space, as highlighted above by the actress Mandanna. Some scholars associate the rise in inappropriate deepfake videos with the COVID-19 pandemic, owing to increased online activity during periods of physical quarantine.

The risks of such fabricated content can be alarming in conservative settings. Recent evidence suggests that an 18-year-old woman was killed over the release of fake footage: the young woman was killed by her relatives in Pakistan's remote Kohistan province over a fake image of her with a man. While the violent implications of AI have been explored in the past, especially in the context of improper linguistic translation on social media platforms, the accelerated circulation of deepfakes at the speed of generative AI raises notable threats to vulnerable communities.

In another incident in South Asia, a transgender member of Pakistan's Karachi Municipal Council described the risks deepfakes pose to her career after she became a victim of deepfake content circulated online. The broad threats of automated fake-video generation can therefore lead to the exclusion of women from public spaces, both physical and digital, and to constant attacks on their wellbeing using these technologies.

Deepfake for (Safer) Good?

The above highlights a broader controversy over how social media platforms already use one's digital biodata with consent. Other commercial uses, such as YouTube's video recommendations or tailored ads across Meta's platforms, are generally not perceived as controversial.

The creation of deepfake videos raises philosophical questions that undermine human identity. How can we define human senses in a world where we can no longer trust our eyesight and hearing? In many ways, AI requires us to use different senses to observe human reality and to become more critical of the data we consume to make decisions or gather insights. Critically, the dual outputs of deepfakes require rigorous ethical oversight, questioning whether consent-driven deepfakes should be made available to the public when malign actors are likely to abuse the capability for nefarious purposes.

Unlike content generated from other sources, one's biodata should be guarded, and a stricter regulatory approach might therefore be preferred. These thresholds are crucial mainly for vulnerable and indigenous communities, but they should be understood as collective efforts ensuring AI methods are deployed safely across livelihoods.

This article argues that the data doctrine of cryptocurrencies, and its monumental contribution to the technological community, could serve as a guiding model for encrypted and distributed data ecosystems wherein one has full ownership of their digital biodata and footprints, except for critical purposes such as security and medical needs.
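To make the proposed model concrete, the following is a minimal, hypothetical sketch in Python of consent-gated ownership of digital biodata: the owner stores only keyed commitments (never raw data) and explicitly grants or revokes verification rights to third parties. The class and method names are illustrative assumptions, not a real system or standard.

```python
# Hypothetical sketch of owner-controlled biodata commitments.
# All names (BiodataVault, record ids, grantees) are illustrative assumptions.
import hashlib
import hmac
import secrets


class BiodataVault:
    """Stores only salted commitments of biodata; the owner grants or revokes access."""

    def __init__(self, owner: str):
        self.owner = owner
        self._key = secrets.token_bytes(32)          # owner-held secret key
        self._records: dict[str, bytes] = {}         # record id -> keyed commitment
        self._grants: set[tuple[str, str]] = set()   # (record id, grantee)

    def commit(self, record_id: str, biodata: bytes) -> None:
        # Store a keyed HMAC commitment, never the raw biodata itself.
        self._records[record_id] = hmac.new(self._key, biodata, hashlib.sha256).digest()

    def grant(self, record_id: str, grantee: str) -> None:
        self._grants.add((record_id, grantee))

    def revoke(self, record_id: str, grantee: str) -> None:
        self._grants.discard((record_id, grantee))

    def verify(self, record_id: str, grantee: str, biodata: bytes) -> bool:
        # A grantee can only check data the owner explicitly shared with them.
        if (record_id, grantee) not in self._grants:
            return False
        expected = self._records.get(record_id)
        candidate = hmac.new(self._key, biodata, hashlib.sha256).digest()
        return expected is not None and hmac.compare_digest(expected, candidate)


vault = BiodataVault("actor")
vault.commit("voiceprint-v1", b"raw voice embedding bytes")
vault.grant("voiceprint-v1", "studio")
assert vault.verify("voiceprint-v1", "studio", b"raw voice embedding bytes")
vault.revoke("voiceprint-v1", "studio")
assert not vault.verify("voiceprint-v1", "studio", b"raw voice embedding bytes")
```

In this sketch, revocation is immediate because verification runs against the owner's ledger; a production system would need distributed storage and the security- and medical-need exceptions the article mentions, which are deliberately out of scope here.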

This accentuates the need to redefine those exceptional settings in which anonymised user data is transferred directly to authorised entities without consent, such as biometric datasets or other national-security scenarios. Nonetheless, these critical settings cannot be the primary lens for one's data sovereignty, given the high digital adoption and saturation of emerging technologies across human livelihoods. One's digital identity should be acknowledged under rights and obligations as protective as those of their physical identity.

The continuing tension between the individualistic data approach reflected by the Delhi High Court and the collective one represented by digital public infrastructure, which serves the masses in accessing government services, calls for a distributed, encrypted, and open-source approach to data. The scholarly community should commit to redefining data sovereignty at the individual and collective levels, guaranteeing one's autonomy, safety, and accessibility to and within data-driven interfaces.

Follow Fidutam for more on responsible technology.
