Chapter V — Strategies to prevent NCII spread and sexting abuse

Zubair Ashraf
Published in Digital Narratives · Feb 2, 2024

Major platforms, including Facebook, Google and Twitter, use copy detection systems, such as Microsoft's image fingerprinting technology PhotoDNA, to scan for and take down copies of extremist, violent, child abuse and non-consensual intimate content (Franks, 2017, pp. 23–24; Douze et al., 2021, p. 2). However, this technology may be used to take down non-consensual intimate images and videos, abbreviated as NCII, only after the content is flagged or reported as non-consensual. This is because, unlike child abuse and violent content, NCII is difficult to identify automatically: an image or video containing nudity or sexually explicit content is not necessarily harmful or illegal, and that determination requires an investigation into the context (Franks, 2017, p. 24). Flagging is therefore essential for the investigation to begin. To respond effectively, however, platforms must be able to sort through massive amounts of content and make rapid decisions about its context (Ibid.).
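As a rough illustration of how this kind of copy detection works, the sketch below matches uploads against perceptual hashes of already-reported content. PhotoDNA itself is proprietary, so the open-source `imagehash` library (pHash) stands in for it here; the blocklist, function names and distance threshold are assumptions made for this example, not any platform's actual system.

```python
import imagehash
from PIL import Image

# Perceptual hashes of content already confirmed as NCII (hypothetical blocklist).
ncii_blocklist: set[imagehash.ImageHash] = set()

def register_reported_image(path: str) -> None:
    """Add the perceptual hash of a reported image to the blocklist."""
    ncii_blocklist.add(imagehash.phash(Image.open(path)))

def is_known_copy(path: str, max_distance: int = 8) -> bool:
    """Treat an upload as a copy when its pHash is within a small Hamming
    distance of any blocklisted hash; this also catches re-encoded or
    slightly edited duplicates, not just byte-identical files."""
    upload_hash = imagehash.phash(Image.open(path))
    return any(upload_hash - known <= max_distance for known in ncii_blocklist)
```

The key property is that near-duplicates hash to nearby values, so a takedown of one reported file can propagate to its copies; by itself, though, this says nothing about whether the original was consensual.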

This points towards the scalability of the platforms' removal policies and operations. Platforms must have a sufficient number of socially, culturally, politically, geographically and gender-diverse human moderators dedicated to NCII, ensuring swift responses to complaints and rapid decisions on leaked content before it goes viral. Having such a human-centric mechanism in place could also help redress the issues that have been observed in Pakistan.

According to the Digital Rights Foundation Pakistan, referred to as DRF, seeking legal help is not an ideal solution for many, especially women and girls who may want to keep their situation private from their families, anticipating increased restrictions on their autonomy because of victim blaming (Basit, 2022, p. 14). Around 1,000 women are killed each year in the country in the name of patriarchal honour (Anees, 2022), and some women and girls have died by suicide over their NCII (Dad and Khan, 2017; Dawn, 2010).

Franks (2017, p. 26) proposed a consent regime to stop NCII: once an image or video is flagged as non-consensual, or the platform detects an explicit image, it could trigger a mechanism, such as a pop-up message, asking the user to verify that they are authorized to share the content. People willing to share their own sexually explicit content would not be burdened by it, but the barrier would act as a deterrent, a kind of psychological speed bump, for people sharing NCII (Ibid.). Of course, people could lie, or there could be cases of mistaken identity; in such cases, the platform could remove or hide the content until its consensual or non-consensual nature is confirmed.
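A minimal sketch of what such a consent gate might look like is shown below. Franks describes the mechanism only at the level of a pop-up verification, so the explicit-content classifier (passed in as `looks_explicit`) and all function and field names are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class UploadDecision:
    allowed: bool
    reason: str

def gate_upload(image_bytes: bytes,
                user_confirmed_consent: bool,
                looks_explicit: Callable[[bytes], bool]) -> UploadDecision:
    """Hold sexually explicit uploads behind a consent confirmation."""
    if not looks_explicit(image_bytes):
        # Non-explicit content passes through without friction.
        return UploadDecision(True, "not sexually explicit")
    if not user_confirmed_consent:
        # The pop-up "speed bump": content is held until the user
        # verifies they are authorized to share it.
        return UploadDecision(False, "awaiting consent verification")
    # Consent affirmed; the content can still be hidden later if it
    # is flagged as non-consensual and placed under review.
    return UploadDecision(True, "consent affirmed by uploader")
```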

One objection to this proposal is that such a mechanism would eventually create a backlog of contentious content, obstructing the expression of sexuality on platforms. To prevent that situation, the onus falls back on users to express themselves responsibly.

With a focus on preventing sexting abuse on platforms like WhatsApp, Franco, Gaggi and Palazzi (2023, p. 2) developed SafeSext, a proof of concept for a messaging system built around a forwarding control algorithm. Their system recognizes sexual images through the Google Cloud Vision API and applies a perceptual hash function to associate an owner with each image (Ibid.), much like the image fingerprinting technology described above. After hashing a photo, the system applies a forwarding policy that can block an attempt to forward it to someone else while alerting the owner about the attempt (Ibid.). This can provide some level of protection to people who sext, pointing towards the co-responsibility of social platforms for possible sexting-related abuse harming their users (Ibid., p. 3). It assumes, however, that sexting abuse mainly occurs through unauthorized forwarding of personal content (Ibid., p. 6).
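The sketch below imitates this flow under the paper's stated assumptions: a detector callable stands in for the Google Cloud Vision API, and a perceptual hash (again via the open `imagehash` library) links each image to its owner. All names are illustrative, not SafeSext's actual code.

```python
import imagehash
from PIL import Image
from typing import Callable

owners: dict[str, str] = {}  # perceptual hash -> user id of the image owner

def register_image(path: str, owner_id: str) -> None:
    """Bind an image to its owner via its perceptual hash."""
    owners[str(imagehash.phash(Image.open(path)))] = owner_id

def may_forward(path: str, sender_id: str,
                is_sexual: Callable[[str], bool],
                alert_owner: Callable[[str], None]) -> bool:
    """Apply the forwarding policy to one forwarding attempt."""
    if not is_sexual(path):
        return True  # non-sexual content is forwarded freely
    owner = owners.get(str(imagehash.phash(Image.open(path))))
    if owner is None or owner == sender_id:
        return True  # owners may share their own images
    alert_owner(owner)  # notify the owner about the blocked attempt
    return False  # block forwarding by anyone who is not the owner
```

A blocked attempt thus does two things at once: it stops the forward and gives the owner the early warning discussed below.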

A recipient could also save the content on their device and send it later through the same platform (Ibid.), or upload it to other platforms. A forwarding control algorithm combined with removing the ability to save content on the device could therefore make platforms safer by design (Ibid.). In the case of NCII, it is crucial that victims become aware immediately when their content is leaked to other people, so they can act to stop the spread before it is too late (Ibid., p. 9).
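Pulling these measures together, a message carrying sensitive content could default to a restrictive policy like the hypothetical one sketched below; the field and function names are assumptions for illustration, not a real platform's API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class SensitiveMessagePolicy:
    forwardable: bool = False             # forwarding blocked by default
    saveable: bool = False                # saving to the device disabled
    notify_owner_on_attempt: bool = True  # owner is alerted immediately

def handle_action(action: str, policy: SensitiveMessagePolicy,
                  alert_owner: Callable[[str], None]) -> bool:
    """Return True if the action is allowed; otherwise alert the owner."""
    permitted = {"forward": policy.forwardable, "save": policy.saveable}
    if permitted.get(action, True):
        return True
    if policy.notify_owner_on_attempt:
        alert_owner(action)  # the owner learns of the attempt right away
    return False
```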

In summary, this discussion argues that with a forwarding control algorithm, image fingerprinting technology and no ability to save content on devices, platforms can become safer by design and architecture. Such systems can certainly help prevent sexting abuse and the spread of NCII by giving users agency over their self-generated content and providing a safer online experience. However, the problem cannot be solved by technological measures alone, as they may also restrict content that is neither harmful nor offensive. Sexting abuse and NCII are interdisciplinary topics, and experts from other fields, such as human rights, digital rights, the state and the legislature, should also be involved before such measures are deployed (Ibid.). Chapter VI provides the conclusion of this essay.
