The Rule of Law on Instagram: An Evaluation of the Moderation of Images Depicting Women’s Bodies
In a new article published in the UNSW Law Journal, we investigate whether a sample of 4,944 like images depicting women’s bodies was moderated alike on the social media platform Instagram. Overall, we find that up to 22 per cent of removed images are potential false positives — images that do not appear to violate Instagram’s content policies and yet were removed from the platform. We argue that this result, among others, is a significant cause for concern that poses an ongoing risk of arbitrariness for women and users more broadly.
The moderation of user-generated content depicting women’s bodies is a highly controversial issue that continues to play out across social media platforms.
Content moderation refers to the processes through which platform executives and their moderators — whether humans, artificial intelligence systems or both — set, maintain and enforce the bounds of ‘appropriate’ content based on many factors, including platform-specific rules, cultural norms and legal obligations. We argue that decisions about the appropriateness of content are ultimately regulatory decisions, in that they attempt to influence or control the types of content we see and how and when we see it.
The problem is that platforms moderate content within a ‘black box’ that obscures internal decision-making processes from the view of more than two billion active monthly social media users around the globe. The lack of transparency around the decisions that platforms make continues to limit public understanding of how user-generated content is moderated in practice.
Ongoing controversies around the moderation of images that depict women’s bodies on Instagram underline how little is known about content moderation. We focus on Instagram given the number of different and often conflicting claims around how images of female forms are moderated on the platform.
For instance, some publications claim that Instagram removes — a practice also described as ‘banning,’ ‘censoring’ and ‘deleting’ — depictions of female forms in seemingly arbitrary or biased ways. Others have accused the platform of ‘blatant fat-phobia’ and of ‘fat-sham[ing]’ women in ways that could reinforce heteronormative body standards, among other things.
By contrast, some news publications show that thin-idealised images of women are also removed from Instagram, and claim that the platform is creating a positive space for the depiction of all body types. While there is a diverse range of claims about the platform’s processes for moderating depictions of female forms, a common theme is confusion among users. This is partly because it is difficult for users to identify what rules apply to different types of content and why certain content is removed while other apparently similar content is not.
In our new publication, we attempt to shed light on some of these competing claims by empirically investigating whether images depicting women’s bodies on Instagram are moderated in a way that aligns with the Anglo-American ideal of the rule of law.
The rule of law
The rule of law, at its core, aims to limit, control or restrain potential arbitrariness in the exercise of governing power. While this ideal has historically been confined to the public relationship between the state and its citizens, we situate this article within the emerging project of digital constitutionalism, which contends that public governance values can and should influence the private rules of non-state actors, including the policies of social media platforms.
Given that there is no universal set of rule of law values, we focus on formal equality, certainty, reason giving, transparency, participation and accountability. We argue that any attempt by Instagram to moderate, or regulate, content should adhere to these basic rule of law safeguards, which provide a well-established language to articulate and work through what is at stake for women and other users in the potentially arbitrary exercise of power over content.
What we did
In this paper, we empirically investigate whether a sample of 4,944 like images depicting (a) Underweight, (b) Mid-Range and (c) Overweight women’s bodies on Instagram was moderated alike in practice. After using innovative digital methods to collect images, we used content analysis to identify whether coded images in like categories were removed. Coding then enabled us to investigate true negatives (images that do not appear to violate Instagram’s policies and were not removed) and potential false positives (images that do not appear to violate Instagram’s policies but were removed) in each category. None of the images in this study appear to be explicitly prohibited by Instagram’s current content policies.
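To make this coding scheme concrete, the short sketch below shows one way the classification could be expressed in code. It is a minimal illustration under our own assumptions: the CodedImage fields, labels and helper functions are ours for exposition, not the actual instrument or pipeline used in the study.

```python
from dataclasses import dataclass

# Illustrative only: field names and labels are assumptions for exposition,
# not the coding instrument used in the study.
@dataclass
class CodedImage:
    category: str            # "Underweight", "Mid-Range" or "Overweight"
    removed: bool            # image no longer accessible when re-checked
    appears_violating: bool  # coder judged it to breach Instagram's policies

def classify(image: CodedImage) -> str:
    """Map a coded image onto the standard 2x2 outcome categories."""
    if image.removed:
        return "true positive" if image.appears_violating else "potential false positive"
    return "potential false negative" if image.appears_violating else "true negative"

def false_positive_share(images: list[CodedImage]) -> float:
    """Share of removed images in a sample that do not appear to breach policy."""
    removed = [im for im in images if im.removed]
    if not removed:
        return 0.0
    return sum(not im.appears_violating for im in removed) / len(removed)
```

Computed over the removed images in each category, a share of this kind is what the ‘up to 22 per cent’ finding discussed below refers to.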
It is important to note here that the subjects of these images might not, in fact, identify as a ‘woman’ or ‘female’. It is a limitation of the scope and method of this article that, by analysing decontextualised images against a binary classification of gender, we are unfortunately unable to sufficiently engage with the pressing concerns of transgender and non-binary people. Our aim is for this study to lay the foundations for more substantive future work that examines the impacts of content moderation from more diverse perspectives.
What we found
In stark contrast to the Anglo-American rule of law ideal, which is characterised by its opposition to arbitrary power, our results show that images were inconsistently moderated across all categories. The probability of removal for images that depict Underweight women’s bodies is 24.1 per cent, followed by 16.9 per cent for Mid-Range and 11.4 per cent for Overweight women’s bodies.
The overall inconsistent trend of content moderation that we observe leads us to two key findings. The first is that across the Underweight, Mid-Range and Overweight categories, up to 22 per cent of images that were removed by Instagram or by the user do not breach the platform’s policies and are therefore potential false positives.
The second main finding is that the odds of removal for an image that depicts an Underweight or Mid-Range woman’s body are, respectively, 2.48 and 1.59 times higher than for an image that depicts an Overweight woman’s body.
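These figures are odds ratios rather than simple ratios of the removal probabilities reported above (which would give 24.1/11.4 ≈ 2.1, not 2.48). As a check, converting each removal probability p into odds p/(1 − p) approximately reproduces the reported values:

\[
\frac{0.241/0.759}{0.114/0.886} \approx 2.47, \qquad \frac{0.169/0.831}{0.114/0.886} \approx 1.58
\]

The small gaps from the reported 2.48 and 1.59 are consistent with rounding in the percentages above.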
There are a number of possible explanations for these inconsistencies, principally content removal by users themselves or direct regulatory intervention by Instagram. Inconsistencies could also arise from the immensely difficult task that Instagram, like other platforms, faces in attempting to apply standards of appropriateness to content from all corners of the globe. However, given the secrecy around the internal workings of the platform’s moderation processes, it is not currently possible to identify whether differences in content moderation arise as a result of cultural norms of use or Instagram’s direct intervention. This lack of transparency makes it difficult for us to come to definitive conclusions around how and why content is removed more broadly.
What our empirical results tell us
Our results lend support to concerns that some like images depicting women’s bodies are not moderated alike on Instagram in practice. This is concerning for several reasons. One is that it suggests the platform is amplifying the expression of some female users while silencing others. Another is that inconsistencies do little to guide individual behaviour and significantly limit the extent to which users are able to understand and learn the bounds of acceptable content.
Interestingly, the high probability and odds of removal for the Underweight category suggest that claims that Instagram is less likely to remove thin-idealised images of women could be overstated. One possibility is that, in response to long-standing concerns that Instagram perpetuates harmful stereotypes of the thin ideal, the platform may have developed practices that are especially protective of certain types of body-positive content.
Overall, our results raise concerns around the alignment between Instagram’s governance practices and our selected rule of law values. We argue that the lack of formal equality, certainty, reason giving and user participation, and Instagram’s largely unfettered power to moderate content with limited transparency and accountability, are significant normative concerns which pose an ongoing risk of arbitrariness for women and users more broadly.
We conclude by outlining several steps that Instagram can take now to improve the transparency of its moderation processes, one of which is to publish the internal guidelines that its moderators follow behind closed doors. We stress that allegations of potential arbitrariness in the outcomes of content moderation will not be addressed by continuing to regulate content in secret; if Instagram, like other platforms, wishes to address growing concerns about its moderation processes, it must take steps to enable some degree of external verification and accountability. While these improvements will not be easy to make, they are crucial to identifying arbitrariness where it exists and to allaying the suspicions and fears of users where it does not.
Read the full journal article here.