Image by Gerd Altmann from Pixabay

Politics of neutrality

Abhishek Venkatesh · Published in Digital Diplomacy · 8 min read · Sep 5, 2020

The recent call for scrutiny of Facebook represents a growing conflict among varying notions of neutrality.

Of late, social media giants have come dangerously close to political storms in different countries. Facebook in India was the most recent case, facing accusations that it allowed hate-based content by certain members of the ruling party out of fear of business ramifications. While a detailed explanation followed, reiterating Facebook’s commitment to its content guidelines and the philosophy behind them, this is hardly the first time a social media giant has been accused of an observable bias. A little before that, Twitter found itself in a similar muddle. From its official account, it censured President Trump for advocating violent suppression of the Black Lives Matter protests, leading him to question Twitter’s commitment to neutrality and to ask whether it was an unfair tool that favoured Democrats. Even earlier, Twitter faced backlash for taking down accounts in Hong Kong and China in light of the protests.

Social media platforms can singularly be credited with redefining how technology, politics, and society interact. This interaction, which has now created lasting socio-political networks, is also largely sustained by how we interpret neutrality, a quintessentially political value. However, in what has become a major concern in public policy, social media giants have found themselves at the dead centre of this idea of neutrality. Technology may have led to greater democratisation of political values, but it has also brought to the forefront the role of the principals behind such technologies in the distribution of those values. This can also be seen in an insightful work by Karine Nahon, who describes three ways that platforms exercise power: influencing decisions, shaping political agendas, and shaping perceptions (Nahon 2015)¹.

This article, building on those ideas, focuses on the following questions, each of which has implications for this issue:

· What do we imagine neutrality to be?

· Are social media platforms innately apolitical, or neutral?

· Does the ambiguity of roles, ranging from intermediary to publisher to moderator, affect their implementation of neutrality?

· Does the responsibility for their neutrality fall on governments, or themselves?

Shaping Neutrality

Neutrality in the political sphere can refer to a principled distance from all ideologies and from opinions reflecting them. Complementing this idea is an equal treatment of all such legitimate ideologies and opinions. There are thus two aspects of neutrality to consider here: what is displayed to the user (thereby affecting their behaviour), and what the platform itself does (explicit advocacy).

Homophily, the tendency of like-minded people to associate with each other, is a characteristic behaviour of people on social media². While this is an expected tendency in a medium that aims to connect people, it is the aggregation of homophilic networks (pages and groups) that poses concern. These aggregations, owing to their high levels of activity, may produce bias, as algorithms may be programmed to surface information from such networks to users. Consequently, some major issues around targeting user behaviour include: the use of personal and non-personal data to display selective information and targeted political advertisements; opaque search indexing that filters search results based on other online (and sometimes offline) activities; and the high discretion given to gatekeepers and moderators of online communities to control the flow of information (which in turn affects its accessibility). These problems are essentially algorithmic in nature, and the starting point of any association with neutrality will lie in the architecture of such algorithms, and thus the platform itself.
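The amplification dynamic described above can be illustrated with a toy ranking function. This is a hypothetical sketch, not any platform’s actual algorithm: the sources, engagement numbers, and scoring weights are invented purely to show how ranking purely on engagement lets a high-activity homophilic cluster crowd quieter sources out of a feed.

```python
# Hypothetical engagement-driven feed ranking (illustrative only).
# Posts from a highly active cluster dominate the top slots even when
# the user follows a diverse set of sources.

def rank_feed(posts, top_n=3):
    """Order candidate posts purely by engagement signals and keep the top few."""
    score = lambda p: p["likes"] + 2 * p["shares"]  # invented weighting
    return sorted(posts, key=score, reverse=True)[:top_n]

candidate_posts = [
    {"source": "homophilic_group", "likes": 900, "shares": 400},
    {"source": "homophilic_group", "likes": 750, "shares": 300},
    {"source": "news_page",        "likes": 120, "shares": 30},
    {"source": "friend",           "likes": 15,  "shares": 2},
]

feed = rank_feed(candidate_posts)
# The high-activity cluster takes the top slots; the "friend" post never surfaces.
print([p["source"] for p in feed])
```

The point of the sketch is that no one programmed a political preference into `rank_feed`; the skew towards the homophilic cluster falls out of optimizing for traffic alone, which is why the article locates neutrality in the architecture of such algorithms.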

The second problem, of what the platform itself does, is more a matter of principle. Platforms are based on strong liberal principles, particularly those concerning free speech and access to information. In this regard, there is a moral force to stay committed to these principles, but how they manifest is what is interesting. Twitter’s censure of President Trump was a case of Twitter perceiving itself as a user affected by Mr Trump’s opinion. While it is true that equal treatment may end up legitimizing harmful extremist views, it is exactly in this space that the likes of Twitter need transparent, politically acceptable standards for enforcing neutrality. In a normative sense, neutrality here means the platform consistently applying its content guidelines; any deviation from that consistency would constitute a bias.

Perils of an identity crisis

The ambiguities around the business identities of social media platforms have also made it difficult to enforce an idea of neutrality. Social media giants, in different contexts, act as platforms (conduits for people to express and communicate), publishers (content as intellectual property, determining the style, layout, and editability of content), moderators (guidelines for posting, actions taken on reported activities, displaying content with greater traction), and activists (responding to prevailing public values and sentiments). The problem with such ambiguity is the lack of a comprehensive legislation or policy that can effectively address all these roles. Two examples merit attention.

First, Section 230 of the Communications Decency Act in the USA³ is one of the major legal provisions addressing the activities of ‘providers of interactive computer services’ (a blanket term for internet-based companies). Under it, such providers are not to be treated as the publisher or speaker of information provided by someone else. Additionally, they bear little liability if they voluntarily restrict access to, or the availability of, certain kinds of information. Essentially, it places the onus on users to be responsible for the content they post, while platforms can make necessary changes (including to access and availability) without being declared publishers. This has some major implications:

· It allows platforms to steer clear of litigation arising from inter-user behaviour (such as trolling) and from copyright and IPR infringement (artwork, tweets, original thought).

· It offers no compass for navigating questions of equality or neutrality.

· Algorithms, driven by traffic, may massively favour divisive clusters of information, leading to polarisation of users.

As a second example, consider the Intermediary Rules in India (read together with the proposed amendments to Section 79 of the IT Act⁴), which involve more complex regulatory challenges. The rise in fake news and hate-based content has prompted the government to explore harder stances in its draft amendments, and increased scrutiny of the ‘safe harbour’ protections that platforms currently enjoy under the Act. Mandatory assistance to State agencies without a proper framework (especially when read with the amendments to Section 69 of the IT Act) raises issues of State capacity, accountability, and surveillance. On the other hand, the requirement of ‘proactive monitoring’ by platforms strips them of their ‘passive’ nature, so they can no longer be treated as mere conduits for expression. Further, the recent Facebook incident raises questions about the efficacy of oversight mechanisms over ‘prohibited content’, which is also a requirement under the existing Intermediary Rules.

A brief overview of Facebook’s overall transparency strategy (in relation to political neutrality)

Sharing the onus of accountability

What can platforms do?

As a starting point, the political undertones of any deviations from guidelines can be assessed through pan-party reviews, or panels of political observers, who can then establish the existence of a trend beyond reasonable doubt. Further, a ‘consultative censure’ of public leaders, involving dialogue between the leader concerned and the platform, would go a long way towards upholding mutually acceptable standards of political expression.

Transparency initiatives between Big Tech and academia can go a long way in ensuring a government-free approach to building neutrality. Facebook instituted the Data Transparency Advisory Group (DTAG) in 2019, comprising eminent US academics from various fields. Twitter, a little earlier, set up its Trust and Safety Council in the same direction. Another visible example is the Social Science One project, where a collaboration between Facebook and researchers allowed for greater insight into the role of social media in elections. These are good moves towards social accountability and credible oversight of algorithmic developments.

Lastly, neutrality must also be reflected in how platforms are organized. One of the major concerns around Facebook’s appointment of the DTAG is its centralized functioning, and the lack of similar groups across other regions. It is also unclear how communication flows from content review operations across regions to this Group. Another organizational issue lies in the design of metrics. A key observation from Facebook’s transparency report is the absence of ‘prevalence’ metrics for hate speech, bullying, and organized hate. While Facebook makes it clear that these are still evolving, a consultative approach to designing such metrics (for Twitter as well) would be an affirmation of neutrality, and would help in dealing with biases that may arise in the future.

What can governments do?

Personal and non-personal data can be extremely useful, but also invasive, tools for political advertising. From meta-analyses of voting patterns across constituencies to direct appeals, political advertising must be built on robust data protection and privacy frameworks.
In this regard, better oversight by the Competition Commission of India is vital. It can help curb the near-monopoly over the data of a billion users that may otherwise result in an asymmetric power to influence the political identities of citizens. This must work alongside the companies and the proposed Data Protection Authorities (under the Personal Data Protection Bill and the Report on Non-Personal Data) to push for greater transparency and the open-sourcing of algorithms. There must also be oversight of political advertising standards and mechanisms, in concert with the Advertising Standards Council of India and the Election Commission of India.

Further, a classification of intermediaries⁵ would reduce the ambiguity in roles. There is a need to create a category of content-based intermediaries, distinct from other service-based intermediaries, building on the evolving definitions, scope, and liabilities that courts have largely dealt with until now. This would reduce the case-by-case approach, which ran from the Avnish Bajaj case⁶ to the TikTok-related orders of the Madras HC⁷, and in the process outline their accountability.

Lastly, governments must reduce their reliance on ad hoc arrangements for communicating with social media giants, and move to a more stable dialogic process. This builds trust and greater predictability in public policy, with the vision of a connected social future going hand in hand with regulatory frameworks.

In conclusion

The purpose thus far has not been to assign a Leviathan-like character to Big Tech. It is an indispensable manifestation of globalisation, but the regulatory gaps risk antagonizing both governments and tech giants. Government control over the most powerful symbol of free speech would strengthen the hold of totalitarianism, while a total retreat would allow for uglier versions of the Cambridge Analytica scandal. It would therefore be unfair to place the burden of regulation, or of responsibility, entirely on governments or on companies.

Any burden of neutrality must be shared, and must be progressive, much like our commitment to evolving democratic ideals.

References

1. Nahon, K. (2015). Where There is Social Media There is Politics. In A. Bruns, G. Enli, E. Skogerbo, A. O. Larsson, & C. Christensen (Eds.), The Routledge Companion to Social Media and Politics (pp. 39–55). Routledge. doi: 10.4324/9781315716299-4

2. Benkler, Y. (2006). The Wealth of Networks: How Social Production Transforms Markets and Freedom. New Haven: Yale University Press.

3. Ruane, K. A. (2018). How Broad A Shield? A Brief Overview of Section 230 of the Communications Decency Act. Retrieved from Federation of American Scientists: https://fas.org/sgp/crs/misc/LSB10082.pdf

4. Sadana, T., Rastogi, A., & Taneja, A. (2020, 05 12). Impact Of Proposed Amendments To Intermediary Guidelines. Retrieved from Mondaq: https://www.mondaq.com/india/it-and-internet/932340/impact-of-proposed-amendments-to-intermediary-guidelines

5. Bansal, S. (2020). Content regulation lapses cast doubts on Facebook’s biz model. Retrieved from Livemint: https://www.livemint.com/companies/people/-content-regulation-lapses-cast-doubts-on-facebook-s-biz-model-11598232566696.html

6. Sharat Babu Digumatri vs Government of NCT of Delhi, 3 CompLJ364 Del (High Court of Delhi 2005).

7. Software Freedom Law Center. 2019. ‘Madras HC Bans Downloading Tik Tok’. (available at https://www.sflc.in/madras-high-court-bans-downloading-tiktok)
