Published in The Opinion

How to Counter Fake News?

A Critical Analysis of Social Media Regulatory Laws in India

By Ishita Mundhra

This article argues for strict social media regulation to curb the growing spread of fake news in India, which harms political and social processes alike, especially in light of the current pandemic. It studies the Draft Intermediary Guidelines published in 2018, which seek to deal with the menace of fake news, analyses the issues in these guidelines, and suggests remedies for the same.

Introduction

The growing phenomenon of fake news has no universally recognized definition. The surrounding literature, however, establishes that it is any piece of information, regardless of how it is stated or the medium on which it is posted, that discredits the conviction of experts and institutions by ignoring non-partisan data. This obstructs the drawing of rational conclusions from the available objective data, resulting in the growth of polarising and inaccurate opinions.

Fake news serves a variety of motivations (political, economic, subversive, and social) and has proliferated immensely with the growth of social media.

India has been increasingly susceptible to fake news because of the tremendous growth of social media platforms in the country, with an estimated 376 million users, coupled with a lack of effective policy and legislative measures to curb it.

The impact of fake news, especially in the current milieu, is catastrophic for economic and political activity. For instance, ill-informed advisory warnings against the consumption of chicken, widely circulated on WhatsApp, cost the world's fourth-largest chicken-producing nation 1.6 billion rupees a day and triggered a severe crisis of excess poultry supply.

Other health-hazardous misinformation included recommendations of high Vitamin C intake, a battery of unverified self-detection COVID-19 tests, and baseless speculation that food consumed by particular communities could cause the contraction of the disease, which also facilitated the communalisation of the epidemic.

Rampant fake news in India, flowing primarily through WhatsApp and Facebook, has fuelled heinous crimes like lynching and kidnapping, with a reported two dozen deaths attributed to misinformation on WhatsApp in 2018.

Countering fake news is often extremely complex because it requires cooperation from various sectors — law enforcement agencies, the legislature, civil society, and the media — which must work in tandem to limit the problem and its associated risks. While provisions in the Indian Penal Code and the Information Technology Act (hereinafter, the “IT Act”) exist to tackle and penalize misinformation, the government released new draft Intermediary Guidelines in December 2018, triggering significant debate over the best strategy for countering fake news. This piece analyses these guidelines and assesses their workability.


The Information Technology Intermediary Guidelines (Amendment Rules) 2018

The suggested draft guidelines seek to replace the 2011 Rules, which lay down the due diligence obligations to be followed by intermediaries — the media platforms falling under the ambit of the IT Act.

Section 79 of the IT Act provides a “safe harbour” to these intermediary platforms, and the Rules have been notified under this Section. It is argued that the draft Rules would effectively end the protection previously afforded to these platforms.

What are the Concerns Regarding the Rules?

A problematic idea underlying the construction of the guidelines stands in the way of their effective implementation: the overreaching power afforded to the government in curbing fake news, which contravenes the right to privacy guaranteed to every citizen.

The Rules require intermediaries to remove “unlawful” content. This contravenes the Shreya Singhal judgment, which established that “unlawful” acts extend only to those restricted by Article 19(2) of the Indian Constitution and cannot be given the wider purview the Rules provide. Allowing a subordinate executive notice by a Ministry to take precedence over existing law, in the form of regulations under the IT Act, would constitute an excessive delegation of legislative functions. Delegation cannot extend to altering the scope of the current law, especially where it borders on the violation of intrinsic rights.

Rule 3(8) of the guidelines, which requires intermediaries to remove any content within 24 hours of receiving a government notification, also raises the possibility of arbitrary government action carried out to suit political or other motivations. Moreover, a 24-hour window does not give an intermediary enough time to examine the content and understand the reason for its removal, affecting the platform's autonomy while increasing its liability.

The Rules also allow tracing the origin of content deemed to be fake news, or of anything deemed impermissible by “any government authority”, which would let governmental authorities breach the end-to-end encryption provided by platforms like WhatsApp. This is an overreach of what the IT Act, the parent legislation, allows.

The concerned government agency would also be able to access the private information of social media users, scrutinizing more data than constitutionally set limits permit. The agency would further be able to use the intermediaries' expert monitoring mechanisms, whose extent and nature have not been specified.


How Can The Rules be Modified to Effectively Counter Fake News?

Vague and ambiguous terms — “unlawful content”, “grossly harmful”, and “threaten” — should be eliminated and replaced with terms that have recognized definitions and limits, discarding any uncertainty.

Rule 3(9) presents an interesting alternative to the theme of excessive governmental control binding the rest of the rules. It calls for the use of automated tools and other machinery to detect and curb news that may be “unlawful” or fake. While this construct has also garnered severe criticism from a legal standpoint, it may serve as the most practical solution to the technical complexities involved in detecting fake news, provided the earlier dilemma is resolved by using more neutral terms and reducing the control given to the government.

Relying only on human labour to detect fake news is less effective: humans may be unable to distinguish fake news from real news, are swayed by various socio-political motivations, and often fall for the clickbait that propagates fake news. Moreover, humans are mostly proficient only in detecting celebrity fake news, while automated tools can cover other sectors as well, proving the diversity of their application.

While AI-driven mechanisms may be biased, and concerns regarding the abolition of the “safe harbour” granted to intermediaries remain, it is contended that automated tools can still be moulded to counter these problems more effectively than humans. To detect fake news effectively, they must be modelled on two bases — a language approach and a fact-based approach. The former analyses punctuation, the type of words used to describe a scenario, and the emotional content of the message, which is instrumental in distinguishing fake news from satirical pieces. The latter checks how widespread the news is to gauge its credibility and thus relies on fact-checking technology already in existence.
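Purely as an illustration of these two approaches — not a depiction of the Rules or of any deployed detection system — the combination of a language-based score and a fact-based check can be sketched in a few lines of Python. The word list, weights, and threshold below are entirely hypothetical:

```python
import re

# Hypothetical list of sensationalist words; a real system would learn
# such features from labelled data rather than hard-code them.
SENSATIONAL_WORDS = {"shocking", "miracle", "secret", "banned", "exposed"}

def language_score(text: str) -> float:
    """Language approach: score punctuation, capitalisation, and
    emotionally loaded word choice (higher = more suspect)."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    exclamations = text.count("!") / max(len(text), 1)
    tokens = text.split()
    shouting = sum(t.isupper() for t in tokens) / max(len(tokens), 1)
    sensational = sum(w in SENSATIONAL_WORDS for w in words) / len(words)
    # Hypothetical weights for the three linguistic cues.
    return exclamations * 10 + shouting + sensational * 5

def fact_score(claim: str, verified_claims: set[str]) -> float:
    """Fact-based approach (crude): 0 if the claim matches an already
    fact-checked statement, 1 otherwise."""
    return 0.0 if claim.lower().strip() in verified_claims else 1.0

def flag_message(text: str, verified_claims: set[str],
                 threshold: float = 1.0) -> bool:
    """Combine both approaches and flag the message for human review
    when the combined score exceeds the (hypothetical) threshold."""
    return language_score(text) + fact_score(text, verified_claims) > threshold
```

A sensationalist, unverified message would exceed the threshold and be flagged, while a neutral message matching a fact-checked claim would not; in practice the language features would come from a trained classifier and the fact check from an external fact-checking service, as the article notes.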

India has experienced monumental growth of social media platforms, with each active internet user spending at least 30 minutes a day browsing a variety of content garnering interest from all strata of society. In such a scenario, it becomes critical to filter fake news with appropriate mechanisms, because daily exposure to it is far greater than commonly imagined.

Legislation and governmental regulation to counter fake news should be framed within the bounds of the existing parent legislation and constitutional obligations, so as to protect intermediaries and save them from excesses by governmental agencies.

To counter fake news successfully through legislation, it is also essential to study the social factors promoting it. The primary motivation behind the majority of circulated fake news is ideological benefit: popularizing a particular stance on certain subjects, which may be harmful and may align with majoritarian perspectives.

Individuals also feel the need to spread false content to conform with peer groups that may bring them social capital or other gains, and are thereby pressured into subscribing to a particular viewpoint.

Social comparison — the phenomenon of moulding one's opinions to those shared by the people around one, owing to an inability to evaluate the information oneself — also plays a major role. Cognitive factors such as education, political opinions gained through socialization, targeted advertisements, and economic factors further facilitate this practice.

Therefore, any guidelines or laws on social media regulation by an intermediary must also take account of these social and cognitive factors to constructively counter the menace of fake news.


Conclusion

The disastrous impact of fake news has become increasingly evident with the onset of the COVID-19 pandemic. Fake news seeks to displace current and legitimate narratives by publicizing unresearched and illegitimate material to suit the ideological convictions of large political parties and other individuals at the top of the food chain.

The Draft Guidelines, which seek to reduce instances of misinformation on social media platforms, place immense obligations on intermediaries while granting the government excessive power to regulate content. This poses a significant privacy concern for users and must not be brushed aside under the garb of the greater good.

The Rules, however, have scope to improve: by utilizing existing technologies, taking sociological factors into account, and bringing legitimate news agencies and content to the forefront.


Ishita Mundhra — 2nd Year, B.A. L.L.B. (Hons.), West Bengal National University of Juridical Sciences
