Do we need new human rights in the era of fake information?

Lara Mikocki

Fake Culture, the Truth and the Right Not to Be Misled

Among the many global challenges and changes of the last 40 years, at least one major development has been gathering attention: technology. Though technology is commonly associated with privacy threats, there is another problem: fast-spreading fake news, fake social media accounts, fake media, and deepfake algorithms and AI (Bullock and Luengo-Oroz, 2019). I have dubbed this species of information ‘fake information’. It threatens our political, legal and media systems, not to mention our personal relationships, raising the question: do we need new human rights in the era of fake information?

Technology and Current Human Rights

If Facebook were a country, it would be the biggest in the world. The size of Facebook’s user base translates to almost two in seven of the global population using it each month, around 2 billion people (Taylor, 2016). Although technology is much more expansive than Facebook, this scale of information and communication technology exemplifies the potential impact technological assets have on all aspects of life. The wake of these developments asks, at minimum, for a review of human protections and, at most, for new human rights.

As technology weaves into every aspect of society, it inevitably touches upon human beings and the rights that guarantee their dignity, autonomy and freedom. However, the foundational international human rights instrument, the Universal Declaration of Human Rights (UDHR), was adopted in 1948, over 70 years ago. A simple comparison of the pace of technological development with the age of the UDHR suggests that new threats to human rights are likely to emerge from technological progress that could not have been foreseen in 1948.

Human rights documentation does not guarantee that violations of persons will be avoided; however, these legalised norms do tend to influence and guide (inter)national law and policymaking (Peters, 2019). The UDHR is perhaps the most globally accepted vision of human rights objectives. It has been translated into over 500 languages (UN, 2019) and its principles are echoed in the laws of more than 90 countries (Gumbis et al.). Human rights are articulations of the underlying normative idea that all human beings hold equal rights to be respected in their person, regardless of ethnicity, sex, ideology or nationality (SEP, 2019). They are expressed as 30 articles, and although there is no space to discuss them all here, technology impacts almost every human right: the right to life; the right to privacy and expression; the right to a fair trial and the presumption of innocence; workers’ rights; the right to free elections; even the rule of law itself (EU Council, Algorithms and Human Rights, 2018).

We can also point to world events in which technology has impacted human rights: the disruption of privacy, surveillance and data, as in the Facebook–Cambridge Analytica scandal; the weaponisation of social media, as in the attacks on the Rohingya in Myanmar, propagated by fake accounts on Facebook; or the threat to personal security posed by widespread cyberbullying.

In relation to fake information, current UN human rights literature recognises two relevant rights as potential sources of protection: ‘freedom of information’ and the ‘right to the truth’.

Freedom of information is an extension of freedom of speech: “everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers” (Article 19, UDHR; echoed in Article 19, ICCPR). However, this assumes the information is correct, rendering the article arguably lacking.

Outside the 30 listed articles, UN literature recognises another right: ‘the right to the truth’. Some legal systems consider the right to the truth to be integral to the enjoyment of freedom of information and freedom of expression. In the relevant Human Rights Council resolution, the Office of the High Commissioner for Human Rights was invited to “(…) provide information on good practices in the establishment, preservation and provision of access to national archives on human rights, and to make the information received publicly available in an online database” (A/HRC/RES/21/7). This suggests that the right to the truth amounts merely to a responsibility to provide information, not correct information. Later in this paper, I discuss further how this legislation is lacking with regard to misinformation.

Fake Information: Historical and Present Day

History is littered with fake information, from myths and stories to, contentiously, religious scripture. Misinformation and propaganda have been features of human communication since at least Roman times, when Antony met Cleopatra (Posetti and Matthews, 2018). History itself is most likely partly fake: we rely on documents often written from a single perspective, passed down over generations and changing with every interpretation. This suggests that fake information today is no different from that of the past. The point is legitimate, but it is not a reason to discount examining misinformation; rather, it demands it. With awareness comes responsibility. An ethical critique of the stories we tell ourselves comes at a time of leaping advancements in research techniques, combined with the ability to manipulate information faster and with more sophistication than ever before. What separates the fake news of the past from that of today is context: the advent of the Internet. This brings a new sophistication to the production of fake information, an unprecedented scale on which it is created, and a speed and effectiveness with which it is disseminated (McGonagle, 2017).

Research documents often discuss the post-truth era in terms of fake news; however, with the development of deepfake algorithms, a broader term for the phenomenon is necessary, which I have dubbed ‘fake information’. Fake information can refer to any deceptive manipulation of content posing as information.

The key reasons for the increase of such a phenomenon include: the scale and sophistication with which information is produced, the speed and effectiveness of its dissemination, the anonymity provided by social media, algorithms, and advertising, all of them intrinsically related to the Internet. It is therefore evident that [misinformation] is a consequence of the information society: in the big picture, the problem is the entire information ecosystem (Regules, 2018).

One kind of fake information, ‘deepfake’ artificial intelligence technology, promises to create doctored videos so realistic that they are almost impossible to tell from the real thing (Benjamin, 2019). Not only can existing persons be recreated and manipulated; entirely new persons can be created to deliver manipulated messaging. Moreover, deepfake technology is cheap, easily available and technically straightforward.

Another, more prominent type of fake information is fake news: “information that has been deliberately fabricated and disseminated with the intention to deceive and mislead others into believing falsehoods or doubting verifiable facts; it is disinformation that is presented as, or is likely to be perceived as, news” (Regules, 2018). The industry is so sophisticated that in North Macedonia a small publishing house specialises in rewriting fabricated articles for two major copycat websites targeting US readers (Oxenham, 2019).

It must be recognised that information that is merely false does not automatically fall under the definition of fake information; otherwise a children’s fantasy book would count as a violation. Where the line falls between deceptive and non-deceptive misinformation can become unclear. Footage of US House Speaker Nancy Pelosi was edited to make her appear drunk, and critics condemned Facebook for keeping the footage on the platform. Political mockery often uses information out of context, deliberately. This raises the quandary of what is, and what is not, acceptable deceptive misinformation, where “an overly broad rule could backfire…” (Chen, 2019). Clickbait, erroneous news, satire and conspiracy theories do not constitute fake news per se (Regules, 2018). Avoiding any impingement on the right to expression is a recognisably challenging task.

Nevertheless, when it comes to regulating fake information, policymakers should focus on information that is false, inaccurate or misleading, and that is designed to harm society by deceiving people.

Human Dignity, Autonomy and Misinformation

Human dignity, expressed in Article 1 of the UDHR, is the root of all other human rights. This makes misinformation problematic because it can involve deception, and deception is problematic because it inevitably compromises dignity and autonomy.

The idea of personal autonomy is an active concept in human rights literature, and arguably fundamental to it (Richards, 1981). Every person must have autonomy in order to be free to make their own decisions. The concept was heavily explored by classical philosophers such as Jean-Jacques Rousseau (1712–1778), who strongly influenced Immanuel Kant (1724–1804), who in turn influenced contemporary political philosophers such as John Rawls. In order to argue for the proposed right to the truth, we will focus on Kant’s notions of autonomy and universality as measures of whether misinformation is morally acceptable.

Autonomy, or self-legislation, is at the heart of Kant’s moral theory. Autonomy is valuable because it grants us freedom, and according to Kant we can claim this freedom because we are rational creatures: we are able to reason, therefore we are autonomous and free. For Kant, this is the source of humanity’s dignity. In a political sense, respect for autonomy is a valuable foundation of international socio-political institutions, where moral duties born from autonomy can translate into legal duties such as human rights. The question then remains: if people deliberately spread incorrect information in order to deceive, can you still make an autonomous choice? I would argue not. If you wish to exercise your autonomy to reason about a moral choice, you cannot do so when the information you have is misleading. How can you decide the right thing to do if you do not have the right information?

Another feature of Kant’s philosophy is the notion of universality, closely connected to freedom and autonomy. Kant’s proposal for universality helps us decide whether something is morally right, which means we act rationally, which means we are autonomous and free. Universality, and therefore moral rightness, follows from a procedure called the Categorical Imperative (CI). The CI helps evaluate a right action through reason, and governs what we ought to will. A principle we propose to act on is called a maxim; in the case of misinformation, the maxim might be “it is right to disseminate incorrect information”. Whether this is morally acceptable is where universalisation comes in: we test the maxim against the CI, which commands its logical universalisation. This universalisation requirement states that one must act only in such a way that the maxim could be universalised so as to become a kind of “world law”, in that it could be common knowledge to all.

A maxim is ruled out as immoral when it leads to a contradiction once it becomes a universal law. If misinformation were universalised, no statement could be trusted as information in the first place; the concept of information, and therefore of misinformation, would collapse, rendering a contradiction. Only when you submit to the CI do you act rationally, and only then are you autonomous and free. Spreading misinformation fails this test, making it morally corrupt and grounding a valid claim to correct information.

‘The Right to the Truth’ and ‘The Right Not to Be Misled’

While there is no specific international convention on the right to the truth (and while UN declarations are not binding agreements), certain regional and national courts have confirmed the enforceability of this right within their jurisdictions (González and Varney, 2013). In recent years, the ‘right to truth’ emerged as a concept to urge responses to State crimes in which enforced disappearances of persons compromised human rights (UN-HRC, Study on the right to truth). Though there are some international law documents, the right to the truth is not clearly designated as a human right within the UDHR, and the articulations that do exist do not clearly address misinformation as a feature of the right.

Current literature describes the right to the truth as follows:

“…[that] which sets out the right of victims to know the truth regarding the circumstances of the enforced disappearance, the progress and results of the investigation and the fate of the disappeared person, and sets forth State party obligations to take appropriate measures in this regard, and the preamble to the Convention, which reaffirms the right to freedom to seek, receive and impart information to that end…” (A/HRC/RES/21/7)

From this description, there is little recognition of misinformation, only a responsibility to provide missing information. This suggests another dimension to the current standing of the right to the truth, first proposed by Dutch philosopher Naomi van Steenbergen: the ‘right not to be misled’. Such a right would respond decisively to information that is false, inaccurate or misleading and designed to deceive people, and that thereby compromises human dignity and autonomy. Whether it would be an addition to an existing human rights articulation such as ‘the right to the truth’, or a new principle entirely, is a practical question. The discovery here is that misinformation is not adequately addressed in international human rights law.

But What Is the Truth?

A clear point of contention in the pursuit of a human right that promotes truth is what the truth actually is. This brings with it many philosophical dilemmas; for example, what is true for me might not be true for you. The concept of truth has been examined by philosophers for centuries and is one of the central subjects in philosophy. It is also one of the largest (SEP, 2019), so we will not cover it adequately here.

Considering the Kantian logic supporting this paper, a brief overview of a deontological vision of truth would suggest, first and foremost, that information should not be designed specifically to mislead or harm. The cornerstone of published ‘fake information’ is its falsity and its intention: it is intentionally or knowingly false and its purpose is to deceive the reader. In grossly over-simplified terms, a duty-based ethicist would argue that, even if falsity had the better consequences (as a consequentialist might claim), it is still morally wrong to lie. A vision of truth in regard to misinformation therefore hinges on intent: if content is intentionally or knowingly false and its purpose is to deceive, it violates the boundaries of truth-keeping.

Having looked at one broad ingredient of truth, intent, we can narrow the discussion to the context of misinformation. Here we can draw on the existing principles that come closest to addressing misinformation: those espoused by international codes of journalism ethics such as that of the International Federation of Journalists (IFJ). The IFJ, press councils and self-regulatory systems openly support principles of truthfulness, accuracy, objectivity, impartiality, fairness and public accountability (Gylfadóttir, 2017). To these, I would add the principle of transparency.

Arguably, fake information ceases to impede autonomy when it is accompanied by transparency. The case for open and honest social communication was made forcefully by Kant, arguing against secret treaties in ‘Perpetual Peace’, and by Rousseau, whose ‘search for transparency sought not merely to reveal the truth of the world, but also to make manifest his own internal truth, his own authentic self’ (Jay, 1993). Examples of transparency in the West can also be found in the 19th century, not least the unprecedented investment in public libraries and museums in urban centres, which placed expertise, artefacts and records before the public gaze (Birchall, 2011). As a contemporary example, mobile phones in South Korea are required to play a shutter sound whenever a photo is taken (Ja-young, 2012), an application of transparency to technology.

Practically, legislating on ‘fake information’ can lead to freedom of expression violations in very real ways, particularly given the legal uncertainty and arbitrary application of the term (Regules, 2018). This is a clear concern; yet, at present, large media corporations are taking it upon themselves, through ‘fact-checking units’, to remove or allow information without any international legal framework, which risks decisions being made on commercial interest and, again, personal freedoms being violated. In light of this, one practical response is education. Equipping citizens with the skills to identify, recognise and at least question obvious misinformation can help reduce its power to spread. In Italy, a high school programme has been launched to foster media literacy. Operating since October 2017, the experimental project aims to teach students how to identify suspect URLs and how to verify news stories by reaching out to experts (Regules, 2018).
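To give a concrete flavour of the media-literacy skills mentioned above, the sketch below encodes a few simple ‘suspect URL’ heuristics of the kind such a programme might teach. It is only an illustration: the trusted-domain list, the rules and the thresholds are hypothetical examples chosen for this article, not part of the Italian curriculum or any real fact-checking tool.

```python
# A minimal, illustrative sketch of "suspect URL" heuristics of the kind a media
# literacy class might teach. The trusted-domain list, rules and thresholds are
# hypothetical examples, not a real curriculum or fact-checking service.
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"bbc.co.uk", "reuters.com", "apnews.com"}  # hypothetical examples


def suspicion_flags(url: str) -> list[str]:
    """Return human-readable reasons why a URL might deserve a second look."""
    flags = []
    parsed = urlparse(url if "://" in url else "http://" + url)
    host = parsed.hostname or ""

    if parsed.scheme != "https":
        flags.append("no HTTPS")

    # Copycat check: a trusted brand name buried inside an unrelated domain,
    # e.g. "bbc.co.uk.news-update.info" is not actually a BBC address.
    for trusted in TRUSTED_DOMAINS:
        brand = trusted.split(".")[0]
        if brand in host and host != trusted and not host.endswith("." + trusted):
            flags.append(f"looks like a copycat of {trusted}")

    # Very long or hyphen-heavy hostnames are a common sign of throwaway sites.
    if len(host) > 40 or host.count("-") >= 3:
        flags.append("unusually long or hyphen-heavy domain name")

    return flags


if __name__ == "__main__":
    examples = [
        "https://www.reuters.com/world",
        "http://bbc.co.uk.news-update.info/article1",
    ]
    for url in examples:
        print(url, "->", suspicion_flags(url) or ["no obvious flags"])
```

Heuristics like these can only flag a URL for a second look; deciding whether the content itself is deceptive still requires the human judgement such a programme aims to cultivate.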

Regardless of these principled and practical examples, no solution has yet been found to the global challenge of truth and misinformation (Gylfadóttir, 2017).

I hope to have argued for the necessary review of human rights to include protection from misinformation, whether termed ‘the right not to be misled’ or added to existing human rights. I have argued that such protection is necessary to exercise personal autonomy and protect dignity. I have also explored the existing rights relating to truth-keeping, and found that the existing visions do not adequately cover misinformation. In response to the counterargument about what the truth actually is, I have made suggestions as to how truth might sit in relation to misinformation. From this I have suggested some antidotes to misinformation, including a commitment to transparency and non-deceptive intent. This exploration has certainly left me with many open questions, but I hope to have touched on some key areas of the topic, and I will no doubt continue to question the culture of misinformation. I hope you do too.
