Knowledge in the Balance

Catherine Morris
Media Studies COM520
20 min read · Dec 14, 2021

Weighing Disinformation against Censorship

The following essay examines the complicated relationship between disinformation and censorship.

Introduction

In the age of the internet, it’s nearly impossible to avoid the impact that global citizens have on one another. Access to instantaneous online communication gives people the tools to connect with others, speak out against injustices, find information, and navigate the world in countless ways. Even with all these potential benefits, though, it has become increasingly important to consider the harms and dangers that come from fast and unimpeded access to other people. For many, the early days of the internet meant publishers and stakeholders were no longer the gatekeepers of information or discourse. As the internet grew ever more interactive and user-generated, governments and corporations took a greater interest in online censorship. At the same time, because it became easier for everyone to share and spread personal opinions and arguments, disinformation thrived. In the past several years, the topic of internet and social media regulation has exploded into a centerpiece of media studies literature.

The reason disinformation remains relevant and unresolved is its complicated relationship with censorship. It’s easy to claim that disinformation online should be eliminated, but what follows is a more complicated spiral of questions that this essay attempts to address. Who gets to decide what’s true and what counts as knowledge? This question considers the current state of epistemology. Is scientific and technical inquiry still the standard of expertise and knowledgeability? This question addresses the relationship between science and the public as it currently stands. How do power, money, and personal interest impact decision-making when it comes to disinformation and censorship? The intention here is to investigate the ethical point of view as it applies to the potential harms of this conflict. Before addressing these questions, I will provide some background on censorship and disinformation, particularly what they mean in this context and why they are worth discussing. Following the main discussion points, this essay will review some existing attempts at a solution and pose questions for future consideration.

Censorship

Censorship occurs whenever ideas or communication are suppressed, usually justified as an effort to prevent offense (American Civil Liberties Union, n.d.). This may be the greatest threat facing democracy: an inability to express dissent quashes the possibilities of independence and freedom. The powers that be have attempted to manage the flow of information for centuries; known examples of censorship reach as far back as 443 BC in Rome and 300 AD in China (Pascale, 2019; Newth, 2010).

Censorship is often portrayed as an act of violence perpetrated by dictators and authoritarian rulers, a portrayal which isn’t false but certainly misrepresents the modern reality of the situation. Alongside unjust governments, censorship is also utilized by capitalist systems and democratic nations (Pascale, 2019; Gentzkow & Shapiro, 2006; Mullainathan & Shleifer, 2005). Any time silence is beneficial for profit, censorship is likely to be involved.

The ideals of free speech, and specifically freedom of the press, are called into question when governments or corporations begin to censor any form of communication. Especially in a country that boasts of extensive freedom, like the United States, citizens might be shocked to learn that, in a report by Reporters Without Borders, the U.S. is considered one of the most dangerous places for reporters (Pascale, 2019; Noack, 2019).

Free speech has its roots in the public forums of early democracy in ancient Greece. A founding principle of democracy was that no one should be prevented from presenting their point of view; never was it guaranteed that a person would succeed in convincing others, but at the very least, they had the right to be heard. Censorship, therefore, interferes with the pillars of a fair democracy. Without the freedom to express one’s own ideas, values, opinions, or beliefs, the individual is at the mercy of the existing hegemonic authorities.

Disinformation

A second foundational pillar of democracy is an individual’s right to informed decision-making free from coercion. This is why persuasion, rhetoric, and propaganda have come to carry negative connotations. In a modern-day fair and just society, citizens should have access to information without unnecessary barriers. One such barrier, which interferes with democracy in a way distinct from censorship, is disinformation. The Russian equivalent of the term ‘disinformation’ was coined by Stalin in 1923 to describe false information spread purposefully and systematically to confuse or mislead (Pascale, 2019). This is how disinformation and misinformation, terms often used interchangeably, differ: misinformation is unintentional by nature, whereas disinformation is produced or spread with knowledge of its falsity.

The challenge posed by disinformation is its complicated nature. Rarely is disinformation an isolated lie; more often, it’s combined with true or reasonable information, or it’s stated in such a way that the problem lies in the interpretation and the discourse created, not the facts themselves (Pascale, 2019; Longhi, 2021). When verifiable facts and aspects of reality are turned into debatable talking points, this becomes an obstacle for the people of a democracy trying to make informed judgments and decisions. Beyond interfering with individuals’ ability to stay informed and judge well, disinformation also strains the relationship between citizens, government, and media once people become aware that it exists. What should be trusted authorities and sources of knowledge lose their value to consumers once people feel their trust has been violated (Nyhan & Reifler, 2010; Shu et al., 2019).

As disinformation presents itself as truth, reality becomes negotiable and people struggle to sort facts from fiction; the result is that the democratic process becomes endangered (Pascale, 2019). The effectiveness of democracy hinges on the public’s ability to identify desirable policies and their willingness to participate in this process, and the constant burden of misinformation and disinformation drains both engagement and capacity (Kahne & Bowyer, 2016).

The tension between censorship and disinformation leads us to question how we can ethically balance what seem to be conflicting aspects of the same ideal: a life with access to information and education, the right and ability to influence one’s own situation, and freedom from oppression and abuses of power. Both censorship and disinformation interfere with this ideal, but solving one problem seems to confront us with the other. Attempting to ban and prevent disinformation forces us to consider the implications of censoring people, even with good intentions. Preserving freedom of speech and avoiding censorship might lead us to ask at what point public safety and harm prevention become a higher priority than individual freedoms.

Epistemological Perspective

The first question to address in striking a balance is this: who gets to decide what is true and what counts as knowledge? Through an epistemological lens, the conflict between censorship and disinformation stems from disagreement about what is true and factual. In an increasingly complex society with countless possible motivations at play, it’s challenging for the average person to separate what’s true from what’s being presented as true for the sake of an agenda. Proctor (2011) explains that both institutions and governments are challenging and questioning the epistemological approach of our society, including the way that knowledge is generated and validated.

In the past, particularly the pre-internet era, a certain level of trust was placed in scientific and educational institutions as the knowledge-creators of society; academia was the environment for inquiry and research. In the present day, companies, governmental organizations, and countless other independent institutions contribute to bodies of research and proposed knowledge. With abounding options for research sources comes the possibility of conflicting information. When presented with such a contradiction in the current polarized political environment, people have to make their own judgments about what’s true, and the outcome is often that people judge information to be true when it aligns with their preexisting beliefs (Kahne & Bowyer, 2016). Some have called this trend confirmation bias, while others might point to Muzafer Sherif’s social judgment theory, a theory of persuasion claiming that people accept or reject new information depending on where they already stand rather than on an honest evaluation of the information.

An often-proposed solution to combat disinformation without infringing on freedom of expression is to increase the public’s ability to identify disinformation and evaluate sources more reliably. In places where this goal has become part of the curriculum, the responsibility has fallen on educators, who have limited time and resources to teach young people how to identify legitimate sources and how to read critically enough to identify bias and assess the truthfulness of claims. Because of extreme partisan division, even the basics of argument analysis can be considered political. Applying logical fallacies to real political discourse, for example, might be met with resistance from a parent who firmly holds the contested point of view. Reactions to critical race theory in the past year have shown us just how divided people can be when it comes to the facts. After generations of textbooks avoiding the harsh realities of American history, many people have a fundamental misunderstanding of the past, which leads them to resist any effort toward accurate education or retributive justice (Loewen & Stefoff, 2019). Part of the problem is that, from a citizen’s perspective, it seems that educators have changed the facts. These angry parents didn’t conjure their beliefs of their own accord; it’s what they were taught in the past. Part of epistemology, however, is being able to accept new information and hold it, in conjunction with existing knowledge, as part of the truth. Instead, “highly partisan individuals and groups have their own facts,” and depending on what political group a person identifies with, perceived reality changes (Kahne & Bowyer, 2016). The result is that people feel they can simply disagree with experts and educators.

As far as governmental policy goes, a central tenet of democracy is the idea that elected officials make educated and informed judgments on behalf of their constituents. What currently interferes with this is the combination of money and special interests. Most politicians have no choice but to consider their campaign donors’ interests if they intend to keep their positions, which means that decisions are made with corporations in mind as well as the constituents they were elected to represent. Against this backdrop, when lawmakers consider taking action against disinformation, they have to decide what actually qualifies as disinformation, which involves making official judgments about what is true and accurate. A truly objective judgment would be driven solely by a desire for accuracy. Individuals are more likely to avoid motivated reasoning and biased logic when they are focused on accuracy above all else; people tend to process information more carefully, in more depth, and with more cognitive effort when they are aiming for accuracy (Kahne & Bowyer, 2016; Kunda, 1990). Considering the many objectives a policymaker must weigh in developing policy, the likelihood of complete accuracy is slim. The resulting difficulty is that the people with enough power to enact policy regarding disinformation are not in an objective position.

Scientific Perspective

Following the question of who gets to decide what counts as knowledge is the question of what counts as evidence. The approach for finding evidence and answering research questions has long been the scientific method. Current literature shows, however, that even science has been a victim of the partisan divide (Kraft, Lodge, & Taber, 2015). Beyond partisanship, personal beliefs and worldviews compete with scientific evidence as well. When scientific results and valid evidence conflict with someone’s beliefs, they are unlikely to defer to the research so long as they can readily rationalize their own point of view (Kraft, Lodge, & Taber, 2015). For example, if their religion provides them with a satisfying explanation, the likelihood that they will even consider the scientific evidence is slim, especially if they know the science will contradict the beliefs they hold.

As previously mentioned, governmental organizations and educational institutions have long been the trusted sources of information and knowledge. However, various industries have undermined this system. Corporations that have something to lose from the spread of research results have undermined the scientific process and worked to erode public trust (Reed et al., 2021). Real science involves “procedural scrutiny” to ensure the most effective, efficient, and accurate results are consistently being produced; corporations and other organizations with an interest in science denial have framed this constant inquiry as flip-flopping, inconsistency, and doubt (Reed et al., 2021). As intended, the public loses trust in the scientific fields and their intentions and abilities. This general mistrust means that people sometimes make questionable judgments because they disregard legitimate evidence. A qualitative study of the research methods of anti-vaccine mothers found that many of them searched specifically for information that would validate their point of view instead of searching for the best applicable information (Wharton-Michael & Wharton-Clark, 2019). One of the mothers even cited avoiding any .gov or .edu domains, which reveals a deep mistrust of the very organizations that should be the most trustworthy.

The most difficult part of combating these efforts is that there are people with legitimate credentials who have abused their positions to produce faulty research and evidence. Unfortunately, it’s not illogical to reject a field that has failed the public before, especially when there seem to be plenty of contradictions and disagreements among experts. These contradictions come from two places: corporations creating ‘industry-friendly’ research to support their profitability and legitimately questionable research practices (Pascale, 2019; Defranco & Voas, 2021). Examples of research manipulated by industry include the journals and scientific societies created by the tobacco industry to prove that smoking is safe (Proctor, 2011) and climate change denial research funded by the energy industry (Goldenberg, 2015). Even outside of industry influence, one study found that 33.7% of scientists surveyed admitted to questionable methods at least once in their careers (Defranco & Voas, 2021). This was only one study, and neither its scope nor its methodology was detailed, which leads me to suspect some bias in the study; nonetheless, people concerned or suspicious about scientific integrity are likely to find these results alarming.

Between intentional undermining and legitimate room for error, people’s doubts aren’t unfounded. The result is that many people who doubt official scientific reporting feel the need to take things into their own hands, whether with an online search or by conducting what they believe to be valid research. What’s interesting is that people aren’t rejecting scientific evidence itself; what they’re rejecting is the institutions that provide the evidence. For example, when Notre Dame caught fire, conspiracy theories circulated before the incident had even been resolved, including one claiming that Muslims had intentionally started the fire (Wikforss, 2020). To ‘prove’ the theory, people used torches and fences at home to show how difficult it is to start a fire, which, in their minds, proved that it could not have been accidental. While this doesn’t account for the limitations of their experiments as legitimate science would, it is an attempt at finding the answer scientifically. Most human knowledge comes from our own experiences, and it can be particularly challenging for people to accept evidence that is contrary to that experience (Wikforss, 2020). Part of what has allowed this approach to thrive is the current political environment.

In 2017, the presidential administration effectively banned seven words from the CDC’s official budget documents, two of which were “evidence-based” and “science-based” (Pascale, 2019; Sun & Eilperin, 2017). The idea that an organization focused on disease control and public health can’t use words involving evidence and science is absurd in and of itself, but the implications go beyond budget documents. The ban effectively takes legitimate justification off the table when two points of view meet. Without being able to argue that a position is evidence-based or science-based, it is forced to be ‘equal’ to the contrary position, which may have no basis in fact (Pascale, 2019). As Pascale (2019) points out, this means that religious views and so-called informed opinions are essentially equal to extensive scientific research. The problem with this is that truth is objective. If truth were subjective, then certainly an opinion might be a valid opposing viewpoint to a peer-reviewed research study. It’s entirely possible for science to be wrong, and that’s part of the inquiry process, but a fact doesn’t change just because an opinion contradicts it.

Ethical Perspective

With competing views of truth and reality, it becomes important to consider the motivations and interests that drive them. The perspectives contradicting a traditional epistemological viewpoint have been called post-truth, alt-fact, and fake news; essentially, the “controlling elite” can create their own version of reality when it serves their purpose (Berghel, 2021). The purpose driving those who spread disinformation, or alternate versions of reality, is usually rooted in money, power, or a specific ideology. Weaponized language, which includes disinformation, “involves four conditions: an elite that shapes a narrative, a barrage of slogans, a person or persons who serves as the charismatic face of the movement, and an insular approach to international relations” (Borowski, 2019; Pascale, 2019). The elites mentioned may include politicians seeking reelection and wanting to push their agenda, corporate stakeholders wanting to increase profits, and so on.

These disinformation campaigns effectively keep power in the hands of a select few by maintaining social unrest (Pascale, 2019). This is exactly what we saw from Russian disinformation campaigns in 2016 (Jalonick, 2018). Many people hear of Russian election interference and can’t imagine how foreign actors could have impacted the U.S. presidential election; they may be picturing tampering with voting machines or other seemingly unlikely tactics. In reality, Russian interference consisted largely of constant provocation of the American people, creating so much division and unrest that the election process began to lose any semblance of democratic civility. Had social media platforms been more effective in shutting down disinformation campaigns, the scope of interference might have been less significant.

While disinformation is hard to control simply because it’s hard to identify, it is further complicated by the fact that censoring disinformation is effectively making a claim about truth. Historically, powerful institutions shutting down dissent has been a tool of oppression. From an ethical perspective, hesitation to remove harmful content can be justified by an interest in freedom of expression. Also under ethical consideration, though, is the question of responsibility for public health and safety. Disinformation and hate speech are “rooted in narratives that shape public perception and make the horror of systematic atrocities possible” (Pascale, 2019). In the years before atrocities like genocides, there is almost always a stream of dehumanizing language toward the targeted group that normalizes hatred and desensitizes the general public. From this perspective, it appears incredibly important to prevent the spread of disinformation before it becomes real-world violence.

Whether a government or corporation leans toward allowing disinformation or invoking censorship, the outcome might be positive or negative depending almost entirely on the intentions behind the decision. Earlier this year, Brazilian President Jair Bolsonaro attempted to restrict social media platforms’ ability to remove content they had determined to be disinformation (Nemer, 2021). The attempt failed, but it brought his actions into the international spotlight. His content had been removed repeatedly from YouTube and Facebook for spreading disinformation about COVID-19, and although his decree didn’t become law, he encouraged his followers to move to platforms with end-to-end encryption, eliminating the possibility of censorship (Nemer, 2021). This apparent commitment to freedom of expression comes from a dangerous and corrupt agenda.

A different perspective with the same goal comes from coverage of Singapore (Lu, 2019). Lu also argues against banning disinformation, but her motivation is entirely different: any form of censorship, even in the name of public health and safety, sets a dangerous precedent in a country that already has a complicated relationship with censorship (Lu, 2019). It seems clear that governmental regulation is neither an easy solution nor one without risks. Lu concludes that democratic societies will continue to struggle to hold platforms accountable for spreading disinformation without compromising civil liberties.

The platforms themselves also struggle with this complicated issue. Revelations from the Facebook whistleblower have highlighted the shortcomings of corporations trying to deal with disinformation. Prioritizing profit while proving unable to do anything significant about disinformation paints a bleak picture when it comes from one of the largest internet conglomerates. The CEO of YouTube presents a more optimistic point of view in an opinion piece she wrote for the Wall Street Journal. She asserts that it is possible for free speech and corporate responsibility to coexist, but she also acknowledges that companies, civil society, and governments face the difficult decision of “where to draw the lines on speech” (Wojcicki, 2021). Clearly, the desire to curb disinformation without inciting censorship is present, but the problem is too complicated to have been resolved yet.

Application

All of this information still leaves us with a major problem and no solution. How can we mediate disinformation without unjustly imposing censorship? As Wojcicki calls for, we must find an acceptable balance between the two. The first step in regulation needs to be a determination of ‘who’. Who can be trusted to decide what information is true or false, ethical or unethical, harmful or necessary?

Some governments have attempted to take on this role. As discussed, Singapore has introduced legislation to prevent disinformation. The EU has introduced extensive regulation for ‘big tech’ to curb their power and monopolistic behavior (Amaro, 2020). The United States also has multiple pieces of proposed legislation regarding online information regulation, none of which have yet become law. In Brazil, Bolsonaro attempted to outlaw this kind of regulation. Given the international nature of social media platforms and the internet in general, it seems necessary and appropriate to make these decisions on a global scale. Felten and Nelson (2019) recommend “increased international collaboration” to “improve best practices.” They also call for policymakers to hold platforms accountable, specifically in terms of fact-checking and making sure platforms base decisions on advice from top experts and up-to-date information. Susan Wojcicki, CEO of YouTube, calls for similar collaboration in her recommendation that governments provide companies with the guidelines to work effectively. While this implies leaving legislation at the national level, she also argues that these laws should work from the same principles, balancing free expression with harm reduction, and therefore should not vary extensively across countries (Wojcicki, 2021). A more direct suggestion calls for measures to uphold and enforce scientific integrity, transparency from lawmakers, and accountability for those who interfere and undermine (Reed et al., 2021).

International nongovernmental organizations (NGOs) have also attempted to contribute solutions. The Global Alliance for Responsible Media attempts to use the power that advertisers have over publishers and platforms to “[push] to improve the safety of online environments” (Montgomery & Li, n.d.). While this is an interesting and probably effective technique, the project seems to be in its early stages; its web page doesn’t provide any information about progress thus far or the standards it holds for online safety. Another NGO, the Global Internet Forum to Counter Terrorism, is also making attempts in the direction of regulation. This organization is funded by Google, Microsoft, and Twitter, and is also partnered with Facebook and utilized by YouTube. It provides a variety of resources, like a campaign toolkit, and works toward developing and distributing solutions specifically for companies. The goal is to prevent exploitation of platforms while protecting human rights (Global Internet Forum to Counter Terrorism, n.d.).

Platforms have attempted their own regulation as well. YouTube, previously cited for its optimistic outlook on finding such a balance, is already quick to take action when content seems to violate community guidelines, but its hope is that governments will create clearer guidelines while leaving platforms the flexibility to act when necessary (Wojcicki, 2021). The potential problem with leaving moderation up to corporations is that it’s hard to imagine a situation in which a corporation truly places the best interests of society ahead of profits. The justification cited has been protecting free speech as a platform, but insiders have revealed that corporate interest has been prioritized over public safety even after companies’ own research revealed shortcomings (Vaidhyanathan, 2021; Allyn, 2021). On platforms like WhatsApp, end-to-end encryption means that content moderation is effectively impossible. With such variation across platforms, it seems unlikely that they’ll be a reliable authority in the matter.

The preceding discussion leaves us with some questions to consider and act on moving forward. First, how can we reliably and ethically come to an international standard of platform regulation that addresses disinformation without compromising freedom of expression? Once this standard is established, the follow-up question asks what authority will maintain and enforce the standard, with possible answers to consider being national governments, an existing international organization, or perhaps a global group specializing in civil liberties in an online, globalized context. Finally, from an ethical perspective, the question remains what authorities would have the power to interfere if the originally designated entity is displaying signs of corruption.

Bibliography

Allyn, B. (2021, October 06). Here are 4 key points from the Facebook whistleblower’s testimony on Capitol Hill. Retrieved from https://www.npr.org/2021/10/05/1043377310/facebook-whistleblower-frances-haugen-congress

Alvino Young, V. (2020). Nearly half of the Twitter accounts discussing ‘Reopening America’ may be bots. Carnegie Mellon University. Retrieved 12 December 2021 from https://www.cmu.edu/news/stories/archives/2020/may/twitter-bot-campaign.html

Amaro, S. (2020, December 09). CNBC: Why the EU is getting tough on Big Tech. Retrieved from https://www.youtube.com/watch?v=l4T3oLlTHyE

American Civil Liberties Union. (n.d.). What Is Censorship? Retrieved from https://www.aclu.org/other/what-censorship

Berghel, H. (2021). The Online Disinformation Opera. Computer, 54(12), 109–115. doi:10.1109/mc.2021.3107005

Borowski, A. (2019, May 16). Weaponized language. Retrieved from https://www.koreatimes.co.kr/www/opinion/2019/05/162_268778.html

Buckley, C., & Mozur, P. (2017, November 21). China’s Flashy Ex-Internet Censor Faces Corruption Investigation. Retrieved from https://www.nytimes.com/2017/11/21/world/asia/china-internet-censorship-lu-wei-corruption.html

Carter, D. (2014). Weapons of disinformation. Index on Censorship, 43, 41–44.

Cavna, M. (2021, October 24). Pittsburgh Post-Gazette fires anti-Trump cartoonist, and mayor says it sends ‘wrong message about press freedoms’. Retrieved from https://www.washingtonpost.com/news/comic-riffs/wp/2018/06/14/pittsburgh-post-gazette-fires-anti-trump-cartoonist-and-mayor-says-it-sends-wrong-message-about-press-freedoms/

The Center for Countering Digital Hate. (2021). The disinformation dozen. Retrieved 12 December 2021 from https://www.counterhate.com/disinformationdozen

Defranco, J. F., & Voas, J. (2021). Reproducibility, Fabrication, and Falsification. Computer, 54(12), 24–26. doi:10.1109/mc.2021.3055926

Dupuy, B. (2020). Reno doctor’s selfie hijacked to imply COVID is a hoax. The Mercury News. Retrieved 12 December 2021 from https://www.mercurynews.com/2020/12/02/reno-doctors-selfie-hijacked-to-imply-covid-is-a-hoax

Felten, C., & Nelson, A. (2019). Countering misinformation with lessons from public health. Center for Strategic & International Studies. Retrieved 13 December 2021 from https://www.csis.org/countering-misinformation-lessons-public-health

Facts, Post-Truth and Epistemology [Video file]. (2020, November 4). Retrieved from https://www.youtube.com/watch?v=m_-sqPBrFik

Gentzkow, M., & Shapiro, J. (2006). Media Bias and Reputation. Journal of Political Economy, 114(2), 280–316. doi:10.1086/499414

Global Internet Forum to Counter Terrorism. (n.d.). Tech Innovation. Retrieved from https://gifct.org/tech-innovation/

Goldenberg, S. (2015, February 21). Work of prominent climate change denier was funded by energy industry. Retrieved from https://www.theguardian.com/environment/2015/feb/21/climate-change-denier-willie-soon-funded-energy-industry

Griffith, J. (2019, July 01). Canadian artist fired after viral Trump cartoon. Retrieved from https://www.nbcnews.com/news/world/canadian-artist-fired-after-viral-trump-cartoon-n1025071

Jalonick, M. C. (2018, May 10). Facebook ads show Russian effort to stoke political division. Retrieved from https://apnews.com/article/immigration-north-america-donald-trump-us-news-ap-top-news-dcbc0859dd324a55a87422b78ef1c362

Kahne, J., & Bowyer, B. (2016). Educating for Democracy in a Partisan Age. American Educational Research Journal, 54(1), 3–34. doi:10.3102/0002831216679817

Kraft, P. W., Lodge, M., & Taber, C. S. (2015). Why People “Don’t Trust the Evidence”. The ANNALS of the American Academy of Political and Social Science, 658(1), 121–133. doi:10.1177/0002716214554758

Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480. doi:10.1037/0033-2909.108.3.480

Loewen, J. W., & Stefoff, R. W. (2019). Lies my teacher told me: Everything American history textbooks get wrong. New York: The New Press.

Longhi, J. (2021). Mapping information and identifying disinformation based on digital humanities methods: From accuracy to plasticity. Digital Scholarship in the Humanities, 36(4), 980–998. doi:10.1093/llc/fqab005

Lu, D. (2019). Don’t ban fake news. New Scientist, 242(3232), 23. doi:10.1016/s0262-4079(19)30975-3

Montgomery, J. C., & Li, F. L. (n.d.). Global Alliance for Responsible Media (GARM). Retrieved from https://www.weforum.org/projects/global-alliance-for-responsible-media-garm

Mullainathan, S., & Shleifer, A. (2005). The Market for News. American Economic Review, 95(4), 1031–1053. doi:10.1257/0002828054825619

Nemer, D. (2021). Disentangling Brazil’s Disinformation Insurgency. NACLA Report on the Americas, 53(4), 406–413. doi:10.1080/10714839.2021.2000769

Newth, M. (2010). The Long History of Censorship. Retrieved December 12, 2021, from http://www.beaconforfreedom.org/liste.html?tid=415&art_id=475

Neylan, J. H., Patel, S. S., & Erickson, T. B. (2021). Strategies to counter disinformation for healthcare practitioners and policymakers. World Medical & Health Policy. doi:10.1002/wmh3.487

Noack, R. (2019). The Pulitzers will honor journalists working in one of the most dangerous countries for them: The United States. The Washington Post.

Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32(2), 303–330.

Pascale, C. (2019). The weaponization of language: Discourses of rising right-wing authoritarianism. Current Sociology, 67(6), 898–917. doi:10.1177/0011392119869963

Proctor, R. (2011). Golden Holocaust: Origins of the Cigarette Catastrophe and the Case for Abolition. Berkeley: University of California Press.

Reed, G., Hendlin, Y., Desikan, A., Mackinney, T., Berman, E., & Goldman, G. T. (2021). The disinformation playbook: How industry manipulates the science-policy process — and how to restore scientific integrity. Journal of Public Health Policy, 42(4), 622–634. doi:10.1057/s41271-021-00318-6

Rutenberg, J. (2017, August 24). Trump Takes Aim at the Press, With a Flamethrower. Retrieved from https://www.nytimes.com/2017/08/23/business/media/trump-takes-aim-at-the-press-with-a-flamethrower.html

Shu, K., Wang, S., & Liu, H. (2019). Beyond news contents: The role of social context for fake news detection. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining.

Sun, L. H., & Eilperin, J. (2017, December 15). CDC gets list of forbidden words: Fetus, transgender, diversity. Retrieved from https://www.washingtonpost.com/national/health-science/cdc-gets-list-of-forbidden-words-fetus-transgender-diversity/2017/12/15/f503837a-e1cf-11e7-89e8-edec16379010_story.html

Vaidhyanathan, S. (2021, July 02). What If Regulating Facebook Fails? Retrieved from https://www.wired.com/story/what-if-regulating-facebook-fails/

Vick, K. (n.d.). TIME Person of the Year 2018: The Guardians. Retrieved from https://time.com/person-of-the-year-2018-the-guardians/

Wharton-Michael, P., & Wharton-Clark, A. (2019). What is in a Google search? A qualitative examination of non-vaxxers’ online search practices. Qualitative Research Reports in Communication, 21(1), 10–20. doi:10.1080/17459435.2019.1680572

Wojcicki, S. (2021, August 01). Free Speech and Corporate Responsibility Can Coexist Online. Retrieved from https://www.wsj.com/articles/free-speech-youtube-section-230-censorship-content-moderation-susan-wojcicki-social-media-11627845973

Yang, J., & Tian, Y. (2021). “Others are more vulnerable to fake news than I Am”: Third-person effect of COVID-19 fake news on social media users. Computers in Human Behavior, 125, 106950. doi:10.1016/j.chb.2021.106950
