InfoTester: Researching Disinformation Methodology Report

Published in Crossing Domains, the PJAIT blog · Jun 22, 2022

The project “InfoTester — Development and verification of original methods of vertical artificial intelligence for automatic and precise detection of disinformation” is carried out as part of the Infostrateg competition of the National Center for Research and Development by the Medical University of Bialystok, together with Science4People and the Polish-Japanese Academy of Information Technology.
As part of the project, the INFOTESTER crowdsourcing platform will be created; around it, a community of Polish fact-checkers involved in countering disinformation in various areas of social life will develop.

Introduction

The purpose of this report is to describe the working methodology of the disinformation analysts involved in the InfoTester project. Developing this methodology was the first necessary step in implementing the project. It was developed by the analysts employed in the project, drawing on their expert experience, the scientific knowledge available on the subject, and the experience of other institutions and organisations involved in researching and detecting disinformation.

The analysis carried out by InfoTester project experts is at all times based on a transparent methodology. In addition, all expert work is carried out in the spirit of the principles set out by the International Fact-Checking Network (IFCN), i.e.:

  • Impartiality and fairness — organisations treat every political option equally. They follow the same rules regardless of whose statement they verify and, importantly, do not take a position on the issues they check.
  • Transparency of sources — organisations disclose the data and documents on which they base their analyses, so that readers can follow the verification process and form their own opinion, except in situations where the safety of an information source could be compromised.
  • Transparency of funding and organisational structure — organisations describe the professional background of all key individuals, their structure, and their legal status. They also disclose their sources of financing, and potential sponsors cannot influence the results of the analyses. It is also important to state clearly how readers can contact the organisation.
  • Transparency of methodology — organisations explain on what basis they select materials for analysis. They describe the methodology they use to conduct research, write texts and make corrections. They also encourage readers to report information that is worth verifying.
  • Commitment to open and honest corrections — organisations publish their corrections policy. They correct their work in a transparent and clear manner and ensure that readers see the most up-to-date version of the analysis.

Main Assumptions of the Methodology

The aim of creating a uniform methodology is to guarantee the consistency of the assessments made by analysts and to minimise their subjectivity. The adoption of clear criteria and the graded (granular) nature of the assessment also allow for a detailed description of the collected data, which in turn enables in-depth analysis. The methodology is intended to allow an analyst not only to correctly indicate whether given content is disinformative, [1] but also to describe that content in a way that gathers as much knowledge as possible about disseminated disinformation, for example by identifying common features of disinformative articles. The methodology is thus meant to enable qualitative content analysis across a set of articles. [2]

The analysis of articles is carried out mainly with the debunking technique, with auxiliary use of the fact-checking technique. For the purposes of this methodology, these terms are defined in a manner analogous to the methodology developed for the NATO Strategic Communication Centre of Excellence. [3] Fact-checking is the long-standing process of checking that all facts in a piece of writing, news article, speech, etc. are correct. Debunking refers to the process of exposing falseness or manipulation in a systematic and strategic manner (based on a chosen topic, a classification of selected techniques, a narrative, an intention, or the like). The overall goal is to minimise the impact of potentially harmful content by classifying it and identifying existing trends. The main goals of debunking are not only establishing the truth, but also cataloguing evidence of false information, exposing false information and conspiracies, and finding the sources of disinformation.

The methodology will be used to evaluate an unknown set of articles coming from credible as well as mixed/biased and potentially unreliable sources. Its basic assumption is, therefore, adaptability to the content identified as relevant and the possibility of excluding irrelevant content. The goal of the project is to analyse a minimum of 15,000 articles (5,000 containing disinformation and 10,000 containing reliable information). Selected articles will then be classified into 10 topics. Only articles classified under one of the selected topics will be analysed for disinformation.

The analysis consists of three phases of content assessment: overall assessment (topic definition, source analysis, author analysis, initial content analysis); detailed content analysis (with distinguishing the type of disinformation and eristic techniques used or fact-checking description); and the assessment of the author’s intentions and motivations.
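To make the structure of the assessment easier to picture, the record produced for each analysed article can be sketched as a simple data structure. The snippet below is only an illustrative sketch with hypothetical field names; it is not the actual data model of the INFOTESTER platform.

```python
# A minimal, hypothetical sketch of a single article assessment
# across the three phases described in this methodology.
from dataclasses import dataclass, field
from typing import List, Optional, Set


@dataclass
class ArticleAssessment:
    # Phase 1: overall assessment
    thematic_category: Optional[str] = None   # one of the 10 topics or "Not related"
    source_type: Optional[str] = None         # reliable / unreliable / mixed-biased
    author_type: Optional[str] = None         # anonymous / unknown / known
    author_name: Optional[str] = None         # filled in for known/unknown authors
    content_label: Optional[str] = None       # credible / misinformation / disinformation / hard to say

    # Phase 2: detailed content analysis (only for mis-/disinformation)
    disinfo_types: Set[str] = field(default_factory=set)       # falseness, manipulation, trolling/satire, conspiracy
    eristic_techniques: Set[str] = field(default_factory=set)  # cherry picking, quote mining, ...
    fact_check_notes: List[str] = field(default_factory=list)  # refuting comments and sources

    # Phase 3: motivation, intention, evoked emotions
    motivations: Set[str] = field(default_factory=set)
    intentions: Set[str] = field(default_factory=set)
    emotions: Set[str] = field(default_factory=set)
    comment: str = ""
```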

Step 1: Overall Assessment

I. Thematic category (exhaustive list, single choice)

The assessment of an article begins with an initial content analysis during which its topic is determined and the article is classified (single choice) into one of 10 exhaustive thematic categories. The thematic categories were defined a priori. The selection of topics resulted from a prior analysis of the work of fact-checking and debunking organisations, such as Snopes, the “Counteracting Disinformation” Foundation, the Demagog Association, and Debunk EU. It should also be noted that the pre-defined list of topics may be modified, depending on new disinformation trends identified during the study.

If the content does not correspond to any of the defined categories, the option “Not related to the topic” is selected. The selection of this category ends the assessment of the article, and its content is not included in the study results.

  1. COVID-19
  2. Migrations
  3. LGBT+
  4. Climate crisis
  5. 5G
  6. GMO
  7. World War II history
  8. Energy sector
  9. Pseudomedicine
  10. Women’s rights (sex education, contraception, IVF, abortion)
  11. Not related to the topic
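For illustration only, the single-choice thematic classification and its terminating option could be expressed as an enumeration such as the one below. The identifiers are hypothetical and do not come from the project.

```python
from enum import Enum


class ThematicCategory(Enum):
    COVID_19 = 1
    MIGRATIONS = 2
    LGBT_PLUS = 3
    CLIMATE_CRISIS = 4
    FIVE_G = 5
    GMO = 6
    WWII_HISTORY = 7
    ENERGY_SECTOR = 8
    PSEUDOMEDICINE = 9
    WOMENS_RIGHTS = 10
    NOT_RELATED = 11  # selecting this category ends the assessment


def continue_assessment(category: ThematicCategory) -> bool:
    """Only articles matching one of the ten topics are analysed further."""
    return category is not ThematicCategory.NOT_RELATED
```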

II. Source Analysis (exhaustive list, single choice)

The subsequent element of the initial assessment is the analysis of the source, i.e. the domain on which the content has been posted. Assessing the credibility of a website requires an in-depth analysis of the content posted on it regularly, as well as checking it against reliable sources, including the Media Bias/Fact Check search engine. [4] The analysis consists of selecting the category that best fits a given domain:

  1. Reliable — sources that are reliable/publishing reliable content on a specific topic, in particular traditional news portals.
  2. Unreliable — sources publishing unreliable content, typically disinformation, e.g. all domains financed by the Kremlin, sites containing conspiracy theories, etc.
  3. Mixed/biased — partially or potentially biased websites that may present false information on specific issues, e.g. typically far-right websites and blog collections.

III. Author Analysis (exhaustive list, single choice)

Verification of the message sender is crucial in determining its credibility, as disinformation is largely disseminated by anonymous authors or people using fictitious profiles. The goal of the next step is to determine whether the content has a verifiable author, by choosing one of three options:

  1. Anonymous — there is no information enabling identification of the author (e.g. no signature, content written under a pseudonym, or unsigned editorial pieces).
  2. Unknown author — personal data provided does not allow for a broader identification of the person or the data may be potentially fabricated (e.g. people who cannot be found, such as Jan Nowak publishing on https://dziennik-polityczny.com/).
  3. Known author — content signed with personal data enabling a wider identification of the author.

After selecting the known/unknown author option, the analyst enters the available data, in particular name and surname, in a separate field.

IV. Content Analysis (exhaustive list, single choice)

The next step requires analysing the entire content of the article and recognizing whether the information is true or of disinformative nature. If the article provides only factual information, it is marked as “reliable information”. Selecting this category ends the assessment of the article.

In a situation where the information contained in the article is unreliable and misleads recipients, the analyst determines whether the content was disseminated with the intention of misleading or causing harm. Intention is the critical factor in distinguishing disinformation from misinformation. When a person provides untrue information with no intention of causing harm, or without even knowing that it is false, that activity is referred to as misinformation. It should be emphasised, however, that even the unintentional dissemination of false information, without the goal of manipulating recipients, can fuel disinformation. Disinformation is particularly difficult to detect, as the author’s intention is usually not stated and in most cases can only be presumed. In the case of articles published by reliable sources of information and credible authors (recognisable journalists), the analyst assumes that the given content is of a misinformative nature and that there is no intention to cause harm or spread falseness.

Overall, most definitions of disinformation combine four elements: (i) type of information; (ii) falseness of information; (iii) intention of the author; and (iv) consequences of disseminating information, including the personal (e.g. recipient views) and social (e.g. disruption of democratic processes) effects. [5]

For the purposes of this report, the definition of disinformation provided by the European Commission’s High Level Expert Group on Fake News and Online Disinformation (HLEG) will be used, as it covers all four aspects and does not exclude potentially harmful content presented in the form of political advertising or satire, as presented in the EU Code of Practice. The definition is as follows: “All forms of false, inaccurate, or misleading information designed, presented, and promoted to intentionally cause public harm or for profit.”[6] A necessary supplement to this definition is the European Union Code of Practice on Disinformation, according to which disinformation is defined as: “verifiable false or misleading information which, cumulatively, (a) is created, presented and disseminated for economic gain or to intentionally deceive the public; and (b) may cause public harm, intended as threats to democratic political and policy-making processes as well as public goods such as the protection of EU citizens’ health, the environment or security.”[7] The detected information must be verifiable, which means that it can be proved untrue; it therefore cannot be, for example, an as yet unproven theory or an opinion, as long as it is not intended to mislead recipients.

In a situation where given content cannot be verified as reliable, disinformative, or misinformative, it is marked with the “Hard to say” category. Indicating this category ends the assessment.

Content assessment categories:

  1. Credible information
  2. Misinformation — unintentionally misleading recipients by providing misleading or false information
  3. Disinformation — intentionally misleading recipients by providing misleading or false information
  4. Hard to say
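The branching described above can be summarised in a small gating function: credible and unverifiable content ends the assessment, while misinformation and disinformation proceed to Step 2. This is a sketch based on the rules stated in this section, not project code.

```python
from enum import Enum


class ContentLabel(Enum):
    CREDIBLE = "credible information"
    MISINFORMATION = "misinformation"
    DISINFORMATION = "disinformation"
    HARD_TO_SAY = "hard to say"


def needs_detailed_analysis(label: ContentLabel) -> bool:
    """Only misleading content (mis- or disinformation) continues to Step 2;
    credible and unverifiable content ends the assessment here."""
    return label in {ContentLabel.MISINFORMATION, ContentLabel.DISINFORMATION}
```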

Step 2: Detailed Content Analysis

Only articles flagged as misleading (categories of disinformation and misinformation) are subject to further analysis. Detailed content analysis is based on the study of the article text, both in terms of manipulation techniques used and in terms of straightforward false statements (fabrication). Various types of verification are required depending on the type of disinformation identified.

V. Classification within the disinformation/misinformation group (exhaustive list, multiple choice)

The nature of the disinformation disseminated can vary significantly and requires different kinds of analysis. Therefore, the first step of the detailed content analysis is to classify content into one or more of four categories. These categories are not disjoint, as both manipulation and falseness/fabrication may be found in a single disinformative article. The analyst can therefore select several categories at once.

  1. Falseness/fabrication — all explicitly false content and articles containing fabricated documents/photos/quotes and the like.

Articles classified in this group require a fact-checking comment. The analyst’s task is to identify explicitly false (such as lies) or fabricated content (so-called false evidence of disinformation), and then to find arguments and sources refuting the false statement. Selecting this option opens a comment window in which the analyst must, each time, indicate the part of the text containing the falseness/fabrication and attach a fact-checking comment and/or source refuting the given content (in particular scientific sources and articles from reliable organisations dealing with the study of disinformation that have carried out a truthfulness analysis). When citing articles from independent NGOs, the analyst must pay special attention to the organisation’s affiliation with the International Fact-Checking Network (IFCN).

Moreover, each individual analysis of the credibility of content should begin with an attempt to find the source of the given information; if the information is not found directly, analogous or similar content should be searched for. Potentially false information should also be confronted with available information from reliable sources, in particular public institutions and scientific sources.

2. Manipulation — bending or distorting facts in order to prove one’s point or to influence someone else’s views and behaviour.

Manipulative articles are often not explicitly false but they contain incomplete statements, used in an inappropriate context, selected information, etc. In the further part of the analysis, it is necessary to indicate the appropriate manipulation technique, which will be described below.

3. Trolling/satire — posting controversial, often untrue content, showing certain phenomena in a caricatured, playful manner.

Disinformation articles including trolling/satire can be based on both falseness and manipulation. After selecting this option, an analysis adequate to the type of disinformation should be carried out.

4. Conspiracy theory — proposed explanation of an event assuming significant participation of a group of conspirators trying to hide the truth from the public. To exist, a conspiracy theory needs an official version of a given event, which the conspiracy theory refutes by proposing a true version in its place. [8] Any evidence that disproves a conspiracy theory can be interpreted as further evidence of the conspiracy's existence (self-sealing).

Understanding the purpose and motives behind the creation and dissemination of conspiracy theories is of key importance for their identification. To this end, the features of conspiratorial thinking identified by S. Lewandowsky and J. Cook serve as signposts for the analyst: [9]

  • something must be wrong — secret plan, conspiracy, fraud;
  • immune to evidence — evidence against the conspiracy theory comes from a conspiracy;
  • overriding suspicion — extreme degree of suspicion makes it impossible to believe anything that does not fit the conspiracy theory;
  • contradictory — the belief of conspiracy theory followers is absolute; it does not matter to them that their belief system is internally inconsistent;
  • persecuted victim — conspiracy theorists see and present themselves as victims of organised persecution. At the same time, they see themselves as brave antagonists taking on the villainous conspirators. Conspiratorial thinking involves a self-perception of simultaneously being a victim and a hero;
  • nefarious intent — the motivations behind any presumed conspiracy are invariably assumed to be nefarious. Conspiracy theories never propose that the presumed conspirators have benign motivations;
  • re-interpreting randomness — the overriding suspicion found in conspiratorial thinking frequently results in the belief that nothing occurs by accident. Small random events are reinterpreted as caused by the conspiracy and are woven into a wider, interrelated pattern.

A preliminary analysis of the motivations of potential conspiracy theory creators also supports their identification. At this point, the analysis should be based on the assumption that a conspiracy theory serves an explanatory function. The purpose of such a mental construction is therefore, among other things: an easy explanation of complicated processes and events, a simple answer, regaining a sense of control and security, mythologising the causes and background of events, or pursuing political change as a strategic and programmatic concept.

Choosing this category exempts the analyst from evaluating the eristic techniques used. This is because conspiracy theories use a wide range of eristic techniques, including appeal to emotion, exaggeration, cherry picking, oversimplification, post hoc ergo propter hoc, anecdote, quote mining, ambiguity, strawman, argumentum ad populum, false analogy, false dilemma, and false experts, and identifying them all would be excessively time-consuming.

VI. Eristic techniques used for disinformation (exhaustive list, multiple choice)

Each time manipulative content is analysed, it is necessary to indicate what types of manipulation (eristic techniques) were used by the author. In most cases, several techniques are used in a single disinformative article, and the role of the analyst is to identify them all. For the purposes of this methodology, a set of the eleven most common eristic techniques used in manipulation was developed. If a technique not on the list is identified, it should be indicated in the “Comments” field at the end of the analysis. A sketch of how such a multiple-choice selection could be recorded follows the list. The list of selected techniques includes:

  1. Cherry Picking — presenting information using only data supporting a given thesis, while ignoring the wider context.
  2. Quote Mining — using a short fragment of someone’s longer speech in a way that significantly distorts its real, original tone.
  3. Anecdote — the use of evidence in the form of personal experience or an isolated case, possibly rumour or hearsay, most often to discredit statistics.
  4. Whataboutism — responding to a substantive argument not by addressing the heart of the matter, but by raising a new point that is unrelated to the topic under discussion. Often referred to as tossing a false lead to distract attention from the topic (Red Herring). Technique typical of Russian propaganda.
  5. Strawman — distorting someone else’s argument in a way that makes it easier to refute, usually by assigning the opponent a position they do not hold at all.
  6. Leading Questions — flooding the recipient with a series of suggestive questions, or stringing them together so as to lead the recipient to a predetermined thesis.
  7. Appeal to Emotion — the use of words and phrases that are to arouse in the recipient extreme emotion and attitude to the presented matter.
  8. False Cause — assuming a cause-and-effect relationship solely on the basis of the observed correlation.
  9. Exaggeration — overstating or oversimplifying a phenomenon, e.g. using slippery slope, magnified minority, or blowfish.
  10. Reference Error — referring to fake experts, propaganda statements by politicians, anonymous entries on social media, or false quotes from famous people in order to lend credibility to the presented thesis. Also, presenting a false choice or false analogies.
  11. Misleading Clickbait — giving the text a title that does not reflect the information presented in the article and often even contradicts it.
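Because several techniques usually co-occur in a single article, the multiple-choice selection is naturally represented as a set rather than a single value, with techniques missing from the list recorded in the comments. The sketch below is hypothetical and only mirrors the list above.

```python
from enum import Enum
from typing import Set


class EristicTechnique(Enum):
    CHERRY_PICKING = "cherry picking"
    QUOTE_MINING = "quote mining"
    ANECDOTE = "anecdote"
    WHATABOUTISM = "whataboutism"
    STRAWMAN = "strawman"
    LEADING_QUESTIONS = "leading questions"
    APPEAL_TO_EMOTION = "appeal to emotion"
    FALSE_CAUSE = "false cause"
    EXAGGERATION = "exaggeration"
    REFERENCE_ERROR = "reference error"
    MISLEADING_CLICKBAIT = "misleading clickbait"


def record_techniques(found: Set[EristicTechnique], other_note: str = "") -> dict:
    """Store every identified technique; techniques not on the list are
    described in the comments field instead."""
    return {"techniques": sorted(t.value for t in found), "comments": other_note}
```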

Step 3: Interpretation of Motivation and Intention

The study of the motivation and intentions of the disinformation content authors is potentially the most subjective element of the analysis, and therefore it is particularly important to develop precise components of the assessment. This will allow for maintaining uniformity of the analysis carried out by different experts.

VII. Motivation (exhaustive list, multiple choice)

The purpose of marking the probable motivation of the author during the analysis is to allow the disinformation phenomenon to be examined more broadly, and in particular to attempt to identify what guides the authors of unreliable content. Motivation also helps determine what kind of disinformation we are dealing with.

  1. Internal political (left/right/liberal)

Internal political motivations are characterised by content strongly slanted against one political option. They can be directed both against a political minority (the opposition) and against the majority (currently in power). Typical of this type of motivation is a clear identification of the author’s own political camp (the author’s affiliation or views), which makes it possible to indicate the motivation. In addition, it should be noted that disinformative content is more often promoted on websites promoting so-called far-right views. [10]

2. External political (acting in favour of external political forces, e.g. Russia)

External political motivations are usually identified by the nature of the website/domain that publishes the content. In the Polish information space, it is typical to promote this type of content through websites financed directly or indirectly by Russia or Russian entities (platforms such as sputniknews.com or dziennik-polityczny.pl). An additional indication of this motivation is the promotion of content that supports the demands of foreign countries and may at the same time harm Polish state interests. Any kind of propaganda that weakens international alliances (such as the EU or NATO), or that ridicules or undermines a country’s defense capabilities, usually has an external political motivation. Typical of this type of disinformation is its promotion by authors who have connections with foreign countries or who have been accused of collaborating with them in the past (e.g. Konrad Rękas or Mateusz Piskorski). Moreover, such disinformation is spread under false pseudonyms by unknown authors (e.g. Jan Nowak publishing at dziennik-polityczny.pl, or Antony Ivanovitz publishing on blogging portals).

3. Economic/financial

Economic/financial motivation is identified through contextual analysis. Characteristic of this type of disinformation is promoting specific products to recipients (e.g. encouraging the purchase of a specific supplement or drug under an article that misrepresents its effectiveness) or encouraging contributions to the activities of an organisation/author disseminating disinformative content (a practice found not only on social media but also on websites, e.g. legaartis.pl).

4. Gaining popularity

Some authors disseminate false content in order to gain popularity. This motivation can occur both in the case of misinformation (e.g. a popular portal that accidentally provides false information, wanting to be the first to report an event without checking its truthfulness) and disinformation (usually through the social media accounts of “influencers”). It is often related to another motivation, e.g. an economic/financial one. It can also be recognised by the fact that false content is published under the author’s own name (e.g. websites of doctors who promote alternative treatment methods). False information is also sometimes disseminated for this reason on Internet forums and blogospheres, in order to gain a certain position within a given environment. This motivation usually does not occur when an article is published anonymously.

5. Social (activity for the benefit of specific social movements)

Social motivation is extremely important in the case of disinformation because it is based on forming groups and opposition between them, creating an “us/them” division (e.g. “us” who do not believe in vaccinations vs. “them” who vaccinate themselves and their children). This motivation often comes down to antagonising society, confirming or reinforcing the erroneous views of a given group, or falsely refuting the arguments of opponents. Its aim is also to strengthen faith in the false beliefs of one social group.

6. Chosen one/saviour complex (ego building, willingness to make people aware of the ‘truth’) (disinformation cannot be based solely on this motivation, its main application is misinformation)

Motivation based on the so-called chosen one complex is typical of authors of conspiracy theories. However, it also occurs when an author promoting false content has a recognisable group of recipients who are convinced of the truthfulness of that person’s statements. Usually, it involves asserting the existence of a (false) conspiracy, i.e. accusing public institutions or scientists of concealing the truth.

7. Fear (disinformation cannot be based solely on this motivation, its main use is misinformation)

False information that is disseminated out of fear should usually be categorised as misinformation. The author believes the false information they transmit and disseminates it to warn other recipients. Some conspiracy theories are also spread out of fear (the authors’ concerns about the existence of global conspiracies).

8. Need for simple explanations (disinformation cannot be based solely on this motivation, its main use is misinformation)

As in the case of fear, false information disseminated due to the author’s internal need for simple explanations suggests that the content is of a misinformative nature. This motivation is also typical of conspiracy theorists (the existence of a global conspiracy explains various social upheavals and complicated problems). However, it can also be found in disinformation, where an author or domain promoting extreme views simplifies the message so as to lead to generalisation.

VIII. Intention (exhaustive list with the possibility of modification, multiple choice)

Intention, within this methodology, is understood as what the author wants to make the audience believe through the false message. It is more than the narrative promoted through specific content, as it defines the author’s broader aim. The purpose of false messages is not only to persuade recipients to believe one specific message (e.g. that COVID-19 vaccines contain dangerous graphene) but to evoke a broader response (e.g. denying the safety of vaccinations). Classifying fake content by the author’s purpose/intention makes it possible to group articles in a way that reveals trends in disinformation and, as a result, to understand what content is promoted, how, and by whom within specific disinformation trends. The selection of intentions was established a priori, but it is modified in the course of the analysts’ work, as it must be adapted to specific topics. However, the list is not directly open (a single analyst cannot add intentions); all modifications result from teamwork, after more than one person identifies the need to refine or add an intention in connection with a newly identified trend. In most cases, intentions have sub-intentions (in the form of multiple-choice drop-down lists) that refine the identified intention/message; one possible representation of this structure is sketched after the lists below.

  1. Undermining the credibility of public institutions
  • Undermining the methods of counteracting the pandemic (quarantine, etc.)
  • Alleging violations of human rights
  • Alleging manipulation of disease incidence statistics
  • Alleging manipulation of death statistics
  • Negating defense capabilities
  • Alleging the spread of Nazism by Ukraine
  • Undermining the methods of counteracting migration crisis
  • Undermining methods of counteracting climate change

2. Changing electoral convictions

3. Undermining the international position of the state (e.g. allegations of interfering in internal affairs of Belarus)

4. Undermining an international organisation/its decisions (EU, WHO, UN, NATO)

  • Blaming NATO/UN/EU for conflicts in the region
  • Accusing of aggression against Russia and its allies
  • Undermining defense capabilities
  • Alleging breaches of international treaties
  • Discrediting international climate agreements

5. Presenting a new version of ‘historical truth’

6. Weakening of international alliances (Poland-NATO, Poland-Ukraine, etc.)

7. Promoting social stereotypes/antagonisms

  • Enhancing homophobia
  • Enhancing transphobia
  • Enhancing xenophobia
  • Enhancing xenophobia in connection with the economic situation
  • Enhancing xenophobia in connection with security
  • Enhancing xenophobia in view of the health situation
  • Enhancing religious conflicts
  • Enhancing anti-Semitism
  • Enhancing antagonism in connection with beliefs about the pandemic
  • Enhancing hatred directed at medics
  • Accusing of Russophobia
  • Enhancing antagonisms in connection with beliefs about the climate crisis

8. Denying scientific facts

  • Undermining the existence of COVID-19
  • Downplaying COVID-19
  • Undermining vaccination safety and effectiveness
  • Undermining the credibility of diagnostic methods
  • Undermining the safety/reasonability of wearing a face mask
  • Undermining the legitimacy of lockdowns
  • Searching for alternative methods of treatment
  • Undermining safety of 5G networks
  • Undermining climate change
  • Undermining human impact on climate change
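One possible way to represent the intention/sub-intention drop-down lists above, together with the rule that only the team may modify them, is a nested mapping such as the abbreviated, hypothetical sketch below.

```python
# Hypothetical, abbreviated representation of intentions and their sub-intentions.
INTENTIONS = {
    "Undermining the credibility of public institutions": [
        "Undermining the methods of counteracting the pandemic",
        "Alleging manipulation of death statistics",
        # ...
    ],
    "Changing electoral convictions": [],
    "Undermining an international organisation/its decisions": [
        "Blaming NATO/UN/EU for conflicts in the region",
        # ...
    ],
    "Denying scientific facts": [
        "Undermining vaccination safety and effectiveness",
        # ...
    ],
}


def add_sub_intention(intention: str, sub_intention: str, approved_by_team: bool) -> None:
    """New intentions/sub-intentions are added only after the team agrees a new trend exists."""
    if not approved_by_team:
        raise ValueError("Modifications to the intention list require a team decision.")
    INTENTIONS.setdefault(intention, []).append(sub_intention)
```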

IX. Evoked emotions (exhaustive list, multiple choice)

The indirect purpose of spreading disinformation is to arouse extreme emotional reactions in the recipient. The use of techniques and content that evoke strong emotions also allows fake content to be promoted faster and more effectively. Identifying the kind of emotional reaction the author of disinformation counts on may, in the future, make it possible to determine which kinds of content are disseminated using which emotions (e.g. linking propaganda directed against the European Union with feelings of national pride). The analyst can choose from a catalogue of six emotions typical of disinformation, which can be expanded after consultation with the team.

  1. Fear/sense of threat
  2. Anger
  3. Opposition/rebellion
  4. Uncertainty/sense of confusion
  5. Hope (false hope)
  6. Pride

Step 4: Comments

The last element of the assessment is the possibility for the analyst to enter a general comment on the content in the comments field. This field also enables the collection of other information relevant to the analysis, in particular: new fake experts cited by the authors of the disinformation; new keywords (usually neologisms or offensive expressions) that indicate the disinformative nature of given content; information about the structure of the text, such as punctuation and grammar errors or automatic translation of the text using, for example, Google Translate; suggestions for a new topic or a new intention; and other observations.

Double content verification

90% of the assessed content is subject to so-called double verification, i.e. a second, independent assessment of the same content by another analyst. The second analyst does not read the first assessment, but evaluates the content according to the methodology alone. Only then does that analyst compare the two assessments and make the final decision on the choices made in the analysis. Discrepancies spotted by the double-verification analyst are discussed by the team, and a common, consistent approach to content classification is established. The final registered assessment may therefore include elements of both the first and the second assessment, if the team so determines. In addition, comments entered in the comments field may be discussed within the team, e.g. the need to add a new intention/sub-intention. The purpose of double verification is therefore not only to avoid so-called human errors, but also to standardise the application of the methodology.
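As an illustration of the comparison step, the sketch below flags field-by-field differences between two independent assessments so that they can be discussed by the team before the final record is registered. The flat dictionary format and the field names are assumptions made for this example only.

```python
from typing import Any, Dict, Tuple


def find_discrepancies(first: Dict[str, Any], second: Dict[str, Any]) -> Dict[str, Tuple[Any, Any]]:
    """Return the fields on which two independent assessments disagree.

    Each assessment is assumed to be a flat mapping of methodology fields
    (thematic category, source type, content label, techniques, ...) to values.
    """
    return {
        key: (first.get(key), second.get(key))
        for key in set(first) | set(second)
        if first.get(key) != second.get(key)
    }


# Example: the two analysts agree on the topic but disagree on the content label;
# the discrepancy is then discussed by the team before the final assessment is registered.
first = {"thematic_category": "COVID-19", "content_label": "disinformation"}
second = {"thematic_category": "COVID-19", "content_label": "misinformation"}
print(find_discrepancies(first, second))  # {'content_label': ('disinformation', 'misinformation')}
```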

[1] The definition of disinformation used for the purposes of this report and the methodology is described in “Step 1: Overall Assessment”.

[2] Flick U. (2013), The SAGE Handbook of Qualitative Data Analysis, SAGE; access: https://www.ewi-psy.fu-berlin.de/einrichtungen/arbeitsbereiche/qualitative_sozial-_bildungsforschung/Medien/58869_Flick__The_SAGE_HB_of_Qualitative_Data_Analysis_Chapter1_mapping-the-field.pdf [21.03.2022]

[3] Pamment, J & Lindvall Kimber, A 2021, Fact-checking and debunking: a best practice guide to dealing with disinformation. NATO Strategic Communication Centre of Excellence; access: https://www.stratcomcoe.org/fact-checking-and-debunking [21.03.2022].

[4] Media Bias/Fact Check is an independent website that evaluates the bias, the truthfulness of the facts and the credibility of media sources. Access: https://mediabiasfactcheck.com/ [21.03.2022].

[5] Bayer, Judit, Bitiukova, Natalija, Bard, Petra, Szakács, Judit, Alemanno, Alberto, Uszkiewicz, Erik, Disinformation and Propaganda — Impact on the Functioning of the Rule of Law in the EU and its Member States (February 1, 2019). European Parliament, LIBE Committee, Policy Department for Citizens’ Rights and Constitutional Affairs, 2019, HEC Paris Research Paper No. LAW-2019–1341, p. 24 access: https://ssrn.com/abstract=3409279 or http://dx.doi.org/10.2139/ssrn.3409279, [21.03.2022]

[6] High level Group on fake news and online disinformation, A multi-dimensional approach to disinformation, 2018, p. 3. Access: http://ec.europa.eu/newsroom/dae/document.cfm?doc_id=50271 [21.03.2022].

[7] European Union (2018), European Commission, EU Code of Practice on Disinformation. Access: http://ec.europa.eu/information_society/newsroom/image/document/2018-29/msf_on_disinformation_17_07_2018_-_proofread_99F78DB7-9133-1655-990805803CDCCB67_53545.pdf [21.03.2022].

[8] Szymanek, K. (2012). O teoriach spiskowych [About conspiracy theories]. “Folia Philosophica”, Vol. 30, pp. 259–281.

[9] Lewandowsky, S., & Cook, J. (2020). The Conspiracy Theory Handbook. Accessible at http://sks.to/conspiracy, p. 4–5.

[10] https://www.protocol.com/facebook-misinformation-far-right-politics

Authors:

Magdalena Wilczyńska — Human rights lawyer (LL.M.), team leader of disinformation experts at the Polish-Japanese Science Academy. PhD candidate.

Michał Pawela — Analyst of disinformation (FakeNews.pl; Infostrateg), deals with open-source intelligence and information analysis in international security. Involved in projects related to disinformation, political extremism, and information warfare.

Katarzyna Lipka — Biologist and fact-checker at FakeNews.pl and Infostrateg, specializing in anti-vaccine movement monitoring.

Mateusz Zadroga — A Cardinal Stefan Wyszyński University graduate in military history. Social activist; since December 2020, an analyst at FakeNews.pl. He deals with social, political, and military issues, as well as conducting OSINT and coordinating a team of volunteers as part of research on Russian disinformation.
