Artificial Multiverse: Foreign Information Manipulation and Interference in Taiwan’s 2024 National Elections
Digital Intelligence Team/Doublethink Lab
Thank you to our part-time staff and interns for their dedication and hard work: 1T, Chipdale Chu, Copper, G, Jim, Lee Chen Han, Leo Tsai, Nan, YEN, Yi-Wen Chen, Yu-Hsien, and all M-hub part-timers and interns
(in alphabetical order)
Introduction
The concept of the multiverse has captured people's imagination since ancient times. In science fiction literature and movies, we often see stories of alternate universes where people like us have made different choices, leading to drastically different destinies. Physicists try to pursue and validate the theory of the multiverse to understand how the world we live in came to be and unravel life’s mysteries. However, one can argue that we are already living in a “cognitive multiverse” created by digital technology, from which it is difficult to escape.
In this artificial multiverse, information is no longer received and delivered at our discretion; instead, it is fragmented, proliferated, and dominated by social media recommendation algorithms. Moreover, information manipulated from abroad with malicious intent and ample capital has further distorted our worldviews and perceptions. People in the same country can receive vastly different information, and their perceptions of the same topics can differ considerably, as if they were operating in different universes.
Taiwan, a democratic country that has long been affected by the People’s Republic of China (PRC)’s information manipulation, is one major example. In 2024, Doublethink Lab (DTL) launched the “2024 Taiwan Elections: Foreign Influence Observation Project,” in which we, together with more than a dozen partner organizations and scholars, conducted an in-depth analysis of the PRC’s information manipulation tactics and influence in Taiwan’s presidential and legislative elections. The results of the study reaffirmed DTL’s previous findings that the PRC has long been engaged in information manipulation, targeting specific groups in Taiwanese society, to polarize and spark conflict.
First, there are obvious differences in the political spectrum of Taiwanese society, which affect people’s perceptions of the current state of democracy and the electoral system. Second, there is a significant divergence among supporters of different political parties in terms of their satisfaction with and trust in Taiwan’s democracy. The lower the satisfaction and trust levels, the easier it is for individuals to agree with the narratives and messages of foreign information manipulation. Such individuals also tend to believe in many conspiracy theories and are skeptical of the US.
For example, domestic conspiracy theories often touch upon judicial injustice, fraud, corruption, and other issues in Taiwan, placing blame on specific groups. Meanwhile, US skepticism often promotes narratives such as the claim that the United States is only using Taiwan to provoke the PRC and would refuse to send troops to help Taiwan. Such narratives aim to undermine trust in the United States and weaken Taiwan’s international support. Even though some of these messages are biased and not based on facts, they still resonate considerably with certain voters and have been spread through an array of manipulative practices, such as networks of fake social media accounts.
The year 2024 is a major election year around the world, with more than 60 elections being held that will affect more than half of the world’s population, including in the United States, the European Union, India, Indonesia, Mexico, and Taiwan. Election interference from foreign countries is one of the major challenges democratic countries currently face. However, after the election, it is important to cross the cognitive divide and seek real communication, understanding, and dialogue with people of different political stances.
To defend against authoritarian interference in democratic elections around the world, Doublethink Lab analyzed the PRC’s role in twelve major information manipulation cases in the 2024 Taiwan election, summarized their tactics, techniques, and procedures, and analyzed their relevance to our pre-election polls. In the final section of this report, we provide recommendations for various stakeholders in Taiwan. We hope that this report will be an important reference for countries in responding to the PRC’s information manipulation, help them anticipate the PRC’s information manipulation practices, establish an effective research and analysis framework, and strengthen cross-regional cooperation to jointly build effective democratic defense mechanisms.
Key Findings
- According to the foreign information manipulation cases investigated during this election observation period, five key issues were targeted: government ineffectiveness, national defense and a Taiwan Strait war, cross-strait and diplomatic relations, controversial events involving political parties and figures, and democratic procedures. Before November, operations mainly focused on government ineffectiveness, national defense, and a Taiwan Strait war; after November, they shifted toward cross-strait and diplomatic relations. From December until election day, operations mainly consisted of direct attacks on controversies involving political parties or figures. In the week before and after election day, operations began to question democratic procedures to shake people’s confidence in the election results.
- PRC information manipulation in this election mainly distorted the public’s perception and degraded its adversary, the Democratic Progressive Party (DPP). Most of the targets were the DPP and its political figures. The pattern of the PRC’s information manipulation is to amplify existing narratives or conspiracy theories by sharing and re-posting them online through inauthentic accounts and targeting people in echo chambers. Information is mainly disseminated through mainstream social media platforms, and the type of content often includes memes or hashtags. In terms of developing such content, videos are more common than text or images.
- Compared to PRC information manipulation during the 2022 local elections, information manipulation during this election included far more AI-generated videos, such as virtual anchors with AI-generated voices reading scripts, or compilations of clips with AI-generated voiceover narration. With AI technology, the cost and time of video production have decreased, and video content has gradually become the mainstream format for information manipulation.
- The results of the nationwide telephone survey compared with the investigative results of the information manipulation cases showed that, first, many of the key issues that shaped Kuomintang (KMT) supporters’ presidential voting choices were issues amplified by PRC manipulation. Second, the PRC’s target audiences, including KMT and Taiwan People’s Party (TPP) supporters, tended to agree with the narratives and conspiracy theories amplified by the PRC’s operations. Third, the PRC chose to use social media and video-sharing platforms to disseminate narratives and conspiracy theories, and audiences who often use these channels to obtain political information tended to agree with the narratives and conspiracy theories amplified by PRC manipulation. There are indeed many significant correlations between information manipulation and the existing political attitudes of Taiwanese voters, and more research is needed to clarify the causal relationships.
Background
In response to the PRC’s interference and information manipulation during Taiwan’s 2024 presidential and legislative elections, Doublethink Lab assembled a number of organizations and individuals concerned with the election, including fact-checking organizations, civil society organizations, academics, and information security researchers, to launch the 2024 Taiwan Election: Foreign Influence Observation Project. This project followed the European External Action Service (EEAS) 1st Report on Foreign Information Manipulation and Interference Threats, published in February 2023, and adopted its concept of Foreign Information Manipulation and Interference (FIMI) as the analytical framework. The European Union (EU) promotes this framework, which adapts practices from the information security field, to establish a common language for compiling regional and global threat information on foreign information manipulation and interference, and for analyzing Tactics, Techniques, and Procedures (TTPs) to produce corresponding counterstrategies.
The project’s analytical approach documented evidence of information manipulation and analyzed the different stages of a threat actor’s attack to effectively understand patterns of information manipulation behavior. FIMI describes a type of manipulation carried out by both state and non-state actors that threatens or negatively affects values and political activity. These manipulative behaviors are executed in an intentional and coordinated manner by agents both inside and outside the country to sway public opinion. Because legal frameworks differ across countries, these actions are often not illegal, yet they can cause significant harm to democratic societies. Each FIMI case has an underlying message that often corresponds to important national or international events. Information manipulation exploits public interest in such events before their full details are known. Attackers thereby influence public perceptions by setting up a narrative framework and occupying the information space, aligning public behavior with their own interests.
The Election Observation Project was undertaken from October 1, 2023, to January 31, 2024, and was divided into three stages: Observation of the Information Environment and Investigation into Major FIMI Cases, Threat Intelligence Analysis, and the Nationwide Telephone Survey. In the first stage, we collected data and investigated the dissemination process of information manipulation, which allows for the immediate capture of suspicious information and a deeper understanding of information manipulation cases. In the second stage, we analyzed the cases by conducting a threat intelligence analysis to understand the distribution of data for various issues and investigate the common information manipulation practices by the PRC. Finally, to assess the impact of information manipulation on society, we conducted a nationwide telephone survey to identify prevailing narratives of suspicious information and determine questionnaire questions. These questions were posed to assess the relationship between people’s agreement with information manipulation narratives and conspiracy theories, as well as their political inclinations and the sources from which they receive political information. These three stages allow for the immediate capture of suspicious information, the compilation of information manipulation cases, the analysis of possible information manipulation tactics used by the PRC, and the assessment of the impact of information manipulation on society.
Research Methodology
Observation of the Information Environment and Investigation into Major FIMI Cases
In this project, 17 interns were recruited to conduct daily, crowdsourced observations of the information environment and to collect data. This allowed suspicious information to be identified in real time and occurrences of information manipulation to be spotted as soon as possible. Relevant information was also shared with our project partners to fight against information manipulation.
Our observation covered all major social media platforms, audio-visual platforms, communication software, and websites, such as Facebook, X, Weibo, YouTube, TikTok, Douyin, and news websites. The first step in daily observations was list monitoring. Observers referred to Doublethink Lab’s long-term tracking list and monitored suspicious accounts, channels, and media websites on the list. Once list monitoring was completed, observers proceeded to the second step, namely conducting daily extensive observations to sift through a large volume of information potentially implicated in information manipulation. To effectively filter out irrelevant data and ensure that the recorded information was relevant, the observation focused on the following information characteristics:
- Disinformation or conspiracy theories related to elections or current events in Taiwan.
- Information containing common narratives of the PRC’s information warfare against Taiwan.
- Information involving Chinese actors as disseminators, including PRC officials, media, and influencers.
- Coordinated Inauthentic Behavior (CIB) on social media platforms.
- Information with high engagement levels or trending discussion topics.
Information that demonstrated one or more of these characteristics was considered suspicious, and observers documented such information in a shared spreadsheet to promptly share it with project partners. Throughout the project, a total of 10,629 instances of suspicious information were logged. It is crucial to note that not all suspicious information is considered a part of information manipulation.
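The screening workflow above can be sketched as simple triage logic. The following is an illustrative sketch only, not the project's actual tooling; the field names (e.g., `is_disinfo`, `engagement`) and the engagement threshold are hypothetical.

```python
# Illustrative triage sketch mirroring the five screening characteristics.
# All field names and the threshold below are hypothetical assumptions.

ENGAGEMENT_THRESHOLD = 10_000  # hypothetical cutoff for "high engagement"

CRITERIA = {
    "election_disinfo": lambda r: r.get("is_disinfo", False),
    "prc_narrative": lambda r: r.get("matches_prc_narrative", False),
    "chinese_actor": lambda r: r.get("disseminator_type")
    in {"prc_official", "prc_media", "prc_influencer"},
    "cib": lambda r: r.get("coordinated_behavior", False),
    "high_engagement": lambda r: r.get("engagement", 0) >= ENGAGEMENT_THRESHOLD,
}

def triage(record: dict) -> list[str]:
    """Return the screening criteria a record satisfies.

    A record matching one or more criteria would be logged as suspicious
    in the shared spreadsheet; an empty list means it is filtered out.
    """
    return [name for name, test in CRITERIA.items() if test(record)]

post = {"coordinated_behavior": True, "engagement": 25_000}
print(triage(post))  # → ['cib', 'high_engagement']
```

A record flagged by this kind of check would still require human investigation, which matches the report's caveat that not all suspicious information is part of information manipulation.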
These recorded instances served as crucial leads for investigators in the next phase to delve deeper into significant cases of information manipulation. To investigate the case studies, the project recruited 8 part-time investigators to employ Open Source Intelligence (OSINT) methods. This involved gathering and analyzing publicly available information from the internet or other non-confidential sources. Through OSINT investigations, relevant evidence was collected to ascertain if there were indications of foreign information manipulation in the dissemination of particular information. Investigators analyzed how attackers executed information manipulation and summarized their findings of information manipulation within each case.
To investigate major FIMI cases, the project adopted a set of criteria to identify such cases and allocate investigative resources accordingly. The five characteristics mentioned previously served as criteria for initiating case investigations, with two additional metrics added to assess the importance of a case:
- The content of the information significantly impacts the domestic community.
- The information aims to undermine the relationships between Taiwan and other friendly countries.
Furthermore, most cases of information manipulation include a distinct message, often in the form of a narrative. This message, conveyed through various means such as disinformation and conspiracy theories, aims to influence and manipulate the audience’s perceptions. Therefore, when investigating cases of information manipulation, investigators categorized information with similar content into the same case to identify the type of manipulation. In addition to writing descriptions of the information manipulation, structured data was also recorded to be used for the quantitative statistical analysis in the Threat Intelligence Analysis.
Threat Intelligence Analysis
This project used the concept of FIMI, the STIX codebook data format, and the DISARM analysis framework to record the activities of foreign information manipulation as metrics and structured data that can be used to analyze the behavioral patterns of foreign information manipulation. This combination of FIMI, STIX, and DISARM matches the format of the EEAS’s information manipulation analyses, which serves as the basis for threat intelligence exchange and allows for a more thorough examination of information manipulation behavior.
In recent years, the EEAS has adopted the concept of Foreign Information Manipulation and Interference (FIMI) and the DISARM Framework for investigating and analyzing information manipulation, focusing on the behavioral patterns of attackers. The DISARM Framework draws upon the analytical structure used in the information security domain to analyze cyberattacks and applies it to the study of disinformation and information manipulation. The framework categorizes the common behaviors of attackers into Tactics, Techniques, and Procedures (TTPs). Tactics, as defined by the EEAS report, represent the milestones attackers aim to achieve; techniques are the actions taken to reach those milestones; and procedures are the collective actions taken by attackers to accomplish their goals. By dissecting these tactics and techniques, analysts can discern the motives and intentions of the attackers.
The DISARM Framework encompasses approximately 300 distinct information manipulation techniques, with detailed descriptions of their coding and characteristics publicly available on the website. The behaviors outlined in the DISARM Framework, along with other relevant information manipulation data, are recorded using the Structured Threat Information Expression (STIX) format. This structured data format facilitates information exchange among research institutes and enables a quantitative statistical analysis to be conducted. Consequently, multiple incidents of information manipulation can be aggregated and used for comparisons and calculations, thereby yielding valuable insights.
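To make the structured recording concrete, the sketch below shows how a single coded incident might be expressed as STIX 2.1-style objects: a DISARM technique as an attack-pattern, the incident as a campaign, and a relationship linking them. The object names, technique label, and the `T0000` external ID are placeholders, not values from the project's actual coded data.

```python
import json
import uuid

def stix_id(obj_type: str) -> str:
    """STIX 2.1 identifiers take the form '<type>--<UUID>'."""
    return f"{obj_type}--{uuid.uuid4()}"

# A DISARM technique recorded as a STIX "attack-pattern" object.
# The name and external ID are illustrative placeholders.
technique = {
    "type": "attack-pattern",
    "spec_version": "2.1",
    "id": stix_id("attack-pattern"),
    "name": "Example DISARM technique",
    "external_references": [{"source_name": "DISARM", "external_id": "T0000"}],
}

# The manipulation incident in which the technique was observed.
campaign = {
    "type": "campaign",
    "spec_version": "2.1",
    "id": stix_id("campaign"),
    "name": "Example FIMI incident",
}

# A relationship object stating that the campaign "uses" the technique.
relationship = {
    "type": "relationship",
    "spec_version": "2.1",
    "id": stix_id("relationship"),
    "relationship_type": "uses",
    "source_ref": campaign["id"],
    "target_ref": technique["id"],
}

# Bundling the objects yields a machine-readable record that can be
# exchanged between research institutes and aggregated for statistics.
bundle = {
    "type": "bundle",
    "id": stix_id("bundle"),
    "objects": [technique, campaign, relationship],
}
print(json.dumps(bundle, indent=2))
```

Because every incident is serialized in the same shape, many incidents can be pooled and queried, e.g., counting how often each DISARM technique appears across cases, which is the kind of quantitative comparison described above.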
This methodology was utilized in the investigation and analysis of information manipulation in both the 2023 and 2024 reports by the EEAS. Doublethink Lab adopted this methodology to conduct a more thorough analysis of the behavioral patterns of information manipulation. CIB, which has been the primary focus of our previous research, represents just one of numerous information manipulation behaviors. This methodology offers a more comprehensive behavioral examination framework to analyze the various behaviors through which foreign entities implement information manipulation strategies.
Nationwide Telephone Survey
This project conducted a nationwide telephone survey to evaluate the scope and impact of the dissemination of manipulation narratives and conspiracy theories in Taiwan by assessing public opinion in Taiwanese society. By examining the findings from the survey alongside the results of the case studies, the project was able to delve deeper into foreign information manipulation operations, from their initiation to their effects.
As reported previously in Doublethink Lab’s research and observations, the primary objective of foreign information manipulation is to increase distrust in the government and the democratic system, escalate conflicts among supporters of different factions, and undermine Taiwanese society’s democratic resilience. To understand the scope and effects of foreign information manipulation, Doublethink Lab designed a nationwide telephone survey targeting Taiwanese citizens aged 20 years or older who were eligible voters. The survey was conducted from January 1, 2024, to January 10, 2024, and asked respondents about their attitudes towards various narratives and conspiracy theories. The questionnaire assessed the social situation in Taiwan by collecting respondents’ views on conspiracy theories, alongside crucial information such as political party preferences, presidential election voting choices, and information consumption habits. The quantitative analysis of the nationwide telephone survey has been thoroughly explored in another report by Doublethink Lab, titled “2024 Taiwan Election: The Increasing Polarization of Taiwanese Politics — Reinforcement of Conspiracy Narratives and Cognitive Biases.” This present report will specifically discuss the narratives and conspiracy theories of information manipulation from the telephone survey in relation to political inclinations and channels of political information.
The analysis of the questionnaires provided insights into the level of agreement among different groups in Taiwanese society regarding the narratives and conspiracy theories of information manipulation. Meanwhile, the analysis of the OSINT case investigation results shed light on the actions taken by the perpetrators of information manipulation to influence Taiwanese society. By referencing the findings from both analyses, we could explore the relationship between the attacks of foreign operations and their effects on Taiwanese society.
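The survey analysis described above implies a cross-tabulation of narrative agreement against political inclination. The sketch below shows one conventional way such an association could be tested, using a Pearson chi-square statistic on a contingency table; the party labels and counts are invented for illustration and are not the survey's data.

```python
# Hypothetical contingency table: agreement with one manipulation
# narrative, broken down by party preference. Counts are invented.
from itertools import product

observed = {
    "party_a": (120, 80),   # (agree, disagree)
    "party_b": (60, 140),
}

def chi_square(table: dict[str, tuple[int, int]]) -> float:
    """Pearson chi-square statistic for an r x 2 contingency table."""
    row_totals = {k: sum(v) for k, v in table.items()}
    col_totals = [sum(v[i] for v in table.values()) for i in (0, 1)]
    grand = sum(row_totals.values())
    stat = 0.0
    for k, i in product(table, (0, 1)):
        expected = row_totals[k] * col_totals[i] / grand
        stat += (table[k][i] - expected) ** 2 / expected
    return stat

print(round(chi_square(observed), 2))  # → 36.36
```

A large statistic on such a table indicates that agreement and party preference are associated; as the report stresses, this establishes correlation only, and causal claims require further research.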
Interim Summary
During this project, a total of 10,629 suspicious reports were recorded. It is important to note that not all of these reports were linked to information manipulation. For an in-depth analysis of the suspicious reports, please refer to “Appendix I: Analysis of Suspicious Information.”
Based on these suspicious reports, a total of 12 significant cases were investigated. By following the FIMI, DISARM, and STIX analysis methodologies, we coded the 12 cases, which resulted in a total of 24 information manipulation incidents for statistical analysis.
The analysis from the OSINT case investigation revealed how attackers of information manipulation attempt to influence Taiwanese society. By referencing the results of the nationwide questionnaire analysis, we could further explore the correlation between foreign information manipulation attacks and their impact on Taiwanese society.
Observation of the Information Environment and Investigations into Major Cases
Based on the suspicious data gathered through daily observations, this project identified and selected 12 significant cases for further investigation. By utilizing OSINT investigation methods, we collected relevant evidence to gain a deeper understanding of information manipulation within each case. Summaries were then written to delineate the traces of information manipulation present in these cases, as outlined in the following sections.
Case 1: Imported Egg Controversy
Starting in 2022, Taiwan experienced a reduction in egg supply due to avian flu and rising feed prices, which led to a rapid increase in retail prices and a shortage of eggs in the market. In response, the Ministry of Agriculture announced subsidies on imported eggs in March 2023. Beginning in August 2023, information manipulation related to policy on imported eggs included stories such as “Ultra Source Limited [an egg importer] importing carcinogenic poisoned eggs,” “Expired and foul-smelling eggs imported by Ultra Source,” “DPP deceiving the public by changing the origin of imported liquid eggs to Taiwan,” and “DPP colluding with Ultra Source for personal gain.” In our investigation, we found that anonymous Taiwanese political commentary accounts used reports and comments from Taiwanese media and political figures to create numerous meme images and text for such stories. These were then shared by suspicious personal accounts across major online communities. Accounts exhibiting coordinated behavior, spreading misinformation, or otherwise raising suspicion added meme images with specific hashtags like “#台農董座 #涂萬財 #巴西臭蛋 #農業部 #陳吉仲” (#TaiNongChairman #TuWanTsai #BrazilianStinkyEgg #MinistryOfAgriculture #ChenChiChung) and shared them in anti-DPP public groups. PRC state media played a role in setting the tone for the story on Weibo after the direction was established in Taiwan.
On Facebook, suspected anonymous overseas Fan Pages, such as 政事每天報 (Political Daily), 每日資訊速報 (Daily News), 熱點新聞報 (Hot News Report), 話仙 (Taiwan Gossip King), and 新聞一起看 (News Check34), used meme images to emphasize the idea that government imports would result in poisoned carcinogenic eggs. These Fan Pages attempted to increase the exposure of related content by using hashtags such as #進口蛋, #毒雞蛋進口 (#ImportedEggs #ToxicImportedEggs), and others. Through the use of inauthentic accounts, these posts were shared across major public groups. The tactic involved initiating a political narrative within Taiwan first. Taiwanese anonymous political commentary accounts and accounts exhibiting coordinated behavior raised the discussion level of the given topic. The PRC then seized the opportunity to use official media and social media accounts to increase public panic about food safety and establish and reinforce the impression that the DPP is favoring specific corporations.
Case 2: Country-of-Origin Labeling Controversy of Imported American Pork
Following the controversy over the country-of-origin labeling of imported liquid eggs, discussions on such labeling extended to imported pork. Starting from September 2023, stories related to pork country-of-origin labeling were as follows: “American pork with ractopamine, when processed in Taiwan, becomes Taiwanese pork, indicating its origin is false (washing away the origin),” “People have already consumed American pork with ractopamine with false labeling,” and “The government uses the Taiwanese pork label to allow American pork to be on the shelves.” PRC state media and commercial media such as 台海网 (TaiHaiNet), 海峡导报 (Strait Daobao), 今日海峽 (Strait News), and 华夏经纬网 (Huaxia Jingwei News) reposted articles from Taiwanese media under the theme “the DPP is initiating origin washing” and released videos promoting stories such as “Taiwanese people have already consumed American pork” and “The Taiwanese pork label is a government means to confuse people and allow the circulation of American pork.” These stories combined US-skeptic sentiment with efforts to create distrust of the ruling party among the public. Similar to the imported egg controversy, we found that anonymous Taiwanese political commentary accounts and accounts with coordinated behavior used meme images, specific phrases, and hashtags to echo the PRC’s negative views of the ruling party. Additionally, in the LINE rumor reports collected by MyGoPen, we found that messages that induced panic among the public by referring to past food safety concerns were concentrated in October. The collaboration between Taiwanese and PRC actors in disseminating posts on this topic reinforced and prolonged the panic among Taiwanese people about food safety and their distrust in the ruling party and the United States.
Case 3: Indigenous Defense Submarine (IDS)
Taiwan’s first domestically built submarine, the “Narwhal,” held its naming and launching ceremony on September 28, 2023. As more reports on submarines were released, the following narratives emerged: “Taiwan lacks the capability to manufacture submarines” and “Issues abound in the indigenous submarine program.” For the narrative surrounding Taiwan’s lack of capability to manufacture submarines, PRC state media and Weibo influencers, such as 台海网 (TaiHaiNet), 海峡导报社 (Strait Daobao), and UFO启示 (UFO Revelation), took excerpts from Taiwanese political talk shows and disseminated videos and posts with titles such as “Submarines cannot be produced; they can’t even make subway cars,” and “Unable to produce three submarines by 2025” to downplay Taiwan’s ability to develop military infrastructure domestically. In addition, articles titled “South Korean companies assisting Taiwan in manufacturing submarines face prosecution” and “The oldest submarine is in Taiwan” were published, casting doubt on Taiwan’s competence in creating domestic military assets.
For the narrative surrounding issues in the indigenous submarine program, PRC state media and Weibo accounts established hashtags such as “#台自造潜艇投标案被质疑#,” “#台自造潜艇得标商成立48天拿到设计标案#,” and “#国台办回应台自造潜艇争议不断#” (“#TaiwanIndigenousSubmarineBidControversy#,” “#TaiwanIndigenousSubmarineBidderEstablishedIn48DaysReceivedDesignBid#,” and “#TaiwanAffairsOfficeRespondsToOngoingDisputeOverTaiwan’sIndigenousSubmarine#”) to share videos and articles related to accusations made by Kuomintang (KMT) legislator Ma Wen-chun during questioning in the Legislative Yuan.
Furthermore, accounts such as 海峡之声 (Voice of the Strait), 厦门广电 (Xiamen Media Group) on Weibo, and PRC state media used reports from Taiwanese media to allege that associates and relatives of President Tsai Ing-wen were involved in the indigenous submarine program, portraying the project as a government corruption scandal. Anonymous Taiwanese political commentary pages, such as 新聞總匯三明治 (News Digest Sandwich), went a step further by combining conspiracy theories about government–business collusion related to rapid testing kits, imported eggs, and the indigenous submarine program. They created meme images and posts with a set of hashtags such as “#民進黨 #賴清德 #陳建仁 #弊案疑雲 #監督執政 #打擊歪哥 #下架不良政客” (“#DPP #LaiChingTe #ChenChienJen #ScandalSuspicions #SuperviseTheGovernment #CrackDownOnCrooks #RemoveBadPoliticians”), which were then shared by coordinated accounts. Moreover, hashtags such as “#民進黨 #超思 #陳建仁 #賴清德” (“#DPP #UltraSource #ChenChienJen #LaiChingTe”) were shared in groups supporting the KMT and Taiwan People’s Party (TPP). Additionally, certain Taiwanese media guided discussions on the indigenous submarine program toward concerns about the submarine’s quality and allegations of government–business collusion. PRC state media and social media accounts then selectively amplified the story, portraying Taiwan’s military equipment manufacturing standards as subpar to audiences in China. This orchestrated effort led Taiwanese media to follow suit and repeat the same headlines. Finally, the same mode of propagation was evident, with anonymous Taiwanese political commentary pages creating meme images and sharing posts through coordinated accounts.
Case 4: Compulsory Military Service
On October 16, 2022, Taiwan’s Ministry of National Defense announced its annual budget, leading to political figures questioning the adequacy of the military’s resources and suggesting that compulsory military service would become Taiwan’s main combat force. Subsequent stories included “Lai Ching-te’s family is in the United States; he won’t go to the battlefield when war breaks out” and “Lai Ching-te deceives voters with rhetoric.” PRC state media on Facebook, Baidu, NetEase, TikTok, and other platforms mainly focused on the narrative that “the DPP is leading Taiwan to the battlefield.” They disseminated videos and article excerpts from Taiwanese media that emphasized a shortage in Taiwanese military personnel and alleged that the DPP, in collaboration with external forces, is pushing Taiwanese youth into battle. We also identified suspected anonymous overseas Fan Pages such as 清白評論 (Innocent Comment Circle), 橙子有話說 (Orange Has Something to Say), 新聞嘴 (News Mouth), and 新聞趣事 (News Anecdote), all of which shared memes or posts combining the aforementioned stories and narratives. These were then shared by suspicious individuals or inauthentic accounts into groups opposing the DPP and supporting the KMT. Additionally, some accounts used a set of relevant and high-impact hashtags like “#瞎搞 #兵役 #賴清德 #義務役 #不用上戰場 #國防部長 #打臉”(“#MessingAround #MilitaryService #LaiChingTe #CompulsoryMilitaryService #NotGoingToTheBattlefield #MinisterOfNationalDefense #Exposed”) in an attempt to share a large number of memes that depicted Vice President Lai Ching-te and Minister of National Defense Chiu Kuo-cheng issuing contradictory statements. These posts were disseminated more broadly to other major social media groups.
Case 5: Arrival of 100,000 Indian Migrant Workers
On September 26, 2023, the Hindustan Times reported that Taiwan and India would sign a memorandum to bring Indian migrant workers into Taiwan to address labor shortages in labor-intensive industries. By mid-November, Taiwanese media and online forums (namely PTT and Dcard) had simplified the discourse of Bloomberg’s report and similar media coverage, which stated that “Taiwan could hire as many as 100,000 Indians,” and distorted the story by claiming that “India is a country of sexual assault” and “Taiwan is turning into a rape island.” This narrative could be seen as an attempt to cause panic among a portion of the Taiwanese population. PRC state media followed Taiwanese media coverage, publishing articles and videos under titles such as “Taiwan will import 100,000 Indian laborers, causing dissatisfaction on the island.” This may also have been an attempt to create a negative image of India and stoke dissatisfaction among Taiwan’s population regarding this issue. We also found that coordinated inauthentic accounts related to China commented under Radio Free Asia’s posts on X. A group of inauthentic accounts, all created after July 2023, used pictures of young women as profile pictures, marked their location as Taiwan, and consistently expressed opinions such as “Importing Indian migrant workers has no benefits” and “Better off cooperating with the mainland.” However, upon closer examination of the posting history of these accounts, we found that all of them replied in Simplified Chinese characters on topics related to Taiwan. Pretending to be Taiwanese, these accounts directed the attitudes of the comments section toward a favorable opinion of the PRC and exacerbated conflict between Taiwanese and Indian societies.
This topic quickly transitioned from online discussions to offline mobilization and was pushed by PRC state media and inauthentic accounts. In less than a month, there was an attempt to mobilize members from Dcard and LINE communities, resulting in the “123 Don’t Come, India” protest against the arrival of Indian migrant workers on December 3. The actual number of participants was nearly 100, significantly fewer than the nearly 10,000 members across online communities. Nevertheless, the process from the initial media reports to rapid mobilization, which was fueled by PRC state media and inauthentic accounts, underscored the potential impact of this online discussion-to-mobilization pipeline in similar future scenarios.
Case 6: China Initiates Trade Barrier Investigation and Partially Halts Economic Cooperation Framework Agreement (ECFA) Tariff Reductions
On April 12, 2023, China’s Ministry of Commerce announced the beginning of a trade barrier investigation against Taiwan, and on December 15, the investigation findings were released. On the same day, Zhu Fenglian, a spokesperson for the Taiwan Affairs Office, stated, “The DPP unilaterally restricts the entry of mainland products, which does not conform to the normalization of cross-strait relations under the ECFA, damages relevant mainland interests, and also harms the interests of Taiwanese consumers.” On December 20, China’s Ministry of Commerce announced the suspension of tariff reductions on 12 raw chemical materials imported from Taiwan under the ECFA. On Weibo, PRC state media established two trending topics: “#台湾对大陆贸易限制构成贸易壁垒” (“#Taiwan’s Trade Restrictions on the Mainland Constitute a Trade Barrier”) and “#中止对台湾地区部分产品关税减让#” (“#Suspend Tariff Reductions on Certain Products in Taiwan Region#”). These topics reached the sixth and first places in the Weibo hot search, respectively. The main stories shared by PRC media and some Taiwanese media were, “The DPP claims the ECFA is sugar-coated poison but has not abolished it to this day; it displays double standards and dramatics” and “The DPP only blames the mainland, with no intention of improving cross-strait relations.” We also identified suspected anonymous overseas Fan Pages, such as “Face-slapping Lai Ching-te, Peeling off the DPP,” which made memes related to the narrative, “If Lai Ching-te is elected, China may suspend ECFA, and Taiwan’s economy will worsen.” These were accompanied by hashtags such as “#ECFA #賴清德” (“#ECFA #LaiChingTe”) and were then shared by inauthentic accounts in a coordinated manner. This approach established a narrative direction, portraying the DPP as unilaterally damaging cross-strait relations, and was followed by media and political figures from both China and Taiwan with similar stances, reporting stories about how the DPP unilaterally damages cross-strait relations. 
By collecting data from X using their API (Application Programming Interface), we found 450 inauthentic accounts operated in a coordinated manner, created between October 14 and 29, 2023, that shared the same news stories and emphasized: “If Lai Ching-te is elected, China may suspend the ECFA.” These accounts mainly disseminated content related to suspicions about the U.S. and tensions in cross-strait relations, and they demonstrated identical posting formats. To further spread the story, the content was published by suspected anonymous overseas Fan Pages, shared by coordinated inauthentic accounts, and disseminated using numerous inauthentic accounts, all of which attempted to influence the general public’s sense of economic panic.
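The coordination signals described here — a shared narrow registration window plus identical post text — lend themselves to a simple automated check. The sketch below is illustrative only: the record fields, sample data, and thresholds are our assumptions, not the actual detection pipeline used in this analysis.

```python
from collections import defaultdict
from datetime import date

# Hypothetical records of the kind assembled after parsing X API
# responses; the field names are illustrative, not the API schema.
posts = [
    {"account": "u1", "created": date(2023, 10, 15),
     "text": "If Lai Ching-te is elected, China may suspend the ECFA"},
    {"account": "u2", "created": date(2023, 10, 20),
     "text": "If Lai Ching-te is elected, China may suspend the ECFA"},
    {"account": "u3", "created": date(2022, 5, 1),
     "text": "Unrelated post about the weather"},
]

def flag_coordinated(posts, window_start, window_end, min_accounts=2):
    """Group accounts by identical post text and flag groups whose
    members were all created inside a narrow registration window."""
    by_text = defaultdict(set)
    for p in posts:
        by_text[p["text"]].add((p["account"], p["created"]))
    flagged = []
    for text, accounts in by_text.items():
        in_window = {a for a, c in accounts if window_start <= c <= window_end}
        if len(in_window) >= min_accounts:
            flagged.append((text, sorted(in_window)))
    return flagged

hits = flag_coordinated(posts, date(2023, 10, 14), date(2023, 10, 29))
```

In this toy run, only the two accounts created inside the October 14–29 window and sharing the same text are flagged; the older account posting unrelated content is not.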
Case 7: China Accuses DPP of Spreading Rumors and Pressuring the Band Mayday
On December 2, 2023, Chinese content creator 麦田农夫 (Cornfield Farmer) released a video on the Bilibili platform alleging that the Taiwanese band Mayday lip-synced during a concert. There were nearly 67 trending topics on Weibo related to Mayday in December as the event sparked intense discussions in China. According to Reuters, the Mayday lip-syncing controversy was related to Mayday’s refusal to comply with the request of China’s National Radio and Television Administration that the band publicly state that Taiwan is part of China. However, after the report was released, Chen Binhua, the spokesman of the Taiwan Affairs Office of China, responded by saying that the report was unfounded, stating, “The Democratic Progressive Party authorities are deliberately spreading rumors. This is sinister and malicious political manipulation.” Subsequently, Taiwanese media and KMT political figures, such as Cai Zhengyuan and Hou Hanting, as well as suspected anonymous overseas Fan Pages like “鑒天下” (“Insights into the World”) and “橙子有話說” (“Orange Has Something to Say”), echoed the statements of the Taiwan Affairs Office, claiming that the DPP was manufacturing internal propaganda and fake news and engaging in political manipulation.
In addition, we observed that on December 31, PRC state media posted a video of cross-strait singers singing “Tomorrow Will Be Better” on YouTube, TikTok, and other platforms, creating an image of cross-strait unity. The song was boosted by ZhiXing on Facebook, who continuously invested advertising funds in posts that promoted the song to increase engagement. On one hand, the PRC used the “Raise, Ensnare, Terminate” approach to compel Taiwanese artists to declare that Taiwan is a part of China in exchange for allowing them to perform and profit in the country; failing to comply can then lead to the artists being investigated or even blacklisted, causing them to be “ensnared” or “terminated.” On the other hand, by promoting songs featuring cross-strait artists, the posts aimed to attract fans of Taiwanese artists and cultivate positive attitudes toward China. Therefore, such tactics from the PRC constitute a form of cultural warfare.
Case 8: Lai Ching-te’s Alleged Violation of Building Codes in Wanli
Throughout 2023, there were several controversies surrounding violations of building codes by Taiwanese political figures. On September 5, 2023, a post on the PTT forum mentioned that DPP presidential candidate Lai Ching-te’s family home in Wanli, New Taipei City, was designated for mining use and could not be used for residential purposes. Stories related to this event included “Lai Ching-te’s family home is not only a building violation but also tax evasion,” and “Other people’s houses are violations, but Lai Ching-te’s family home is not; DPP has double standards.” PRC state media and Weibo accounts immediately reposted Taiwanese media reports about “Lai Ching-te’s family home being a violation” with an article and video the next day. On Facebook, there were suspected anonymous overseas Fan Pages such as “反爆股長” (“Anti-stock Boom”), which posted memes with captions such as “台獨金孫” (“Chief Heir of Taiwan independence”) and “違建金孫” (“Chief Heir of violating building codes”). These were then shared extensively by highly suspicious coordinated accounts in groups supporting the KMT and opposing the DPP. Another group of accounts used hashtags related to Lai Ching-te’s ancestral home, such as “#賴清德 #違建 #假清高 #打臉 #門牌” (“#LaiChingTe #Violation #FakeHighMoralGround #Expose #HouseNumber”), along with memes, which were then shared in groups supporting the TPP and the KMT. Another group of anonymous overseas Fan Pages posted screenshots from inauthentic accounts from other platforms and shared them through inauthentic accounts to major groups. For a detailed explanation of this technique, please refer to “Imitating Local Sentiments: Analysis of Foreign Facebook Fan Pages’ Intervention in Taiwanese Elections.”
Case 9: Sex Scandals Involving DPP Political Figures
There were several sex scandals involving DPP political figures throughout 2023, with some incidents still unconfirmed. The individuals involved included legislator Chao Tien-lin, Deputy Minister of the Mainland Affairs Council Liang Wen-chieh, Deputy Premier Cheng Wen-tsan, and legislator Lo Chih-cheng. Stories related to these events included “Opposing and resisting China is work; being pro-China and copying China is life,” and “DPP senior officials have extramarital affairs and go to hotels.” The main narrative was that “the peach blossom culture(1) has become a part of the mainstream culture of the Democratic Progressive Party.” The incident involving Chao Tien-lin was the first of its kind observed in this analysis. After Taiwan’s media were tipped off, anonymous overseas Fan Pages such as “C咖出道” (“C Celebrity Debut”) and coordinated pages with admins from Cambodia immediately turned the narrative into memes and videos, adding hashtags and posting on Facebook. Highly suspicious or inauthentic accounts then shared these posts on their personal profiles or to major groups. The “C咖出道” (“C Celebrity Debut”) account produced a video featuring a virtual host using Memoji and audio processing using a voice modifier. The host talked about peach blossom culture as a part of the mainstream culture of the Democratic Progressive Party. The comments on the video featured a group of coordinated inauthentic accounts posting unrelated comments in English and liking the video to increase interaction. Additionally, a surveillance video allegedly showing “Cheng Wen-tsan bringing a young woman into a hotel room” was released, which followed closely after the Chao Tien-lin incident. The video was first released by the anonymous Taiwanese political commentary Fan Page “Next逆襲” (“Next Counterattack”), supporting the KMT. 
Subsequently, PRC state media, Weibo influencers, Taiwanese media, political figures, and anonymous accounts on various social media platforms contributed to the dissemination of related videos and content. Information manipulation around the sex scandals involving DPP political figures included hashtags such as “#性lie台灣” (“#SexLieTaiwan”) and “#性賴台灣” (“#SexLaiTaiwan”), which substituted the negative terms “peach blossom” and “deception” into the DPP’s election slogan “Trust Taiwan.”
Case 10: Lai Ching-te’s Alleged Illegitimate Child
Stories about Lai Ching-te’s alleged illegitimate child date back to 2015 when it was first reported. In October 2023, Chiu Yi mentioned the alleged illegitimate child again during a live broadcast with Taiwanese influencer Bit King. Subsequently, in November and December, some overseas Fan Pages focused on the issue. However, the most extensive operation began around January 9, 2024, continuing until the day of the election. On January 3, Hong Kong’s Wen Wei Po quoted Chiu Yi’s remarks, mentioning Lai Ching-te’s mistress and illegitimate child.
On January 12th, X saw a staggering 7,537 related posts in a single day, originating from 1,638 different accounts. Among these accounts, 73% were created in November 2023, suggesting they were likely established for activities during the election period. Of the over 7,500 posts, 94% consisted of identical, copy-and-pasted content, with the repeated phrases each posted at least eight times, indicating a clear and extensive coordinated information manipulation operation.
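Copy-and-paste amplification of this kind can be quantified with a straightforward tally of exact-duplicate texts. The following is a minimal sketch on toy data, not the actual measurement code; real posts would first be normalized (whitespace, URLs, emoji) before counting.

```python
from collections import Counter

def copypasta_stats(texts, min_repeats=8):
    """Return the share of posts whose exact text appears more than
    once, plus any phrases repeated at least `min_repeats` times."""
    counts = Counter(t.strip() for t in texts)
    duplicated = sum(n for n in counts.values() if n > 1)
    heavy = [t for t, n in counts.items() if n >= min_repeats]
    return duplicated / len(texts), heavy

# Toy corpus: 8 identical posts and 2 unique ones.
corpus = ["the same smear, copied verbatim"] * 8 + ["hello", "world"]
share, heavy = copypasta_stats(corpus)
```

On the toy corpus, 80% of posts are duplicates and the single heavily repeated phrase is surfaced, mirroring the 94%-identical, eight-plus-repeats pattern described above.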
Simultaneously, PRC state media reported sporadically on Facebook, news platforms, and Weibo. Hong Kong media, Fan Pages, and suspected anonymous overseas Fan Pages also shared the story. Some Facebook posts appeared to have purchased likes and shares from Arabic and Southeast Asian accounts. On Facebook and YouTube, there were instances of inauthentic accounts spreading content in Simplified Chinese. Some of these accounts had previously shared PRC political propaganda aimed at whitewashing Hong Kong.
Additionally, the story gained coverage from pro-PRC news media and low-profile news websites in various countries, including Malaysia, the Philippines, India, and Canada. On LINE, reports on Lai Ching-te’s alleged illegitimate child began on January 5th and peaked on January 10th and 11th, just before the election. This wave of operations against the DPP presidential candidate followed those of sex scandals involving DPP politicians. The narrative suggested that Lai Ching-te, with multiple mistresses and an alleged illegitimate child (Lai Ting-han), possesses significant moral flaws, which the narrative attempted to exploit to influence the election outcome. Despite Lai’s statement on the eve of the election, the dissemination of the story did not abate.
Case 11: Secret History of Tsai Ing-wen
In the lead-up to the January 2024 elections, information about the “Secret History of Tsai Ing-wen” circulated on major social media platforms, video streaming platforms, and forums. This extensive “book,” exceeding 100,000 words, first appeared around December 28, 2023, and was uploaded as a Word document on ufile.io. Subsequent manipulations were sourced from Word and PDF files uploaded to zenodo.org on January 2, 2024. As of the time of writing, the file has been downloaded over 27,000 times.
The anonymous author of the “Secret History of Tsai Ing-wen” goes by the pseudonym “Taiwanese Writer” Lin Leshu. However, the content reveals numerous Chinese terms and errors resulting from the conversion between simplified and traditional characters (e.g., “髮現” )(2). Based on the image names and insertion paths in the .docx file, it is evident that the source image files are in Simplified Chinese. The likelihood of the content being written by a Taiwanese individual is very low.
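The file-format forensics mentioned here rely on the fact that a .docx is a ZIP archive whose embedded image names and paths survive from the author’s machine. The sketch below builds a stand-in archive so it runs without a real file, then scans media filenames for characters that exist only in the simplified script; the character set is a tiny illustrative sample, and a real analysis would use a complete simplified/traditional mapping table.

```python
import io
import zipfile

# A .docx is a ZIP archive: embedded images live under word/media/,
# and their filenames can leak the author's locale. Build a stand-in
# archive here so the sketch runs without a real file.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("word/document.xml", "<w:document/>")
    z.writestr("word/media/图片1.png", b"fake image bytes")

# A few characters that exist only in the simplified script; a real
# analysis would use a complete simplified/traditional mapping table.
SIMPLIFIED_ONLY = set("图发电书")

def simplified_media_names(docx_bytes):
    """List embedded media files whose names contain simplified-only
    Chinese characters."""
    with zipfile.ZipFile(io.BytesIO(docx_bytes)) as z:
        media = [n for n in z.namelist() if n.startswith("word/media/")]
    return [n for n in media if any(ch in SIMPLIFIED_ONLY for ch in n)]

hits = simplified_media_names(buf.getvalue())
```

Here the embedded image named 图片1.png ("图" is simplified-only; the traditional form is "圖") is flagged, the same kind of signal used to assess the document’s likely origin.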
Throughout this operation, coordinated efforts by inauthentic accounts on Facebook, X, YouTube, and TikTok were observed. Many prominent individuals and influencers reported receiving numerous emails and messages inundated with information about the “Secret History of Tsai Ing-wen.” Additionally, the content was disseminated through lesser-known foreign news media, likely posted by inauthentic accounts on various online forums and communities.
Notably absent were the typical PRC propaganda channels, including PRC state media, Hong Kong media, Weibo influencers, and Facebook pages suspected to be anonymous overseas Fan Pages. The spread of disinformation encompassed various formats, including text and images, mainly appearing on Facebook, X, emails and messages, Wikipedia, low-profile news websites, and major online discussion communities. Brief descriptions claimed that the book unveiled a distorted, sinister, and unknown side of Tsai Ing-wen and provided a link to the original document on Zenodo. Regarding audio-visual content, YouTube and TikTok were the primary platforms, featuring over twenty different videos of AI-generated virtual anchors from Capcut and voices reading different chapters of the “Secret History of Tsai Ing-wen.” The inauthentic accounts involved in spreading this information also exhibited international characteristics, including the use of Simplified Chinese, errors in converting Simplified characters to Traditional ones, and a history of previously spreading anti-American propaganda for Guo Wengui and Li-Meng Yan but having transitioned to disseminating the “Secret History.”
The scope of the operation was extensive, with X alone having over 2,000 pieces of related content. This substantial investment of resources suggests a considerable operation to create a negative impression of Tsai Ing-wen and possibly influence the election outcome.
Case 12: Election Fraud
Around the national elections on January 13, 2024, information concerning vote fraud circulated, casting doubt on the impartiality of the Central Election Commission (CEC) and attempting to shake the confidence of Taiwanese voters in the electoral process. Prior to election day, rumors spread on platforms like X, TikTok, and Facebook alleging that the 8.17 million votes received by Tsai Ing-wen in the 2020 presidential election were fraudulent. Additionally, a Facebook page with ties to China, 兩岸頭條 (Cross-Strait Headlines), shared content from Taiwan’s CTi News, claiming that the new ballot box design facilitated ballot rigging and urging people to monitor polling stations to prevent it. This Facebook page was previously revealed in an investigation by the Ministry of Justice Investigation Bureau to have received funding from Weishi, a Mainland China-based company, for conducting cognitive warfare against Taiwan. Furthermore, on TikTok, influencers who typically shared fashion or beauty content collaborated to post videos with similar scripts and materials before polling day, alleging that the 2020 presidential election was rife with voting fraud and predicting a recurrence in 2024 (see Figure 15). After election day, PRC state media outlets such as 华夏经纬网 (Huaxia Jingwei News), 海峡新干线 (Straits New Main Line), and 通天下 (Tongtianxia) continued to produce and share posts and videos on TikTok questioning the DPP’s alleged ballot rigging.
Summary of Cases
Traces of Chinese or suspected Chinese actors’ involvement were observed in various cases investigated during the election observation period. In the “2024 Taiwan Elections: Foreign Influence Observation Preliminary Statement,” we categorized the themes of the narratives that the PRC either initiated or amplified as follows: livelihood, national defense, public security and diplomacy, economy, culture, and controversial events involving political figures. In this report, we delved into the core narratives that attackers aim to disseminate and explain how each theme morphs into a narrative. For example, behind the theme of livelihood lies the notion that people’s suffering results from ineffective governance. The theme of national defense encompasses Taiwan’s self-defense capabilities and the potential for conflict in the Taiwan Strait, aiming to instill the fear of war among the Taiwanese people. We amalgamated the themes of public security and diplomacy, economy, and culture into the new theme of cross-strait and diplomatic relations. The PRC’s objective in this theme is to leverage cross-strait cooperation to draw Taiwan closer while Taiwan’s collaboration with the international community faces obstacles. Controversies involving political figures entail attacks on political figures and their respective parties. Because the incidents of election fraud occurred in close proximity to election day, they were still under investigation at the time of drafting the preliminary analysis report and thus were not included there; this report categorizes them as issues related to democratic processes. Therefore, the issues outlined in this article include government inefficiency; national defense and the Taiwan Strait war; cross-strait and diplomatic relations; controversies involving political parties and figures; and issues related to democratic processes. The relationship between the cases and their respective issues is detailed as follows:
- Government ineffectiveness: Imported Egg Controversy, Country-of-Origin Labeling Controversy of Imported American Pork
- National Defense and Taiwan Strait War: Indigenous Defense Submarine (IDS), Compulsory Military Service
- Cross-Strait and Diplomatic Relations: Arrival of 100,000 Indian Migrant Workers, China Initiates Trade Barrier Investigation and Partially Halts the Economic Cooperation Framework Agreement (ECFA) Tariff Reductions, China Accuses DPP of Spreading Rumors and Pressuring the Band Mayday
- Controversial Events Involving Political Parties and Figures: Lai Ching-te’s Alleged Violation of Building Codes in Wanli, Sex Scandals of DPP Political Figures, Lai Ching-te’s Alleged Illegitimate Child, Secret History of Tsai Ing-wen
- Issues Related to Democratic Processes: Election Fraud
The cases under the topic of “Government ineffectiveness” aimed to criticize and distort the policies and outcomes of the ruling Democratic Progressive Party (DPP), fostering distrust among the Taiwanese public. The narrative primarily revolved around the DPP’s perceived incompetence in governance and its alleged deception of the populace. For instance, the narrative highlighted claims that the eggs imported by the ruling party were rotten and poisonous, and that the ruling party deliberately misled the public regarding the origins of both eggs and pork.
The cases under the topic of “National Defense and Taiwan Strait War” were primarily aimed at convincing people to abandon the idea of defending Taiwan by force and to stop resisting the PRC. The narratives mainly revolved around portraying Taiwan as incapable of defending itself against PRC attacks and suggesting that the DPP is leading Taiwan into war. This message was spread by highlighting Taiwan’s alleged inability to independently manufacture submarines and the notion that volunteers would be required to go to the front lines of the war.
The cases under the topic of “Cross-Strait and Diplomatic Relations” primarily sought to sow discord in the cooperation between Taiwan and other countries, particularly focusing on Taiwan’s relations with China. The narratives often emphasized the inseparability of Taiwan and China in terms of blood and culture, portrayed the DPP as unable to manage a mutually beneficial relationship between Taiwan and China, and asserted that India is a major perpetrator of sexual abuse. These themes manifested in events such as joint New Year’s Eve performances by singers from China and Taiwan, threats of economic sanctions against the DPP if reelected, and allegations of widespread sexual assault by Indian migrant workers in Taiwan.
The cases categorized under “Controversial Events of Political Parties and Figures” aimed primarily at disparaging and vilifying the Democratic Progressive Party (DPP) and its political figures to instill doubt in the minds of voters regarding their character and competence. The narratives often depicted the DPP as morally compromised, citing instances of numerous DPP politicians involved in scandals, Lai Ching-te’s alleged illegal construction in his hometown, allegations of illegitimate children and multiple mistresses, and claims about Tsai Ing-wen’s undisclosed affairs and dealings.
For cases categorized under the topic of “Democratic Process Issues,” the main objective was to instill doubt among Taiwanese citizens regarding the fairness of the electoral process and its outcomes, and to potentially spark dissatisfaction and offline actions in the event of unexpected results. The narrative suggested that the DPP would secure victory through fraudulent vote-counting and insinuated that the ruling DPP has the capability and intention to manipulate election results, which was supported by various conspiracy theories aimed at undermining the perceived impartiality of the election results.
Threat Intelligence Analysis
Based on the findings from the case studies, we shifted our analytical focus towards the behavior of attackers, conducting a quantitative analysis to delve into the typical behavioral patterns of the PRC’s information manipulation targeting Taiwan (see Figure 16).
To convert the cases into measurable data, we coded the survey results so that they conform to the FIMI, TTP, DISARM, and STIX methodologies. When applying this set of analytical methods, the data hierarchy of information manipulation can be conceptualized into three tiers, corresponding to “point, line, and surface”: observable, incident, and case.
Observable refers to understanding the concrete elements of an incident, such as a post, a news article, or a video. An incident involves actions taken by one or more threat actors to achieve a specific goal, often for deceptive purposes. It comprises a combination of observables and Tactics, Techniques, and Procedures (TTPs). A case serves as a unit of investigation, where information with closely related subject matter is grouped together. Each case may contain one or more Information Manipulation incidents.
For instance, the case of “Lai Ching-te’s Alleged Violation of Building Codes in Wanli” can be subdivided into three distinct incidents based on the timing and theme of the information manipulation: the incident concerning Lai Ching-te’s home in September, his emotional discussion of the illegal construction in November, and the Lai Pei-Liao incident in December. Each incident represents an independent instance of information manipulation, which may be temporally distinct from others or differ slightly in content.
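The three-tier hierarchy can be sketched as nested records; the field names below are illustrative stand-ins, not the actual STIX/DISARM schema used in the coding.

```python
from dataclasses import dataclass, field

@dataclass
class Observable:            # "point": a single post, news article, or video
    url: str
    kind: str                # e.g. "post", "news_article", "video"

@dataclass
class Incident:              # "line": one coordinated action, with its TTPs
    name: str
    ttps: list[str]          # DISARM technique IDs, e.g. "T0087"
    observables: list[Observable] = field(default_factory=list)

@dataclass
class Case:                  # "surface": a unit of investigation
    title: str
    incidents: list[Incident] = field(default_factory=list)

# One case may contain several temporally distinct incidents, each of
# which aggregates its own observables.
case = Case("Lai Ching-te's Alleged Violation of Building Codes in Wanli")
case.incidents.append(Incident("September family-home reports", ["T0087"]))
```

Under this structure, the Wanli case would hold three Incident records (September, November, December), each carrying its own observables and technique codes.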
Following the classification of the 12 significant cases observed in this election, there were a total of 24 distinct foreign information manipulation incidents, encompassing 689 observables (see Figure 17).
Issues Evolve over Time
Among the five issues summarized in the case studies, the issue of controversial events involving political parties and figures demonstrated the highest number of incidents with a total of 9. This was in line with the election context, where information is manipulated to influence the election results or create conflict among citizens. The issue of cross-strait and foreign relations had a total of 6 incidents, while the topic of defense and the Taiwan Strait War had 4 incidents. Similarly, there were 4 incidents under the topic of ineffective government administration. There was only one incident regarding democratic procedures.
The statistics from this survey indicated that before November, the primary issues were ineffective government administration, national defense, and a Taiwan Strait war. However, after November, there was a shift towards more operations focused on cross-strait and foreign relations. From December until the eve of the election, there was a shift away from policy-oriented operations towards direct attacks on political parties or controversial political figures. In the week leading up to and following the election day, questions about the democratic process emerged, which aimed to undermine confidence in the election outcome (see Figure 18).
The Objective of the Attacker
The actors behind information manipulation usually conduct them with certain objectives in mind. By examining the data and techniques used in an incident, it’s possible to infer what the attacker may be trying to accomplish, which can be referred to as a presumed objective. This investigation utilized seven common objectives mentioned in the DISARM framework for analysis, namely Dismiss, Distort, Distract, Dismay, Divide, Facilitate State Propaganda, and Degrade Adversary.
- Dismiss: Resisting criticism, denying accusations, and discrediting sources.
- Distort: Altering perceptions and changing the narrative.
- Distract: Shifting focus or transferring responsibility.
- Dismay: Threatening and scaring off opponents.
- Divide: Instigating conflict and widening rifts within or between groups.
- Facilitate State Propaganda: Mobilizing support for state propaganda and coordinating promotional efforts.
- Degrade Adversary: Undermining an adversary’s reputation or actions through derogatory information.
The most frequent presumed objectives appearing in the incidents of this election were Distort (39.47%) and Degrade Adversary (36.84%), as depicted in Figure 19.
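A distribution like the one in Figure 19 follows from a simple tally over the coded incidents. The objective labels below use the DISARM names, but the counts are toy values rather than the survey’s data.

```python
from collections import Counter

# Toy coding of incidents to presumed objectives; the counts are
# illustrative stand-ins, not the survey's actual data.
incident_objectives = (
    ["Distort"] * 3 + ["Degrade Adversary"] * 3 + ["Divide", "Dismay"]
)

counts = Counter(incident_objectives)
shares = {obj: round(100 * n / len(incident_objectives), 2)
          for obj, n in counts.items()}
```

An incident can also carry more than one presumed objective, in which case the tally runs over objective assignments rather than incidents.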
By clustering issues, we could explore the relationship between issues and presumed objectives to analyze the PRC’s desired objectives when manipulating specific topics.
The primary distortion focused on the arrival of 100,000 Indian migrant workers as it relates to cross-strait and diplomatic relations (see Figure 20). This narrative simplified the statement that “as many as 100,000 Indians” could be hired, as reported by media outlets like Bloomberg, into “100,000 Indian migrant workers are coming to Taiwan.” This reinforced the narratives that “India is a country of sexual assault” and “Taiwan is turning into a rape island” to instigate panic among the Taiwanese people. Another notable attack was directed at the democratic process, where the principle of secret ballots and flaws in the electoral procedures were twisted into a conspiracy theory of election fraud by the ruling party. This aimed to undermine Taiwanese confidence in the fairness of the elections and foster distrust in democracy.
The primary focus of the derogatory remarks against adversaries centered on the government’s ineffective governance and controversies involving political parties and figures. The ruling DPP and its political figures have adopted a firm stance against the PRC, making them the primary targets of PRC attacks. A notable instance occurred in mid-November 2023, when the Taiwan Affairs Office of the People’s Republic of China accused Lai Ching-te and Hsiao Bi-khim, the DPP’s presidential and vice-presidential candidates, respectively, of advocating for a “double-independence combination” and being “more independent than independent.” Additionally, attacks were directed at disputes over imported eggs and the country-of-origin labeling of U.S. pork to portray the DPP as corrupt, evil, or incompetent. Furthermore, personal attacks targeted the alleged moral degradation of DPP politicians.
Target of Attack
Each incident in the survey recorded the target of the attack in the information manipulation. Analyzing this statistical data helped us understand the primary targets of PRC attacks in these operations. For instance, in the dispute over imported eggs, targets included Chen Jizhong, the Ministry of Agriculture, the Food and Drug Administration (FDA), and the DPP.
Among the 24 incidents surveyed, 20 different targets were identified, ranging from countries to organizations and individuals. Given the focus on the 2024 Taiwan election, Taiwan emerged as the most frequently attacked country. Regarding targeted organizations, the DPP was the predominant target, followed by various government departments in Taiwan. Lastly, in terms of individuals, the DPP’s presidential candidate, Lai Ching-te, was the most frequently attacked (see Figure 21).
Information Manipulation Behavior Statistics
The DISARM Foundation has developed the DISARM framework, which builds upon the concept of TTPs to describe the common behaviors observed in information manipulation incidents. By compiling statistics on these techniques, analysts can identify and profile threat actors, better understand their targets, and devise effective response strategies.
Common Techniques
Across the 24 incidents analyzed, a total of 127 different techniques were observed. These techniques often involved the manipulation and amplification of existing narratives and conspiracy theories rather than the creation of entirely new ones. Information manipulation and amplification techniques rely heavily on the widespread sharing or reposting of existing narratives by the public or through inauthentic accounts.
To effectively manipulate information and maximize its impact, techniques often involved identifying existing echo chambers (such as specific Facebook Groups with particular political leanings) and disseminating information within these chambers. Regarding the channels for disseminating information, mainstream social media platforms (such as Facebook and X) were frequently utilized for online dissemination, rather than formal diplomatic channels or traditional media outlets.
The most common types of content disseminated included hashtags and meme images, strategically crafted to achieve specific objectives (see Figures 22, 23, 24).
Notably, in this investigation the predominant method of information manipulation was video content, in contrast with the relatively fewer instances of textual or image-based manipulation observed in the incidents. Specifically, image-based and text-based content were each observed 6 times, whereas video content appeared in 11 incidents with 13 instances, underscoring its significance. Additionally, we identified the use of AI-generated videos in some instances. It’s important to clarify that these numbers reflect the frequency of these techniques within the scope of this investigation and do not represent the overall prevalence of various types of information manipulation on the Internet.
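The distinction drawn here between the number of incidents a technique appears in and its total number of instances can be captured with two tallies; the incident lists below are toy stand-ins for the coded data.

```python
from collections import Counter

# Each inner list holds the content-development techniques coded for
# one incident; one incident uses video twice, so instance and
# incident counts diverge, as with the 13 instances in 11 incidents.
incidents = [
    ["develop_video", "develop_video", "develop_image"],
    ["develop_video", "develop_text"],
]

# Total instances of each technique across all incidents:
instance_counts = Counter(t for techniques in incidents for t in techniques)
# Number of incidents in which each technique appears at least once:
incident_counts = Counter(t for techniques in incidents for t in set(techniques))
```

Here video is counted 3 times as an instance but appears in only 2 incidents, which is exactly the gap between the two figures reported above.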
Common Tactics
Each technique can be categorized under a specific “tactic,” and by analyzing the tactics outlined in the TTPs, we could gain insight into the primary direction of an attacker’s information manipulation. For instance, “Technique T0087: Develop Video-Based Content” falls under “Tactic TA06: Develop Content.” Among the 24 incidents surveyed, the most common tactics included:
Planning Stage:
- TA02: Plan Objectives
- TA13: Target Audience Analysis
Preparation Stage:
- TA05: Microtarget
- TA06: Develop Content
- TA07: Select Channels and Affordances
- TA14: Develop Narratives
- TA15: Establish Assets
Implementation Stage:
- TA09: Deliver Content
- TA17: Maximize Exposure
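The rollup from techniques to tactics described above is essentially a lookup-and-count. The following is a minimal sketch, using a hypothetical subset of the DISARM technique-to-tactic mapping; the incident tags are invented for illustration, not the project's actual coded data:

```python
from collections import Counter

# Hypothetical subset of the DISARM technique-to-tactic mapping;
# the incident tags below are illustrative only.
TECHNIQUE_TO_TACTIC = {
    "T0087": "TA06",  # Develop Video-Based Content -> Develop Content
    "T0086": "TA06",  # Develop Image-Based Content -> Develop Content
    "T0049": "TA17",  # Flood Information Space -> Maximize Exposure
    "T0072": "TA05",  # Segment Audiences -> Microtarget
}

def tally_tactics(incidents):
    """Count how many incidents exhibit each tactic at least once."""
    counts = Counter()
    for techniques in incidents:
        # Deduplicate per incident so each tactic is counted once per incident
        counts.update({TECHNIQUE_TO_TACTIC[t] for t in techniques if t in TECHNIQUE_TO_TACTIC})
    return counts

# Toy example: three incidents tagged with observed technique IDs
incidents = [
    ["T0087", "T0049"],
    ["T0086", "T0072", "T0049"],
    ["T0087"],
]
print(tally_tactics(incidents))
```

Tallying tactics per incident (rather than per observable) is what allows statements like “over half of the incidents showed deliberate use of precision targeting.”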
In the planning stage, attackers define the objectives of their information manipulation and conduct a thorough audience analysis. For instance, in cases like the national submarine corruption and the Ultra Source (超思) procurement controversies, the aim may have been to discredit the government as corrupt, targeting Taiwanese citizens with specific political affiliations or those who distrust the government (see Figure 26).
During the preparation phase, attackers strategically choose channels and media to disseminate information and cultivate community support, including the creation of inauthentic accounts or manipulation of news sites. They develop various forms of content — text, images, or videos — to shape existing narratives or create new ones. Notably, over half of the incidents from the PRC’s operations showed a deliberate use of precision targeting to maximize the impact of their messaging. For example, attackers obtained access to a network of inauthentic accounts to propagate information about illegal construction in Lai’s hometown within specific online communities.
In the execution phase, attackers employ several tactics to amplify narratives and increase exposure to their content, such as posting across multiple platforms and leveraging inauthentic accounts for mass dissemination. A notable observation was that online discussions could facilitate offline activism, as seen in the “arrival of 100,000 Indian migrant workers” case, where online discourse led to offline protests.
Change of Techniques
A comparison between Doublethink Lab’s 2022 and 2024 election observations revealed the continued use of certain tactics. For instance, shared tactics included amplifying existing conspiracy theories or narratives, operating on mainstream social media platforms with inauthentic accounts, and targeting echo chambers to spread information. Additionally, some tactics have evolved since 2022, and others were less prevalent.
First, content generated by artificial intelligence (AI) was widespread. In this survey, many videos featured virtual anchors from CapCut, a video editing product under ByteDance, reading scripts. Instances of using deepfake technology to create video content were also observed (see Figure 27). Moreover, while some videos did not feature virtual anchors, they often used synthesized voiceovers that were distinctly recognizable as AI-generated.
Second, with the advancement of AI technology, the production cost and time of videos have decreased, leading to changes in how content creation techniques are selected. While past instances of information manipulation primarily relied on textual content and meme images, this survey revealed that producing video content has become a mainstream technique and one of the most popular types of content among attackers.
Information Channel
Information is disseminated through various channels such as social media platforms, online news websites, and video-sharing platforms. This project documented and classified 496 distinct dissemination channels. Among these, social media platforms (such as Facebook) accounted for 78% of all observed data. Websites followed at 10% of the total, while video-sharing platforms (including TikTok, Douyin, and YouTube) made up 7% (see Figure 28).
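The channel-share percentages above reduce to simple proportions over per-category observation counts. A minimal sketch with invented counts (the report does not publish raw per-category tallies, so these numbers are chosen only to reproduce the stated percentages):

```python
# Illustrative only: per-channel-type observation counts are invented;
# the project's real dataset comprised 496 distinct channels.
observations_by_type = {
    "social_media": 780,
    "website": 100,
    "video_platform": 70,
    "other": 50,
}

total = sum(observations_by_type.values())
shares = {k: round(100 * v / total, 1) for k, v in observations_by_type.items()}
print(shares)
```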
Furthermore, 23% of the observed channels were directly controlled by PRC officials, including Chinese Communist Party (CCP) and PRC government organizations, official media of the Chinese central government, local media, and Hong Kong media. PRC officials played a significant role in the information manipulation process by disseminating controversial messages.
However, it’s important to note that the proportion of platform use that we observed does not necessarily reflect that of the entire information landscape. These proportions were solely those for the channels observed in our research.
Information Manipulation Behavior and Societal Attitudes in Taiwan
During the election observation, the project conducted an investigation into foreign information manipulation to understand the techniques employed by the PRC against Taiwan. As part of the project, a nationwide questionnaire was administered 3 to 12 days prior to election day to gauge the societal attitudes in Taiwan towards certain narratives and conspiracy theories related to information manipulation (see Figure 29).
The questionnaire covered essential voter information such as political party affiliations, voting preferences in the presidential election, and media consumption habits. It asked respondents whether they agreed with common disinformation narratives and conspiracy theories. Furthermore, by using sampling and weighting, we could extrapolate the survey data to paint a picture of the social landscape in Taiwan. For a detailed analysis of the questionnaire results, please refer to “2024 Taiwan Election: The Increasing Polarization of Taiwanese Politics — Reinforcement of Conspiracy Narratives and Cognitive Biases.”
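The sampling-and-weighting step mentioned above can be illustrated with a toy post-stratification calculation. The age strata, population shares, and responses below are all invented, and the survey team's actual weighting scheme may differ:

```python
# Minimal post-stratification sketch: reweight respondents so the sample's
# demographic mix matches known population shares. All numbers are invented
# and do not reflect the survey team's actual weighting scheme.
population_share = {"18-39": 0.35, "40-59": 0.35, "60+": 0.30}

# Each respondent: (age_group, agrees_with_narrative)
sample = [
    ("18-39", False), ("18-39", False), ("18-39", True),
    ("40-59", True), ("40-59", True),
    ("60+", True), ("60+", False), ("60+", True), ("60+", True), ("60+", True),
]

n = len(sample)
sample_share = {g: sum(1 for s, _ in sample if s == g) / n for g in population_share}
# Weight = population share / sample share, per stratum
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Weighted estimate of agreement, correcting for over/under-sampled groups
weighted_agree = sum(weights[g] for g, a in sample if a) / sum(weights[g] for g, _ in sample)
print(round(weighted_agree, 3))  # → 0.707
```

Here the over-sampled 60+ group is down-weighted and the under-sampled younger group is up-weighted, so the weighted estimate better approximates the population-level attitude than the raw sample proportion.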
First, the questionnaire employed an open-ended question: “Has any issue over the past year changed your original party preference or voting intention for the 2024 presidential election?” This question gauged which issues influenced respondents’ voting attitudes. Many of the issues raised were ones that the PRC’s information manipulation has used or amplified, particularly vaccine-related conspiracy theories and narratives, which were the issues most frequently mentioned by KMT and TPP supporters. During this year’s election observation, the PRC was considerably involved in manipulating issues such as corruption, war, egg imports, national defense, and military service; respondents also mentioned issues related to cross-strait relations. KMT supporters cited corruption, war, egg imports, national defense, and military service; DPP supporters focused on national defense and corruption; and TPP supporters mentioned egg imports and cross-strait relations. This outcome underscores that, through in-depth research into sensitive political, economic, and cultural issues in Taiwanese society, the PRC can effectively disseminate controversial messages on topics that matter to the Taiwanese people (see Figure 30).
Second, the questionnaire asked respondents whether they agreed with some common narratives and conspiracy theories spread through information manipulation, such as through the following questions:
- Some people say, “The Taiwanese government benefits specific companies, providing citizens with subpar vaccines and contaminated eggs.” Do you agree or disagree with this statement?
- Some people say, “The current ruling party/Democratic Progressive Party (DPP) has a severe problem with political pork barrel and corruption.” Do you agree or disagree with this statement?
- Some people say, “The ruling party’s defense policy is to use young people as sacrificial lambs.” Do you agree or disagree with this statement?
- Some people say, “In terms of strengthening the resolve of the population to resist enemy aggression, the current Taiwanese government lacks action and preparations in defense.” Do you agree or disagree with this statement?
According to the analysis of the questionnaire results, both KMT and TPP supporters tended to agree with the proposed narratives and conspiracy theories. Conversely, supporters of the DPP tended to disagree with the narratives and conspiracy theories (see Figure 31). Additionally, based on these findings and those from Doublethink Lab’s ongoing monitoring, it’s evident that the PRC continues to target KMT supporters as the primary audience for its information manipulation. These operations often involve spreading manipulated information within the KMT’s social circles. Furthermore, such information was circulated within the Facebook communities of TPP supporters, whose party was the third largest in the election.
We found that the results of the questionnaire analysis and those of the case study were correlated. The results of the case studies indicated that KMT and TPP supporters were the targeted audiences for the PRC’s information manipulation, while the questionnaire results suggested that these supporters tended to agree with the narratives and conspiracy theories associated with the PRC’s operations. Although the exact causal relationship remains to be clarified, the correlation suggests that those targeted by the PRC’s information manipulation are more likely to align with the narratives and conspiracy theories amplified by the PRC’s operations during the elections.
Third, the questionnaire surveyed how respondents consume political information through two questions: “In the past year, from which channels have you most frequently obtained information about political or public issues?” and “The political information you receive from [the previous question], for the most part, comes from whose perspective?” According to the analysis, individuals who frequently obtained political information from online audio-visual content creators, social media influencers, or Fan Pages tended to agree with the narratives and conspiracy theories. Meanwhile, as indicated by the earlier statistical analysis, social media platforms and video-sharing platforms together hosted 85.99% of the observed data, underscoring their importance as dissemination channels in the PRC’s information manipulation (see Figure 32). While this does not imply a direct causal relationship, it suggests that the PRC strategically utilized the most effective channels to propagate narratives and conspiracy theories, and that individuals who frequently relied on these channels for political information were more inclined to agree with the narratives and conspiracy theories amplified by the PRC’s manipulation during the election.
In summary, the findings from the questionnaire survey indicate that the specific issues amplified by the PRC’s operations aligned with the concerns of voters. Particularly, the issues considered important by supporters of certain political parties closely matched those manipulated by the PRC. When delving deeper into the political demographics of those who tended to agree with the narratives and conspiracy theories, we found that both party affiliation and information consumption habits were correlated with agreement with manipulated content. The case study results further highlight that such party preferences and information consumption habits aligned with the targeted audiences and channels of information dissemination in the PRC’s operations. While there’s no direct evidence establishing a causal relationship between the PRC’s information manipulation and its impact, there is a discernible correlation between these operations and the attitudes of Taiwanese citizens.
The divergent perspectives among various groups in Taiwanese society, which are affected by political stances and information consumption habits, resemble a multiverse. This phenomenon may not be ephemeral. Although operations related to vaccines primarily occurred in 2020 and 2021, concerns about vaccines remained the most salient concern for supporters of specific political parties. The enduring impact of the PRC’s sustained investment of resources in and amplification of particular information to polarize Taiwanese society may not be definitively understood through short-term studies of one or two elections. Rather, comprehensive, longitudinal studies are necessary to elucidate the effects of such manipulative actions on society.
Conclusion
In response to the challenge posed by information manipulation and election interference from authoritarian states, Doublethink Lab launched the “2024 Taiwan Election: Foreign Influence Observation Project” during the 2024 Taiwan Presidential Election. Collaborating with other civil society partners, we aimed to address these challenges by pooling and sharing information resources.
Throughout the observation period, we diligently monitored daily information updates through human observation and recording, accumulating a total of 10,629 pieces of suspicious information. Using these findings as a basis, we directed our investigative efforts toward the 12 most significant cases of foreign information manipulation. These investigations encompassed five key issues: ineffective governance, national defense and Taiwan Strait tensions, cross-strait and external relations, controversies surrounding political parties and figures, and concerns regarding democratic procedures.
The data from our investigation was then used to create structured data sets of foreign information manipulation by using concepts and frameworks similar to those utilized by the European External Action Service (EEAS). This facilitated the consolidation and systematic archiving of the investigation results. The dataset comprised 24 distinct incidents of information manipulation with 689 observables. By using the DISARM framework, we conducted an analysis of the behaviors exhibited in these 24 incidents and identified a total of 127 different techniques. Information was disseminated through 496 channels. Notably, 23% of these channels were under the control of PRC authorities, which encompassed CCP and government organizations, official media of the Chinese central government, local media, and Hong Kong media.
Observational statistics indicated a shift in the main issues targeted by information manipulation over time. Before November, the focus was primarily on the government’s ineffective governance and the potential for conflict in the Taiwan Strait. However, after November, there was a transition to more operations concerning cross-strait and diplomatic relations. From December onwards, there was a departure from policy-related operations, with a shift towards direct attacks on political parties and figures. In the period leading up to the election, particularly the week before election day, narratives doubting the democratic process began to appear, which served to undermine confidence in the election results.
The analysis revealed that the PRC’s information manipulation frequently aimed to distort narratives and degrade adversaries, particularly those who support Taiwan’s sovereignty. By distorting narratives, the PRC attempted to instigate fear among the public regarding law and order while fostering distrust in the democratic system. Additionally, political adversaries were attacked to undermine the effectiveness of the ruling party and discredit DPP politicians, which aimed to shape public perception and influence voting behavior. According to the statistics of targets attacked by information manipulation, Taiwan emerged as the primary target, with the DPP being the most frequently attacked organization. Government departments in Taiwan were also targeted. Notably, Lai Ching-te, the presidential candidate of the DPP, was the most frequently attacked political figure. In summary, the PRC’s information manipulation focused on distorting narratives and degrading political adversaries to influence public opinion and voting behavior, particularly targeting those who advocate for Taiwan’s sovereignty.
In the preparation phase, several tactics were employed to effectively disseminate harmful messages to the target audience. This involved carefully selecting channels and media for delivering messages and establishing a supportive online community, which may have been accomplished by creating inauthentic accounts or manipulating stories from news websites. These harmful messages were disseminated through text, images, or videos and were often accompanied by new or existing narratives. During the execution phase, attackers amplified these narratives and messages. Various techniques were employed to increase the visibility and impact of manipulated information, extending its reach to a wider audience. Additionally, online discussions may have been manipulated to incite offline protests or other forms of social unrest, further advancing the objectives of the operation.
The evolution of AI technology has significantly impacted the landscape of information manipulation, particularly in the realm of content production. The reduced cost and time associated with video production have made it a more accessible and attractive option for those seeking to manipulate information. While text and meme-based content have traditionally been prevalent in information manipulation, the increase in video content as a primary technique has signaled a shift in strategy. AI’s ability to assist or directly generate video content has opened up new avenues for attackers to manipulate information. This trend raises important questions about the future trajectory of information manipulation and the potential implications for online discourse and societal perceptions. Understanding how AI-generated content is used in information manipulation and at what stages of the process it is employed will be crucial for addressing emerging challenges in this domain.
Combining the OSINT case investigation on foreign information manipulation with the nationwide telephone survey provided a comprehensive approach to understanding the actions and effects of the PRC’s information manipulation. The OSINT investigation was conducted to analyze the tactics and strategies employed by the PRC in its information manipulation and provide insights into their intentions and methods. The data collected from the national telephone survey provided a broader perspective on the impact of the PRC’s information manipulation on Taiwanese society and was used to directly assess public sentiment and perceptions. By analyzing and cross-referencing the results of both surveys, the researchers could gain a more nuanced understanding of how PRC information manipulation attempts to influence public discourse, attitudes, and behaviors in Taiwan. This integrated approach enabled a holistic assessment of information manipulation and its implications for democratic processes and societal stability.
The survey analysis revealed that KMT supporters prioritized issues such as corruption, war, egg imports, national defense, and military service when casting their votes. These issues aligned with those amplified by the PRC’s election operations, as identified in the case studies, and KMT supporters may have been influenced by information on these issues circulated by the PRC’s operations. On these issues, TPP supporters were closer to neutral but shared the same tendencies as KMT supporters. These audiences may be inclined to agree with the narratives or conspiracy theories propagated by the PRC, which could be a consequence of the PRC’s efforts to amplify such issues.
The comparison of the analysis results yielded two notable correlations: (1) the viewpoints held by the target audience of the PRC’s operation tended to align with the narratives and conspiracy theories amplified by the PRC during the election season, and (2) the PRC selected effective channels to spread narratives and conspiracy theories. Those who frequently used these channels to obtain political information also tended to agree with the narratives and conspiracy theories amplified by the PRC. Therefore, significant associations between information manipulation and the attitudes of Taiwanese voters were identified. However, the causal relationship between the two requires further research.
Recommendations for countering foreign information manipulation during election periods
To counter foreign information manipulation during elections, it is crucial to take comprehensive and specific measures so that all stakeholders can effectively participate in and contribute to defense. We provide detailed recommendations for various stakeholders below.
Legislation
- Regulating information manipulation: Platforms need to establish more specific measures for identifying and managing suspicious accounts with coordinated behavior observed during elections. The legislature could pass legislation requiring platforms to set up a traceable mechanism for removing such accounts and disclose revenue flow when companies profit from inauthentic accounts and inaccurate comments they post. This would enable the government to more effectively monitor the behavior of platforms and companies, establish traceability of inauthentic behavior, and establish rules of compliance for companies.
- Revealing the revenue flow: For narratives surrounding Indian migrant workers and election fraud, we observed multiple influencers using an identical or similar script. We suspect that a scriptwriter may be operating behind these internet celebrities to manipulate public opinion. Therefore, the legislature can enact a bill requiring public relations firms, netroots, and social media platforms to disclose their sources of funding and expenditures. Consequently, the source of financial support behind foreign information manipulation can be more transparently tracked and revealed, which could prevent foreign forces from manipulating information through such entities.
Administration
- Enhancing Continuity of the Targeted Response Mechanism: Our research indicates that foreign information manipulation often amplifies specific narratives and targets specific groups. Therefore, the executive branch should enhance the continuity of response mechanisms, not only addressing incidents when they occur but also providing targeted clarification messages to different groups. This approach can increase the chances of affected groups receiving truthful narratives.
- Establishing Effective Regulations: The executive branch should develop regulations for online platforms, influencers, and public relations firms that are effective, enforceable, and comparable to those covering traditional media. These regulations should cover areas such as financial transparency, content quality, information dissemination, and user privacy protection to ensure that public interest is respected and information is made transparent.
- Increasing Civic Engagement in Public Affairs: The government should encourage and facilitate greater citizen participation in public affairs to counter the influence of false narratives such as conspiracy theories. Through educational campaigns and public outreach, the government can raise awareness and improve understanding of political, social, and economic issues, enabling citizens to approach information more rationally and improve their ability to discern inaccuracies. Additionally, the government should provide more avenues and platforms for citizens to engage in policy-making and deliberation. Such actions could bolster public trust and participation in policy-making processes and enhance citizens’ resilience against false narratives.
Political Parties
- Focusing on Democratic Politics and Healthy Competition: Our research has highlighted the significant polarization among supporters of different political parties on various issues. To address this, political parties should prioritize espousing democratic principles and fostering healthy competition within the political landscape. They should also engage in policy discussions with rationality and objectivity, steering clear of extreme or divisive rhetoric on social media platforms. By setting a positive example, parties can contribute to a more constructive political environment.
- Raising Awareness of Information Manipulation: Political parties must remain vigilant against foreign interference and information manipulation, particularly from the coordinated efforts of anonymous accounts and the dissemination of fake news. Instead of dismissing or attributing all discourse to external manipulation, parties should engage in transparent discussions with the public, addressing concerns and fostering informed debates. Increasing public awareness about information manipulation can empower individuals to discern and counter misinformation, which would further safeguard the integrity of the political process.
- Preventing Harm to National Interests from Conspiracy Theories: Political parties and leaders must be mindful of the potential damage conspiracy theories can have on national interests, including diplomatic relations and defense. Racially discriminatory or unfounded remarks, such as those made against Indian migrant workers, can tarnish the country’s international reputation and jeopardize coordinated efforts with other nations. Similarly, false claims of election fraud can erode public trust in democratic institutions. By promoting reasoned and factual discourse while refraining from spreading baseless rumors, parties can mitigate risks to national interests and foster a climate of trust and cooperation.
Civil Society Organizations (CSOs)
- Promoting Effective Communication: CSOs can play a crucial role in promoting effective communication strategies across various fronts. On the policy front, collaboration with researchers can lead to policy recommendations targeted at policymakers, legislators, and practitioners. Establishing robust communication channels and utilizing common research frameworks can facilitate the timely sharing of threat intelligence. Additionally, incorporating quantitative methods can improve understanding of foreign operation patterns and trends. Producing easily understandable case studies and organizing seminars, workshops, and publicity events can empower individuals targeted by such operations by raising awareness and effectively countering the spread of conspiracy theories and misinformation.
- Enhancing International Cooperation: Given the escalating geopolitical tensions, there’s a growing concern that information manipulation could target international cooperation efforts. CSOs should advocate for stronger collaboration internationally. This involves active participation in international organizations, fostering partnerships and exchanges with citizens’ groups and research institutions in other countries, and sharing relevant information to mitigate attacks on international cooperation efforts. By taking a coordinated approach, stakeholders can strengthen their defenses against external threats and safeguard global stability and cooperation.
Fact-checking Initiatives
- Establishing a Mechanism for Compiling Experience and Analyzing Patterns: Fact-checking organizations can compile past experience and establish mechanisms to continuously collect, organize, and analyze incidents and patterns of foreign information manipulation. With such mechanisms in place, organizations can more accurately identify and respond to attacks from foreign entities.
- Expanding Cross-Sector Cooperation: Fact-checking organizations should continue to expand cooperation with other relevant entities, including civic groups, experts, media outlets, and social media platforms. By establishing mechanisms for collaboration and information sharing, they can collectively address foreign information manipulation and strengthen their responses.
- Introducing Information Forensics Tools: To combat the growing threat of misinformation, including deepfakes and AI-generated content, fact-checking organizations should adopt advanced information forensics tools for tracing, analyzing, and responding to foreign information manipulation. These tools can improve the speed and accuracy of identifying and mitigating misinformation, which can ultimately bolster resistance to foreign interference.
Media
- Enhancing Journalists’ Professionalism: Media outlets can better train journalists by focusing on skills such as fact-checking and identifying information manipulation, which would ultimately improve the credibility and quality of reporting. Journalists should be adept at recognizing common tactics like conspiracy theories and traffic generated by inauthentic accounts. Moreover, journalists could draw from past cases and trends of foreign information manipulation to ensure they deliver accurate and reliable news.
- Regulating Communication on Social Media Platforms: Media organizations should prioritize responsible communication on social media platforms by instituting guidelines for community editors to prevent the spread of emotionally charged or incomplete content. By doing so, they can accurately and objectively disseminate information across these platforms.
Social Media Platforms
- Blocking the Infiltration of Inauthentic Accounts: Community platforms should intensify their monitoring and blocking of inauthentic accounts and bots used in foreign information manipulation. This would help prevent the manipulation of information and its influence on public opinion through inauthentic accounts. Additionally, such actions could combat illegal activities such as online fraud. Strengthening identity verification mechanisms and employing technologies like machine learning can enhance the platforms’ ability to detect and remove inauthentic accounts.
- Establishing a Joint Defense System: Community platforms can establish partnerships with government agencies, CSOs, academic institutions, and other relevant stakeholders to create a unified defense system against misinformation. This coordinated effort can involve sharing threat intelligence, best practices, and technical tools to better combat the spread of disinformation.
- AI Labeling: We found that AI and deepfake technologies were heavily used to produce election-related content, which could mislead voter perceptions and undermine the integrity of the election process. To address this, social platforms can label AI-generated and deepfake content to help users identify manipulated material and enhance the transparency of information. Such labeling can mitigate the impact of disinformation on voters, safeguarding the freedom and fairness of elections.
These recommendations and actions can help stakeholders form a joint defense force, which would effectively counter the threat of foreign information manipulation during elections.
Thank you to our partners (in alphabetical order):
- Austin Wang, Assistant Professor, Department of Political Science, University of Nevada
- Billion Lee, Co-founder, Cofacts
- CAPI System Development Team, Taiwan Institute for Governance and Communication Research (TIGCR), National Chengchi University
- Chat for Taiwan
- Chung-Pei Pien, Assistant Professor, International College of Innovation, National Chengchi University
- Cosmopolitan Culture Action Taichung
- Economic Democracy Union
- Kuma Academy
- Li-Hsuan Cheng, Professor, Department of Sociology, National Chengchi University
- MyGoPen
- Taiwan Media Watch Foundation
- Taiwan Youth Association for Democracy
- TeamT5
- Yi-Hsiang Shih, Taiwan Association for Human Rights (TAHR)
- Yi-Ting Wang, Associate Professor, Department of Political Science, National Cheng Kung University
- Yu-Ru Lin, Associate Professor, and Ahana Biswas, PhD student, PITT Computational Social Dynamics Lab, University of Pittsburgh’s School of Computing and Information
- Zonghong Lin, Research Fellow, Institute of Sociology, Academia Sinica
Appendix
Appendix I. Analysis of Suspicious Information
During the observation period from October 1, 2023, to January 31, 2024, a total of 10,629 pieces of suspicious information were recorded. Not every piece was necessarily involved in information manipulation; however, analyzing them still helps researchers understand major controversies and trends in current public discourse.
Issues Related to Suspicious Information
After identifying, recording, and compiling suspicious information, we categorized it into the following key issues: inevitable reunification across the strait, cross-strait and diplomatic relations, national defense and Taiwan Strait warfare, criticism of the Democratic Progressive Party (DPP), ineffective governance, controversial events involving political figures, problems with the democratic procedure, and U.S. skepticism. Information on these issues shows the following trends: before December, suspicious information had a broader scope and focused on more issues, including the inevitable reunification across the strait, criticism of the DPP’s governance and the party as a whole, national defense and Taiwan Strait warfare, and skepticism of the United States; after December and as election day neared, the focus shifted to controversial events involving individual political figures, questioning democratic procedures, and issues related to cross-strait and diplomatic relations.
Channels of Suspicious Information
In this project, we recorded the channels of suspicious information, including news websites, social media platforms, and video content, totaling 938 distinct entities. After classifying the channels according to indicators such as origin (whether from the PRC or elsewhere abroad), anonymity, coordination, and the authenticity of personal accounts, we sorted them into six categories. Channels not belonging to any of these six were classified as “Other.”
- PRC State Media (6.6%): Accounts identified based on the list announced by the Chinese National Internet Information Office, as well as media directly affiliated with central or local PRC entities.
- Chinese Commercial Media (0.3%): Chinese media whose organizational structure is not directly affiliated with central or local PRC entities.
- Anonymous Overseas Fan Pages / TikTok Accounts / YouTube Channels / X Accounts / Weibo Influencers (15.9%): Accounts that do not reveal the user’s real identity and whose administrator location, as indicated on the Fan Page or displayed elsewhere, is not in Taiwan.
- Suspected Anonymous Overseas Fan Pages / TikTok Accounts / YouTube Channels / X Accounts (1.7%): Accounts that do not reveal the user’s real identity. Although the administrator’s location is shown as Taiwan on the Fan Page or is not displayed, the account’s bio and content contain many Simplified Chinese characters, Chinese terms, misspelled names, and views that align with PRC standpoints.
- Anonymous Taiwanese Political Commentary Fan Pages / TikTok Accounts / YouTube Channels / X Accounts (24.8%): Accounts that do not reveal the user’s real identity, but for which no overseas location, Simplified Chinese characters, or language errors were detected. Their posts involve Taiwanese politics or edited clips of opinions from influential figures in Taiwan.
- Inauthentic Accounts / Coordinated Accounts / Highly Suspicious Anonymous Accounts (23.5%): Accounts that do not reveal the user’s real identity and disguise themselves as regular individuals. They may further engage in coordinated behavior to publish articles or comments, influencing public perception of specific issues.
- Taiwanese Influencers / Taiwanese Media / Taiwanese Politicians (16%): Accounts belonging to Taiwanese influencers, media, or politicians. Such accounts do not necessarily participate in information manipulation voluntarily or actively, but they may share suspicious information by joining discussions on certain topics.
- Other (11.2%): Accounts not falling into the above categories, including some individual accounts that are difficult to confirm as inauthentic and foreign media not affiliated with China or Taiwan.
Appendix II. Investigation of Suspicious Groups
Several suspicious groups were identified when pursuing the aforementioned cases of foreign information manipulation. We conducted in-depth investigations on those that were or were suspected to be from outside the country. Additionally, we identified the profiles and behavioral patterns of the suspicious groups.
Cambodia Coordinated Behaviour Group
Since late October 2023, a Cambodian syndicate operating on Facebook has sought to manipulate public opinion in Taiwan and influence the election’s outcome. Its strategy has involved tapping into existing issues and narratives within Taiwanese society, exacerbating social tensions by injecting resources, and spreading controversial election-related information across Facebook Groups. By weaponizing foreign Facebook Fan Pages and inauthentic accounts, the group employs tactics similar to those observed in the dissemination of COVID-19 misinformation in 2021 and in attempts to sway local elections in 2022. This foreign group demonstrates a nuanced understanding of Taiwan’s prevailing public sentiment and has adeptly appropriated local vernacular and exploited domestic issues to foment discord. It has even adapted popular Taiwanese songs to disseminate its contentious narratives. However, the group’s content occasionally betrays its origins through Simplified Chinese characters, and some of its agendas serve the PRC’s interests rather than reflecting the typical political discourse among Taiwan’s parties; such agendas include casting doubt on democracy and maligning Taiwanese human rights activists imprisoned by the PRC. Notably, none of the administrators responsible for disseminating this vast amount of information are based in Taiwan; the majority come from Cambodia, with others from China and Malaysia. The group’s modus operandi consists of three primary steps: fabricating misinformation, distributing it through Facebook Fan Pages, and amplifying its reach by sharing it through numerous inauthentic accounts in social media groups. Through its operations in late 2023, this Cambodian syndicate leveraged existing narratives in Taiwan to target the DPP and its presidential candidates and to propagate U.S. skepticism, all in an effort to push Taiwan’s public opinion towards an outcome favorable to the PRC.
Hashtag Group
In this election, we witnessed numerous instances of inauthentic accounts coordinating their activities on Facebook and using identical hashtags to manipulate discussions of specific issues. These included controversies such as the imported egg dispute, country-of-origin labeling of U.S. pork, unauthorized construction at Lai Ching-te’s family home, sex scandals in the DPP, the purported arrival of 100,000 Indian migrant workers in Taiwan, investigations into trade barriers and the ECFA, volunteer service on the battlefield, and allegations of Hsiao Bi-Khim acting as a U.S. army supervisor. Graphika’s comprehensive report delves deeply into this phenomenon, identifying over 800 Facebook accounts, 13 Facebook Fan Pages, 1 TikTok account, and 1 YouTube channel involved in these activities. Many of these accounts have already been suspended by social media platforms, further evidence of their deceptive nature. Graphika’s analysis indicates that these information manipulation efforts predominantly favored the KMT while targeting the DPP and TPP. Our own observations align with this conclusion: the primary targets of these coordinated efforts were the DPP, followed by the TPP, with occasional praise directed towards KMT candidate Hou Yu-Ih.
Suspicious Anonymous Foreign Facebook Pages
During our observation period, we encountered a group of suspicious Fan Pages suspected to have origins outside of Taiwan. Although their administrators’ listed locations were in Taiwan, these pages frequently posted content containing Simplified Chinese characters and terms commonly used in China, and some administrators had records indicating Chinese origins. These pages often shared engaging graphics commenting on Taiwan’s current events and distributed them widely within Taiwan’s public online communities. According to Facebook’s publicly available information, all administrators of these Fan Pages claimed to be from Taiwan and presented themselves as exercising free speech when critiquing the government and its policies. However, upon closer scrutiny, there is a strong likelihood that these Fan Pages originated from outside Taiwan and were manipulated by entities in China seeking to influence public opinion within Taiwan. Several indicators support this conclusion. First, these Fan Pages occasionally featured Simplified Chinese characters, likely stemming from errors in text conversion. Second, the language and idioms used on these pages occasionally included phrases uncommon in Taiwan but typical in various regions of China. Lastly, our long-term observation revealed that some of these Fan Pages, initially administered by individuals from China, gradually replaced their administrators with individuals claiming to be from Taiwan. This tactic may be an attempt to evade Facebook’s scrutiny regarding the location of Fan Page administrators, but the content disseminated by these pages still serves the interests of the PRC.
Footnotes
- The term “Peach Blossom Culture” (桃色文化) here is used to characterize instances involving politicians engaging in extramarital affairs or participating in morally questionable sexual activities, including activities related to the solicitation of sex workers.
- In Simplified Chinese, both “hair” (头发) and “find” (发现) share the character “发.” However, in Traditional Chinese, “hair” is represented by “頭髮,” and “find” is written as “發現,” indicating a distinction between the characters in the two writing systems.