Watch Our Neighbourhood: Collaborative Approaches to Address FIMI during the India Election
Jerry Yu / Senior Analyst, Digital Intelligence Team
Key Points
- During India’s 2024 elections, Doublethink Lab collaborated with fourteen partners from the Indo-Pacific region, conducting observations and investigations on potential Foreign Information Manipulation and Interference (FIMI) incidents across platforms from May 25, 2024, to August 31, 2024.
- The PRC employed strategies to undermine Indian democracy, divide democratic allies, and discredit adversaries, leveraging social media platforms, PRC state media, PRC state actors, and influencers to propagate anti-India and anti-Modi narratives.
- A batch of inauthentic accounts on X, formerly Twitter, was created close together in April and May 2023; these accounts followed a pro-China account that had shared a video promoting the “Manipur India known as small China” narrative.
- Three main themes were identified based on the dataset of incidents during the election, primarily focusing on undermining Indian democracy, national security and defence issues, and US scepticism.
- 19 pro-PRC originators posted on X, formerly Twitter, about three topics: (1) the India-China border clash; (2) India’s heatwaves; (3) the sale of BrahMos missiles to the Philippines. Subsequently, 91 bot accounts with limited followers retweeted at least two of the three topics.
Introduction
During India’s 2024 elections, held between April 19 and June 4, Doublethink Lab collaborated with numerous organisations and individuals, including researchers, scholars, and journalists from India, Malaysia, the Philippines, Australia, and Japan. Together, we coordinated the India Election Observation (IDEO) project targeting Foreign Information Manipulation and Interference (FIMI)[1] cases during India’s elections. To increase the capacity of each partner, the project organised an eleven-hour training course from May to June 2024, in which Doublethink Lab and five mentors in the Indo-Pacific region trained a total of nine partners. The European External Action Service’s (EEAS) Strategic Communications division provided technical support for this project, and specific training was given by the DISARM Foundation. The course covered core knowledge including India’s political context, PRC social media platforms, the hierarchy and ranking system of PRC state media, Open-Source Intelligence (OSINT) skills and tools, the DISARM framework, and the FIMI threats framework from the EEAS.
With nine partners and five mentors, we collectively conducted observations and investigations on potential FIMI incidents regarding India’s elections across platforms including Facebook, X (formerly Twitter), Weibo, YouTube, WeChat, and the websites of blogs and news media outlets, from May 25, 2024, to August 31, 2024. The indicators we used to flag potential FIMI incidents included:
- Disinformation or conspiracy theories regarding India’s elections or political issues.
- Existing narratives propagated by PRC-related actors.
- Involvement of PRC-related actors.
- Coordinated inauthentic behaviour (CIB).
- Trending topics and extraordinarily high interaction counts, including likes, shares, and views.
- Involvement of suspicious accounts already under tracking.
- Potentially negative impact on domestic society.
- Narratives intended to create division among political allies.
The project involved a total of 14 participants, including four from India, four from Malaysia, three from the Philippines, two from Australia, and one from Japan. The participants were arranged into two roles: mentors and new partners. The mentors were partners with whom we had collaborated in previous projects and who had experience in investigating and documenting FIMI incidents. They were invited to provide training to the new partners and to lead teams in conducting investigations related to FIMI incidents. The new partners, on the other hand, were recommended by the mentors to join the observation team, where they received investigation training and conducted the investigations.
Analysts from Doublethink Lab were responsible for planning the project schedule, organising the training curriculum, tracking the progress of the investigations, and providing guidance to the investigations. At the end of the project, analysts also integrated the findings from each team and published the final report.
Methodology
In this project, we observed multiple media platforms — Facebook, X (formerly Twitter), Weibo, YouTube, WeChat, and the websites of blogs and news media outlets. The published dates of content we collected covered a six-month period, from January 1, 2024, to June 30, 2024, across all of these platforms. A total of 346 observables and 325 channels were recorded during this period.
Nine newly recruited partners, guided by five experienced mentors, conducted systematic observations on the targeted platforms. Data collection involved keyword-based searches[2] for relevant content. From May 25 through June 26, 2024, suspicious content was manually recorded by the team according to the indicators. Partners recorded suspicious content through online forms, which were subsequently aggregated into a master sheet of around 65 observables. After discussion between mentors and Doublethink Lab’s analysts, the observables were classified into several topics, including: (1) “Modi admits he is sent by God to win the election,” (2) “Modi wishes to surrender on the China-India border issues to maintain peace with China,” (3) “Modi stokes China-India border tensions to gain votes in the election,” (4) “The democracy of India is declining under Modi’s ruling,” (5) “The US is dividing India,” (6) “India’s missiles are useless,” and (7) “India disrupts the South China Sea situation.”
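The keyword-based flagging step can be sketched as follows. The keyword subset and sample post below are hypothetical illustrations (the project’s actual keyword list is in Appendix 1):

```python
import re

# Hypothetical subset of search keywords; the project's full list is in Appendix 1.
KEYWORDS = ["Modi", "India-China border", "BrahMos", "largest democracy"]

def flag_content(text, keywords=KEYWORDS):
    """Return the keywords found in a piece of content (case-insensitive)."""
    return [kw for kw in keywords
            if re.search(re.escape(kw), text, re.IGNORECASE)]

post = "So-called largest democracy in the world struggles with heatwaves"
print(flag_content(post))  # -> ['largest democracy']
```

In practice, content matching one or more keywords was then screened manually against the indicators listed in the Introduction before being recorded as an observable.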
Following the identification of these observables and their categorization into topics, each team moved to an investigative stage. The recorded observables became crucial leads for understanding the dissemination patterns, threat actors, and channels involved in cases of information manipulation. Mentors guided the partners in using Open Source Intelligence (OSINT) methods to investigate the spreading processes of these topics across the platforms.
The OSINT investigation entailed the collection of corroborating evidence, which was systematically documented in a FIMI spreadsheet. This spreadsheet served as a central repository for intelligence relating to each case, including information on the observables, targeted entities, key events, attack patterns, threat actors, and the specific narratives propagated.
X (formerly Twitter) emerged as the most frequently recorded platform during this investigation. To gain deeper insights into the dissemination patterns on this platform, social network analysis was employed. This analysis focused on mapping the retweet networks of accounts identified in the FIMI spreadsheets, particularly focusing on the connections of key groups and actors within the manipulation campaigns.
The investigation process was further structured through the application of the ABCD(E) framework,[3] which provided a systematic approach to analysing FIMI incidents. Due to the limited timeline of the project, measurement of the effect (E) was not included. The framework was applied to analyse and assess the critical information of each operation, including the actors involved (A), behaviour (B), content (C), and degree (D). The analysis based on the ABCD(E) framework is elaborated in detail in the case summary chapter.
Case Summaries
In this section, we will use the ABCD framework to analyse the cases we found during the project. For each part of the model, we consider the cases together.
Actor
In this project, the primary threat actors identified were the People’s Republic of China (PRC) and Pakistan. Of the 325 recorded channels, 24% were attributed to state-linked entities, including PRC state media, scholars associated with the People’s Liberation Army (PLA) Academy, and PRC-affiliated think tanks. In contrast, while only one channel was directly linked to a Pakistani state actor, 14 out of 18 channels in Urdu, one of Pakistan’s official languages, were found to be involved with coordinated inauthentic accounts. These accounts were either created within the past five years or were not listed in the BBC’s Pakistan media guide. Karakoram Times, for example, is suspected to be illegitimate: the Facebook accounts linked on its X profile show zero likes and zero followers or have been deleted, while its website link redirects to an unrelated news site.
Behaviour
Based on observations from partners in India, the sarcastic phrase “India, the so-called largest democracy in the world” has been used to exploit social divisions through the dissemination of videos depicting violent behaviour and content that degrades India’s democracy and economy on social media platforms. This phrase circulated not only on PRC platforms but was also amplified by Pakistan’s state media, and coordinated accounts on X. Many of the accounts we identified amplifying this sarcastic phrase on X were inauthentic, having been created primarily within the last two years, using the same slogans and rapidly sharing state media-aligned videos and images.
Meanwhile, our partners in Malaysia discovered that the PRC used state media and state-affiliated influencers to misinterpret an interview with Indian Prime Minister Modi, which was then circulated on PRC media platforms. For instance, PRC state media outlet China News distorted a story from Newsweek Magazine, suggesting that Modi had indicated a desire to peacefully resolve issues with China. Subsequently, various influencers amplified this narrative, falsely claiming on PRC media platforms that Modi wished to surrender.
In another case, PRC state media Huanqiu and Shao Yongling, a Senior Colonel and professor at the PLA Rocket Force Command College, distorted a narrative from NDTV news, portraying Modi as a charlatan. In Shao’s narrative, she tried to convince readers that Modi had claimed to be chosen by God to become Prime Minister of India. She published these articles on various PRC social media platforms and pro-PRC patriotic media, with influencers further amplifying her narratives on PRC blogging platforms.
In addition to investigating anti-Modi narratives, another team of partners started from Meta’s Quarterly Adversarial Threat Reports for Q1 2023 and Q1 2024. The China-based inauthentic accounts revealed by Meta focused on issues related to the India-China border and targeted the global Sikh community. Following the reports, four coordinated inauthentic accounts were found to be actively operating on X, aiming to amplify existing tensions in India’s northeast regions. To expand the pool of coordinated inauthentic accounts, we searched keywords related to the India-China border clash, such as Manipur, Nagaland, and Arunachal Pradesh.[4] A batch of inauthentic accounts, created close together in April and May 2023, was identified. These accounts followed a pro-China account that had shared a YouTube video titled “Manipur India known as ‘small China’ once the impact of independence on India?” At least six inauthentic accounts in the network were found to have made only one post each, within minutes of one another, between July 22 and 23.
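The creation-time clustering used to spot batches of accounts created close together can be approximated with a simple gap-based grouping heuristic. The account names, dates, and 30-day threshold below are illustrative, not taken from the investigation:

```python
from datetime import datetime

# Illustrative account creation timestamps (not real investigation data).
accounts = [
    ("acct_a", datetime(2023, 4, 12)),
    ("acct_b", datetime(2023, 4, 14)),
    ("acct_c", datetime(2023, 5, 2)),
    ("acct_d", datetime(2021, 8, 30)),
]

def creation_bursts(accounts, max_gap_days=30):
    """Group accounts whose creation dates fall within max_gap_days of the
    previously created account; return only clusters of two or more."""
    ordered = sorted(accounts, key=lambda a: a[1])
    bursts, current = [], [ordered[0]]
    for prev, nxt in zip(ordered, ordered[1:]):
        if (nxt[1] - prev[1]).days <= max_gap_days:
            current.append(nxt)
        else:
            bursts.append(current)
            current = [nxt]
    bursts.append(current)
    return [b for b in bursts if len(b) > 1]  # only suspicious clusters

for burst in creation_bursts(accounts):
    print([name for name, _ in burst])  # -> ['acct_a', 'acct_b', 'acct_c']
```

A burst of many accounts created within days of one another is only one indicator; in the investigation it was combined with behavioural signals such as identical posting times and shared content.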
By searching the key phrase “印度天然的組成部分” (the natural component of India) on X, a network of pro-PRC accounts was identified with “互fo” (follow back, or follow for follow) in their usernames, an indicator of inauthentic follow train behaviour. These accounts copied and pasted a statement from PRC Ministry of Foreign Affairs spokesperson Lin Jian from May 25 in a coordinated manner. The coordination involved tweeting on closely related dates and using the same person’s image as profile pictures, with four of the accounts created on just two days: October 11, 2023, and January 9, 2024. The statement denied claims by India’s Minister of External Affairs, Subrahmanyam Jaishankar, who had asserted that Arunachal Pradesh is a natural part of India. Lin Jian countered by stating that before India’s illegal occupation, China had always maintained effective administrative control over the southern Tibet region (Arunachal Pradesh). Most of these accounts were created in January 2024, and the network primarily promoted pro-PRC narratives in simplified Chinese on X.
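The copy-and-paste coordination described above can be surfaced with a simple text-normalisation heuristic that groups posts whose content is effectively identical. The usernames and statement text below are illustrative, not drawn from the actual network:

```python
import re
from collections import defaultdict

def normalise(text):
    """Lowercase and collapse whitespace so trivially edited copies still match."""
    return re.sub(r"\s+", " ", text.strip().lower())

def copy_paste_clusters(posts, min_size=2):
    """Group posts whose normalised text is identical; keep clusters posted
    by at least min_size distinct accounts."""
    groups = defaultdict(list)
    for user, text in posts:
        groups[normalise(text)].append(user)
    return {t: users for t, users in groups.items() if len(users) >= min_size}

# Illustrative posts (not the actual MOFA statement text).
posts = [
    ("user1_互fo", "Arunachal Pradesh has always been under effective administration."),
    ("user2_互fo", "Arunachal  Pradesh has always been  under effective administration."),
    ("user3", "Completely unrelated post."),
]
print(copy_paste_clusters(posts))
```

Clusters of identical text across many low-follower accounts, combined with shared profile images and batched creation dates, were the coordination indicators used in this case.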
The Bangladeshi media outlet Parbatta News reported that Prime Minister of Bangladesh Sheikh Hasina claimed a white man from a foreign country had attempted to carve out “a Christian state like East Timor” by taking parts of Bangladesh and Myanmar. However, PRC state media outlet CGTN distorted the story into “the CIA’s possible plot to carve out a nation in South Asia” on news websites and social media platforms. Meanwhile, pro-PRC accounts amplified the PRC’s propaganda on Facebook and PRC blogging platforms.
Content
During the Indian election, three main themes were identified based on the dataset of incidents, primarily focusing on undermining Indian democracy, national security and defence issues, and US scepticism.
Undermining Indian democracy
One of the most common themes across the analysed incidents was “undermining Indian democracy.” This theme was observed in five incidents. Narratives connecting to this theme involved the false claim that Indian Prime Minister Modi admitted to being sent by God to win the election, and the claim that India’s democracy is declining under his leadership. Other narratives connecting to this theme included accusations of “election fraud,” labelling India as “a democratic country economically lagging behind China,” framing the election as “the world’s most expensive,” portraying Modi as a dictator, and suggesting that India has become increasingly autocratic. Furthermore, Indian politicians were accused of focusing only on elections while neglecting citizens who had died from heatstroke.
National security and defence
A second set of narratives related to national security and defence issues, distorting Modi’s stated intentions. This theme was observed in five incidents. Narratives connecting to this theme claimed that Modi either sought to surrender on the India-China border disputes to maintain peace, or was stoking tensions along the border to gain votes in the election. Regarding Manipur and Arunachal Pradesh regions, PRC propaganda tried to shape a historical view that Manipur is known as “small China,” and that Arunachal Pradesh has never been part of India.
Additionally, PRC state media sought to shift the blame for tensions in the South China Sea onto India by accusing the country of destabilising the region through the export of the BrahMos missile, co-developed with Russia, to countries such as Vietnam, Armenia, and the Philippines. Pro-PRC actors further amplified claims that the BrahMos missile is ineffective and not worthwhile, focusing criticism solely on India while omitting any mention of Russia. They also promoted the idea that the United States and India view the Philippines as a pawn, seeking to divide the relationships between the three countries.
US scepticism
The theme of “US scepticism” also surfaced during the investigation. Bangladesh’s Prime Minister Sheikh Hasina claimed that a white man from a foreign country had attempted to carve out “a Christian state like East Timor” by taking parts of Bangladesh and Myanmar. However, PRC state media outlet CGTN distorted the story to claim, on news websites and social media platforms, that the CIA had a plot to carve out a nation in South Asia. This narrative was observed in one incident.
Degree
Through the efforts of our partners from India, content in several Indian languages, including Hindi, Punjabi, Tamil, and Urdu, was monitored to better understand the dissemination of pro-PRC narratives within the country. Despite the blocking of some inauthentic accounts within India, PRC information manipulation still reaches Indian audiences. PRC propaganda in Chinese, Hindi, English, and other Indian languages circulates on social media platforms, leveraging existing fissures in Indian society to influence the electoral choices of the audience.
During the investigations, 90.4% of the observable data was published on social media platforms, websites, and video-sharing platforms. Social media platforms accounted for 60.1%, with platforms such as X, Facebook, and Weibo being the most frequently used for distributing malicious content. X was the platform most recorded by the partners, comprising 73.1% of the total social media platform data.
Additionally, partners from Thinkfi.net, an Indian tech start-up developing machine learning-based tools to analyse coordinated inauthentic behaviour across different social media platforms, analysed the retweet network of three main pro-PRC accounts, which amplified narratives on three key topics: the India-China border clash, India’s heatwaves, and the sale of BrahMos missiles to the Philippines. Their analysis uncovered a disinformation and pro-PRC propaganda network originating from three accounts in the dataset. The operation started with PRC media and pro-PRC accounts such as @thinking_panda, @CarlZha, and @zhao_dashuai acting as originators, posting pro-PRC propaganda. Subsequently, bot accounts with limited followers retweeted or engaged with these posts as disseminators, amplifying the existing narratives. These accounts followed coordinated patterns, often producing little original content, and functioned primarily as amplifiers.
The network graph below shows the retweet network for the 37 tweets published by the pro-PRC originators on three topics: (1) the India-China border issue; (2) the heatwave; (3) the BrahMos missile issue. Each node with a tweet ID corresponds to a tweet, and nodes with usernames represent users who retweeted the original tweets. Each edge represents a user retweeting a tweet. Lighter-coloured nodes indicate higher numbers of retweets. Users who retweeted only a single tweet have been filtered out. Within the network, 91 accounts were identified that retweeted at least two of the three topics, with most of their activity consisting primarily of retweeting. Although there are only 19 pro-PRC originators, these 37 tweets had been retweeted 3,656 times and had a total of 3,728,002 views.
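The cross-topic filter described above, which keeps accounts that retweeted tweets from at least two of the three topics, can be sketched in plain Python. The edge list below is illustrative, not the real dataset of 37 tweets and 91 accounts:

```python
from collections import defaultdict

# Illustrative retweet edges: (username, tweet_id, topic). Not the real dataset.
retweets = [
    ("bot_01", "t100", "border"),
    ("bot_01", "t200", "heatwave"),
    ("bot_02", "t100", "border"),
    ("bot_02", "t300", "brahmos"),
    ("casual_user", "t100", "border"),
]

def cross_topic_amplifiers(retweets, min_topics=2):
    """Return accounts that retweeted tweets from at least min_topics
    distinct topics, mirroring the 91-account filter in the analysis."""
    user_topics = defaultdict(set)
    for user, _tweet_id, topic in retweets:
        user_topics[user].add(topic)
    return sorted(u for u, topics in user_topics.items()
                  if len(topics) >= min_topics)

print(cross_topic_amplifiers(retweets))  # -> ['bot_01', 'bot_02']
```

In the actual analysis, the bipartite user-tweet edges were additionally visualised as a network graph, with retweet counts mapped to node colour.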
Conclusion
During the observation period, a total of eleven incidents related to India’s foreign affairs and domestic wedge issues were identified by fourteen mentors and partners from the Indo-Pacific. This marks the first occasion on which Doublethink Lab has coordinated an election observation team, supported by eleven hours of training, focused on FIMI outside of Taiwan.
Through our partners’ efforts, this project observed several cases relevant to Indian politics during the election period, involving the PRC and other threat actors. Through systematic observation and analysis, the project provides critical insights into the techniques and strategies employed by external actors to manipulate information about India’s political landscape. The PRC frequently employs strategies such as distorting narratives, dividing relationships between democratic allies, and degrading adversaries. This was achieved by leveraging existing narratives and conspiracy theories through coordinated inauthentic accounts on mainstream social media platforms, as well as through PRC state media, influencers on PRC blogging platforms, PRC social media platforms, and media websites. The narratives primarily claimed that Indian democracy is declining under Modi’s rule. We also observed narratives related to national security and defence issues, such as distorting Modi’s stated intentions to claim that he either sought to surrender on the India-China border disputes to maintain peace, or was stoking tensions along the border to gain votes in the election. Moreover, pro-PRC actors also expanded their focus beyond India. They targeted countries importing the BrahMos missile, including Vietnam, the Philippines, and Armenia, propagating the claim that the U.S. was orchestrating these actions behind the scenes. These narratives sought to undermine India’s defence partnerships and portray its military collaborations as provocations against China.
In this project, a mentoring system was introduced for new partners to speed up learning of the investigation process and the FIMI framework. However, there is still room for improvement. First, training and data collection for the partners took place near the end of the election period, meaning that we missed some key periods to collect relevant data. Second, even though we provided training, it was the first time for the new partners to conduct investigations under the FIMI framework. Effective execution will require sustained training and the accumulation of experience over time. As a result, no significant evidence of PRC-linked accounts manipulating discourse targeting authentic “Indian communities” was observed during the project. Nevertheless, PRC state media and PRC social media platforms actively disseminated narratives that distorted and discredited Modi and the Indian government. Similar narratives appeared on mainstream social media later, particularly on X. This indicates that the PRC’s domestic propaganda narratives have spilled over into mainstream social media outside of China.
Acknowledgements
The India Election Observation project would like to acknowledge the partners in the following list for their significant contributions that made this report possible:
- Albert Zhang, Senior Analyst, Australian Strategic Policy Institute
- Aries A. Arugay, Professor, University of the Philippines-Diliman
- C. Formoso, Independent Researcher from the Philippines
- Divyanshu Jindal, Independent Analyst from India
- Elena Yi-Ching Ho, Co-founder & Regional Lead, Research and Action Hub
- Nishit Kumar, Independent Researcher from India
- Sriparna Pathak, Associate Professor, School of International Affairs of O.P. Jindal Global University
- ThinkFi.net, India
- Tomoko Nagasako, Researcher, Information-technology Promotion Agency, Japan
- Yvonne T. Chua, Associate Professor of Journalism at the University of the Philippines
- 4 Independent Researchers from Malaysia
Additionally, we would like to express our appreciation for the invaluable contributions of the Indo-Pacific FIMI research network, supported by the European Union. Their support laid the foundation for the initial network, which has enabled our collaboration with mentors throughout this project.
This publication was supported by the European Union. Its contents are the sole responsibility of Doublethink Lab and do not necessarily reflect the views of the European Union.
Footnotes
1. Foreign Information Manipulation and Interference (FIMI) describes a mostly non-illegal pattern of behaviour that threatens or has the potential to negatively impact values, procedures and political processes. Such activity is manipulative in character, conducted in an intentional and coordinated manner, by state or non-state actors, including their proxies inside and outside of their own territory. https://www.eeas.europa.eu/eeas/1st-eeas-report-foreign-information-manipulation-and-interference-threats_en
2. Please see Appendix 1 for the full list of keywords.
3. Pamment, James. September 2020. “The EU’s Role in Fighting Disinformation: Crafting a Disinformation Framework.” Working Paper of the Carnegie Endowment for International Peace. https://carnegieendowment.org/files/Pamment_-_Crafting_Disinformation_1.pdf
4. Please see Appendix 1 for the full list of keywords.