First Draft researchers Carlotta Dotto and Seb Cubbon, along with Stefano Cresci, Serena Tardelli and Leonardo Nizzoli from the Institute of Informatics and Telematics of Pisa, explore new approaches to the complex phenomenon of online coordination.
As researchers of disinformation, we inherit terms and concepts from platforms that can shape the way disinformation is understood and detected.
But often they reflect the needs of policy communications more than high-quality, independent research.
For robust research, we need our own concepts and detection methods — ones that are transparent, precisely defined and can be reproduced by other researchers.
What features of a community should define it as “coordinated?” How can we compare degrees of coordination, or examples of coordination from different contexts and events?
We set out one way to do this: quantitative indicators. In an investigation of coordinated online activity observed during the run-up to the US 2020 election, we used a set of quantitative indicators to precisely define and detect coordination. We show how this method can lead to a better approach to studying coordination.
Quantitative indicators for measuring coordination
Platform-defined metrics such as coordinated inauthentic behavior (CIB) are not designed for independent research. And arguably they shouldn’t be: They exist to support platforms’ policies and their communications, which often requires definitions to be flexible.
It is therefore up to disinformation and social media manipulation experts to put forward independent frameworks for assessing online coordination, as organizations such as EU Disinfo Lab have begun to do. These frameworks should outline specific criteria that can be measured empirically.
This is where quantitative indicators are helpful. Detection models that use explicit quantitative benchmarks are not only more likely to identify coordination with greater accuracy, but also make research methodologies more transparent than an unexplained qualitative assessment. And they provide findings that are reproducible by others.
An example of a quantitative indicator would be something like this:
- When approximately X% of all retweets are identical, a community is defined as extremely coordinated
Reproducible methods and findings may attract greater participation and constructive collaboration in the analysis of online coordination, encouraging the development of more widely adopted definitions and measurements.
Transparent and quantitative measures can also provide the foundation for difficult, qualitative judgements about whether coordination matters, and what to do about it. With precise measurements, the degree of coordination can be compared across communities and events (for example, elections) to inform action.
Relative measurements of the degree of coordination among users (for example, the degree of coordination within a community, rather than an absolute number of coordinated users) can be particularly helpful: They can help to uncover the extent to which a group of actors may be sophisticated, well-resourced and dedicated, even if the group is smaller in number.
In turn, this can contribute to a better theoretical understanding of the multifaceted role and impact of coordination in online information.
Case study: Quantitative indicators for coordinated communities on Twitter in lead-up to the US 2020 elections
Leading up to the US 2020 elections, there were many communities coordinating online, to varying extents and with a variety of goals. To uncover what kinds of coordination were taking place, researchers at the Institute of Informatics and Telematics (IIT-CNR) in Pisa adopted precisely the kind of quantitative metrics we have been considering so far.
The goal was to map different communities on a continuous scale and identify those coordinating most intensively and with the greatest sophistication.
Their method works by detecting coordinated communities on Twitter, based on the extent to which large sets of users repeatedly share (retweet) the same tweets across an extended period of time. Given that coordination is not clear-cut, numerical indicators estimate the extent of coordination for each community detected as “coordinated.”
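To make this concrete, the core idea of detecting coordination through repeated co-sharing can be sketched as a pairwise similarity over users' retweet histories. This is an illustrative simplification, not the IIT-CNR team's published implementation; the function name and data shapes are assumptions.

```python
from itertools import combinations

def co_retweet_similarity(user_retweets):
    """Pairwise Jaccard similarity of users' retweeted-tweet sets.

    user_retweets: dict mapping user id -> set of retweeted tweet ids.
    Returns a dict mapping (user_a, user_b) -> similarity in [0, 1];
    pairs with no overlap are omitted. High similarity across many
    pairs over an extended period is the signal of coordination.
    """
    sims = {}
    for a, b in combinations(sorted(user_retweets), 2):
        shared = user_retweets[a] & user_retweets[b]
        if not shared:
            continue
        union = user_retweets[a] | user_retweets[b]
        sims[(a, b)] = len(shared) / len(union)
    return sims

# Example: "u1" and "u2" repeatedly share the same tweets,
# while "u3" shares unrelated content.
retweets = {
    "u1": {"t1", "t2", "t3"},
    "u2": {"t1", "t2", "t3", "t4"},
    "u3": {"t9"},
}
print(co_retweet_similarity(retweets))  # {('u1', 'u2'): 0.75}
```

In a full pipeline, these pairwise similarities would form the weighted edges of a user-user network on which community detection is run.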
The quantitative indicators that the team used were:
- When approximately 30–50% of all retweets are identical, a community is likely to be mildly coordinated
- When approximately 90% of all retweets are identical, a community is likely to be extremely coordinated
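The indicators above translate directly into a simple classification rule. The sketch below computes the share of a community's retweets that target tweets retweeted by more than one member, then applies the thresholds; how the 50–90% gap is handled, and the exact definition of "identical," are assumptions for illustration rather than the published model's cut-offs.

```python
from collections import Counter

def identical_retweet_share(user_retweets):
    """Fraction of all retweet actions in a community that target a
    tweet also retweeted by at least one other member.

    user_retweets: dict mapping user id -> set of retweeted tweet ids.
    """
    counts = Counter(t for rts in user_retweets.values() for t in rts)
    total = sum(counts.values())
    shared = sum(c for c in counts.values() if c > 1)
    return shared / total if total else 0.0

def coordination_level(user_retweets):
    """Apply the indicative thresholds: ~90% identical retweets is
    extreme, ~30-50% is mild. Treating everything from 30% up to 90%
    as 'mild' is a simplification made here for illustration."""
    share = identical_retweet_share(user_retweets)
    if share >= 0.9:
        return "extremely coordinated"
    if share >= 0.3:
        return "mildly coordinated"
    return "weakly coordinated"

# Three users retweeting an identical set of tweets -> extreme.
lockstep = {"a": {"t1", "t2"}, "b": {"t1", "t2"}, "c": {"t1", "t2"}}
print(coordination_level(lockstep))  # extremely coordinated
```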
The model also takes into account the likelihood that a tweet is retweeted in a coordinated manner. If a tweet has received many thousands of shares, the likelihood that any two users both shared it is high, and in turn the likelihood that they coordinated to share it is low. As a result, common shares of highly engaged-with tweets weigh less than shares of low-engagement tweets in the final calculation of the coordination scores.
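This popularity discounting can be expressed as a weight on each common share that shrinks as the tweet's overall retweet count grows, in the spirit of inverse document frequency. The formula below is an illustrative assumption, not the weighting used in the published model.

```python
import math

def coshare_weight(retweet_total):
    """Weight a common share inversely to the tweet's overall
    popularity: two users both sharing a viral tweet is weak evidence
    of coordination, while both sharing an obscure tweet is strong
    evidence. (Illustrative formula; +2 keeps the log well-defined
    for tweets with zero or one retweet.)"""
    return 1.0 / math.log2(2 + retweet_total)

# A tweet with 100,000 retweets contributes far less to a pair's
# coordination score than a tweet with only 3 retweets.
print(coshare_weight(100_000) < coshare_weight(3))  # True
```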
The model was applied to 70 million tweets shared in the run-up to the US election. It could detect both mildly coordinated groups of users as well as extremely coordinated ones.
These quantitative measures were applied to identify groups of users that were coordinating their activity to consistently share the same US 2020-related tweets between October 3 and December 3, 2020. These groups of users or “communities” were then labeled based on the types of hashtags that featured most frequently in the tweets they commonly shared. We then represented these communities visually through a network visualization graph (see Figure 1).
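Labeling communities by their most frequent hashtags is straightforward to automate. A minimal sketch, assuming the tweet texts for each community are already collected (the regex-based hashtag extraction is a simplification of real tweet parsing):

```python
import re
from collections import Counter

def label_community(tweet_texts, top_n=3):
    """Return the top_n most frequent hashtags across the tweets a
    community commonly shared, as a rough label for the community."""
    tags = Counter(
        tag.lower()
        for text in tweet_texts
        for tag in re.findall(r"#\w+", text)
    )
    return [tag for tag, _ in tags.most_common(top_n)]

tweets = [
    "Get out and vote! #MAGA #Election2020",
    "Four more years #MAGA",
    "#MAGA #KAG2020",
]
print(label_community(tweets)[0])  # #maga
```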
Among other findings, the model enabled the discovery of a small yet highly coordinated network of users that immediately stood out from others.
As shown in Figure 1, two clusters of coordinated users are significantly bigger than the others. The largest one, which appears in blue, is composed of users who supported Donald Trump.
The second-biggest cluster, in orange, is also composed of Trump supporters, but these users shared many conspiracy theories as opposed to generic pro-Trump or pro-Republican tweets. For example, an analysis of the tweets shared by this community revealed widespread support for QAnon and the “Stop The Steal” narrative.
We can see in Figure 1 that most communities appear to be sharply separated from one another, indicating there might be strong coordination within a community, but little coordination (i.e., little sharing of the same tweets) among different communities.
We can also see that one small community, highlighted in purple, appears to be the most densely concentrated. It lies between both pro-Republican groups, yet is also closely linked to the large pro-Trump group.
These unusual characteristics, revealed by the quantitative measurements of the detection model, prompted us to conduct a deep dive into the community’s individual user profiles, the types of messages they were promoting and the sources of the tweets they were sharing most frequently.
This analysis revealed that unlike other traditional, hyper-partisan groups of users who coordinated their online activity to push pro-Republican or pro-Democrat messages in the run-up to the election, this network used the electoral debate as an opportunity to generate support for a political cause seemingly far removed from US domestic politics: the independence of Biafra, a small former secessionist state that was reintegrated into Nigeria in 1970.
This pro-Biafran community repeatedly shared tweets that contained pro-independence messages alongside generic US 2020-related hashtags and generic pro-Trump hashtags such as #MAGA and #KAG2020.
Since Trump’s endorsement of Brexit in 2016, Biafran separatists have considered Trump a supporter of their cause and his presidency an opportunity to attract international support for renewed Biafran independence.
Setting standards for a consistent approach
There are three key points we draw from our reflections:
- Relative indicators can be extremely useful. They help enhance transparency and can point to the extent to which these communities may be well-resourced, dedicated and sophisticated in their behavior. Moreover, methodologies that support fine-grained analyses also make it possible to identify small coordinated networks that might otherwise go unnoticed.
- Methodologies that rely on quantitative indicators can help standardize how we measure, and by extension understand, coordination. This could allow rigorous comparisons between communities involved in a debate, and between analyses of coordination at play in multiple contexts, such as elections in different years or countries. We need more research to fully explore how this could work.
- Far from being the be-all and end-all, quantitative indicators provide a starting point from which additional qualitative analyses can be carried out. It is only through further investigation that the most critical questions concerning authenticity, harmfulness and legitimacy can be answered.
The original version of this post incorrectly referred to EU Disinfo Lab as EU vs Disinfo. This has been corrected.