Social Media and the Alabama Special Election

Jonathon Morgan
Jan 2, 2019


In light of the recent reports about the so-called “Alabama Project,” I want to share my point of view as a data scientist and researcher.

Background

Since 2013 I have analyzed how the internet can shape extreme political and social beliefs. My work started at a technology non-profit called Ushahidi, where we used common social media platforms (for example, Facebook, Twitter, Tumblr, and YouTube) to understand conflicts around the world. We were some of the first researchers to discover that organizations like Syrian rebel militias, and even terrorist groups like ISIS, were using everyday social media tools like Facebook pages and YouTube channels to recruit new members and weaponize the internet.

In March 2015, I co-wrote a paper on this topic for the Brookings Institution. The report, The ISIS Twitter Census: Defining and Describing the Population of ISIS Supporters on Twitter, studied how ISIS had used social media to recruit and radicalize members.

At the same time, a discipline called Countering Violent Extremism (CVE) emerged to identify the root causes of radicalization and explore “counter-messaging” strategies that would, in theory, inoculate vulnerable populations against extremist rhetoric. The right effort could even deradicalize individuals before they committed acts of violence.

This strategy received bipartisan endorsement from many in the U.S. government, including Congressional leaders.

Social media platforms like Facebook and Twitter developed policies and systems that dramatically reduced terrorist content on their platforms by 2016. With the platforms exerting better control over terrorist content, our attention turned to the increasing political extremism and polarization in the United States, from neo-Nazis to extreme-left groups, and the potential for these movements to be weaponized by hostile foreign governments. We also wanted to know how “social media filter bubbles,” in combination with polarizing, repetitive, and artificially amplified rhetoric, influenced broader American public opinion.

My Research with American Engagement Technologies (AET)

In spring 2017 I was still one of a relatively small group of researchers quantitatively studying these social media dynamics. Because of that work, I was introduced to Mikey Dickerson and Sara Hudson, who were founding an organization with goals similar to those of the State Department’s Global Engagement Center. However, instead of countering foreign jihadist extremism (the GEC’s mission), it would focus on US political polarization and the vulnerabilities of social media platforms.

By this point the Director of National Intelligence had already released an assessment of Russian interference in US political discourse.

Researchers were beginning to understand that Russia’s intention had been to amplify divisive rhetoric to sow discord amongst Americans. There was an urgent need to understand and combat this problem from a domestic point of view as well.

I proposed a research project to AET that I believed would reduce political polarization. I also proposed a very limited test of how liberals and conservatives responded to a variety of social media posts and memes. We set out to determine whether counter-messaging, delivered via credible news sources such as The Washington Post and Fox News, could break through the information bubbles that surround Facebook users.

We chose to conduct the experiment during the Alabama Special Election for the U.S. Senate. When we designed the experiment, Luther Strange, the incumbent Republican front-runner, seemed like a shoo-in to win the seat. And even when Roy Moore became the Republican nominee, we regarded this as a safe seat for the GOP. It was clear that Moore was an exceptionally popular politician in Alabama who had won several statewide elections. Even after allegations of misconduct against Moore published by The Washington Post and others, support for his candidacy remained strong.

The media have since described a document from AET (the full version of which I have not been allowed to see) that does not mention me or my firm, but seems to conflate my research project with broad, grandiose political claims unrelated to anything I worked on. I acknowledge working with AET, but I don’t recognize the claims they’re making now.

We did not write the leaked report, and we could not have, because it does not reflect our research. The leaked version of the report made a number of claims that did not originate with us.

  • We do not recognize the mention of an effort to move 50,000 votes by suppressing unpersuadable Republicans. We didn’t suppress votes — we provided links to news stories that might be relevant to voters.
  • We do not recognize the report’s mention of an effort to “manufacture approximately 45k Twitter followers, 350k Retweets, 370k Tweet Favorites, 6k Facebook Comments, 10k Facebook reactions, 300k Imgur upvotes and 10k Reddit upvotes.” Any effort to connect us to such activities is a lie. New Knowledge did not engage in the use of Twitter at all in the Alabama election — and we did not “manufacture” followers against Roy Moore.

What I’ve read or been told of the AET report sounds like marketing material written for political donors. It does not describe the intent or outcomes of our research project. So let me set the record straight about my research in Alabama:

  • We created a Facebook news source, “Alabama Conservative Politics,” that linked to credible news organizations like The Washington Post and Fox News.
  • We used our real names as page administrators.
  • Typical posts attracted only dozens to hundreds of “reactions” (typically “likes”).
  • When we made a small purchase through Facebook’s built-in advertising tools, posts received reactions in the low thousands (again, typically “likes”).
  • Fewer than 2% of users clicked through to read any of the articles we posted.
  • In total we spent approximately $30,000 on Facebook advertising (for context, a reported $51 million was spent during the Alabama special election).

During the race, many researchers, myself included, noticed that the Roy Moore campaign had attracted bot followers on Twitter whose usernames were nonsense words in the Cyrillic alphabet. I assumed at the time that this was the work of internet trolls, since genuine state sponsors of disinformation are adept at appearing to be domestic commentators.
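For readers curious how such accounts can be spotted, here is a minimal sketch of the kind of heuristic that flags Cyrillic-character usernames. The sample usernames and the threshold-free approach are hypothetical illustrations, not the method any particular researcher used.

```python
import re

# Hypothetical follower usernames; in practice these would come from a
# platform API such as Twitter's followers endpoint.
usernames = ["лкщзфвт", "bama_voter_74", "щзкфлртс", "moore4senate"]

# Match any character in the basic Cyrillic Unicode block (U+0400 to U+04FF).
CYRILLIC = re.compile("[\u0400-\u04FF]")

def looks_suspicious(name: str) -> bool:
    """Flag usernames containing Cyrillic characters as candidate bot accounts.

    A crude heuristic only: a real investigation would also weigh account
    age, posting cadence, and follower/following ratios.
    """
    return bool(CYRILLIC.search(name))

flagged = [name for name in usernames if looks_suspicious(name)]
print(flagged)  # ['лкщзфвт', 'щзкфлртс']
```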

At no time did New Knowledge get involved in any use of Twitter bots (or bots on any other platform) in the Alabama election. To this day, we have no knowledge of who did this or why.

Again, we only conducted a small, limited research project on Facebook.

Since the story broke, people have come out of the woodwork to tell me about the projects they ran or knew about that also took place during the Alabama special election — including self-described “false flag” operations on Facebook to which neither I nor New Knowledge had any connection.

Other efforts unrelated to New Knowledge included “tests” designed to negatively impact Republican voter turnout.

One outside effort included a case study of “information warfare” in Alabama.

As for our part, New Knowledge was asked by AET to test the impact of counter-messaging on a small number of people. Reading accounts of AET’s report, I now deeply regret getting involved with that organization. I am angered by the way my work has been conflated with the claims in AET’s report.

Regardless, we can and should learn from the research.

Key Findings

Creating a news-focused Facebook page and using Facebook’s advertising tools to amplify its content was straightforward. Our findings were discouraging but informative:

  • Countering fake news with articles grounded in facts had minimal to negative effects on page engagements, regardless of content source.
  • More polarized rhetoric increased the likelihood that a post would be shared.
  • The politically polarized election environment appeared to significantly reduce the probability that a user would follow a page from credible sources, regardless of the advertising spend by that page.
  • Posts “boosted” using Facebook’s advertising tools were shared at half the rate of non-boosted posts (0.05% on average), implying that advertising did not encourage organic sharing (a hypothetical calculation follows this list).
  • Fact-based content that countered users’ existing political beliefs was ignored, or simply entrenched those beliefs further.
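To make the boosted-versus-organic comparison concrete, here is a hypothetical calculation of the share rate described above. The engagement counts are invented for illustration and are not data from the study.

```python
# Hypothetical engagement counts illustrating the boosted vs. organic
# comparison; these figures are invented, not taken from the research.
posts = [
    {"boosted": True,  "impressions": 40_000, "shares": 20},
    {"boosted": False, "impressions": 3_000,  "shares": 3},
]

def share_rate(group):
    """Shares per impression, expressed as a percentage."""
    impressions = sum(p["impressions"] for p in group)
    shares = sum(p["shares"] for p in group)
    return 100 * shares / impressions

boosted = [p for p in posts if p["boosted"]]
organic = [p for p in posts if not p["boosted"]]
print(f"boosted: {share_rate(boosted):.2f}%")  # boosted: 0.05%
print(f"organic: {share_rate(organic):.2f}%")  # organic: 0.10%
```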

Implications for 2019 and Beyond

The implication is that counter-messaging may not, in fact, be an effective tool against polarization. Even when delivered by proxy through a credible messenger, moderate content that is inconsistent with users’ existing beliefs is ignored or even scorned.

The CVE community believes that large-scale counter-messaging might have a global impact on radicalization or hyper-polarization, but our research suggests that this assumption, however idealistic, is probably not true.

For the social media platforms, the continuing struggle is to understand how best to address the evolving tactics that sophisticated adversaries use to evade detection and erode our democratic process. Social networks do not reliably filter truth from fiction, so we cannot expect a strategy that naively relies on disseminating facts alone; far more complicated dynamics arise from the social network itself.

Consequently, many of the policy recommendations we made in the 2015 Brookings Institution report still apply. Social media platforms continue to be slow to evolve, and it is essential that partnerships between those platforms, government, and independent researchers inform future policy decisions.
