World Wide Web of Violence

AK
7 min read · Apr 11, 2018


Is the government of Myanmar executing a coordinated social media strategy to justify genocide and incite further violence under the guise of self-defense?

Photo by Sven Scheuermeier on Unsplash

The Dirty “G” Word

The first time I read about the Rohingya crisis was in a January 2017 New York Times story. I made note of it on my blog, but I was so caught up in American politics — getting ready for the Women’s March in Washington — that I didn’t stop to really think about what was happening half a world away.

The topic fell off my radar until September 2017. Rohingya militants, after months of state-sponsored violence, attacked a police outpost and killed several officers. The response from the army was swift and brutal: beatings, gang rapes, village burnings. Hundreds of thousands of refugees fled to Bangladesh. Though the government prevented journalists from reporting on the ongoing humanitarian crisis, satellite images of razed villages and cell phone videos of mass graves supported the refugees’ accounts.

Even a casual observer could see that the regularly scheduled program of abuse and neglect had escalated into one of immediate expulsion and extermination. Finally, people — official types with diplomatic credentials — were using the word genocide.

Putting Together Puzzle Pieces

In February and early March, a few pieces of information crossed my path.

First, the Mueller indictments and independent research revealed the extent of Russian propaganda on US social networks. Trolls employed by the “Internet Research Agency” created and boosted Facebook pages, Tumblr accounts, and tweets on both ends of the political spectrum, including pro-Black Lives Matter material designed to incite racial strife and discourage minority voters.

Second, I went to a Knight Lab talk by West Virginia University professor Saiph Savage, describing her work tracking Facebook propaganda from militia groups in Mexico. Savage analyzed nine months of data from a public Facebook page to track “conversation themes, post frequency and relationships with offline events.”

Third, I read a New York Times op-ed by Nicholas Kristof describing the slower forms of genocide playing out in Myanmar: neglect, lack of healthcare access, malnutrition, unemployment. He also mentioned that “citizens often seem to have been manipulated by anti-Rohingya propaganda, particularly on Facebook,” and suggested that Russia, which supports the military of Myanmar, might be involved as well.

Taken together, these raise the question: is all this Facebook/Russia stuff woven into the web of violence in Myanmar? (Follow-up question: who needs John le Carré when we are all living in a crazy espionage novel?)

Facebook’s Mess in Myanmar

At this point, we all know that Facebook and its CEO, reluctant android and sad-face-emoji model Mark Zuckerberg, are in deep trouble for the Cambridge Analytica privacy breach, among other things. What fewer people know is that Facebook has struggled to regulate hate speech in Myanmar for years, and there is credible evidence that failure to police their platform led to real-world violence.

The problem is two-fold (at least). First, Facebook is the internet in Myanmar, the country’s main source of news. From 2014 to 2017, Facebook grew from 2 million to 30 million users in Myanmar. Increasingly affordable smartphones come pre-installed with the app. Second, the government of Myanmar was already adept at spreading anti-Rohingya propaganda. The medium is new, but the flavors of hate speech are old: the Muslims are dogs, the Rohingya are illegal immigrants, they burn down their own homes and flee back to their native Bangladesh.

Mark Zuckerberg, in an interview with Ezra Klein at Vox, said that Facebook was ramping up its efforts to monitor, catch, and delete violent messages on its platform, particularly in Myanmar. He cited an example in which Facebook Messenger was used to spread rumors of violence to both sides of the conflict: Buddhists were told that Muslims were coming to kill them, and Muslims were told that Buddhists were coming with machetes. According to Zuckerberg, Facebook caught and deleted those messages.

In response to that interview, a group of civil society organizations — activist watchdogs that help Facebook monitor hate speech in Myanmar — sent an open letter to Zuckerberg. They said that his comments were inaccurate. Facebook didn’t catch anything. Their consortium told Facebook about the messages, and Facebook failed to respond for days.

Zuckerberg promptly apologized to the letter’s authors from his personal email account, but the issue had touched a nerve. The civil society groups released a list of changes they believe Facebook must make to fix its violence problem in Myanmar, including adding local content reviewers, providing greater transparency about the nature and volume of flagged content, and creating a rapid-response system to remove offensive content in hours instead of days.

The Proof Is In The Creamy Propaganda Pudding

My big takeaway from the open letter? The civil society groups are maintaining a log of incidents they see and report to Facebook. There are local activists with language expertise and domain knowledge who are already monitoring Facebook content. Somewhere, they have a record of what they’ve seen and when.

I wonder if we can leverage their expertise to tag and organize data scraped from Facebook’s public API to answer the big question that’s hanging over this whole discussion: Is the government of Myanmar using a coordinated social media strategy to justify genocide?

Specifically:

  • Is there a quantifiable correlation between the type and volume of propaganda posted and shared on social media and real-world incidents of violence? (A rough sketch of this analysis follows the list.)
  • Are there specific claims (“illegal immigrants”, “burned their own villages”) that spread faster and further in the run-up to major military operations against the Rohingya?
  • Is there a difference between the content government-affiliated pages post and the material that ordinary citizens share?
  • What fraction of messages tagged as violent come from anti-Rohingya groups, and what fraction from pro-Rohingya militias? (Note that it is in the government’s interest to promote a “both sides” narrative.)
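
Here is a minimal sketch of how the first two questions might be tested, assuming we already had two hypothetical inputs: a CSV of propaganda posts labeled by reviewers and a CSV of verified violent incidents. Shifting the propaganda series forward by a few weeks lets us check whether spikes precede violence rather than merely coincide with it.

```python
# Sketch: correlating weekly propaganda volume with violent incidents.
# File names and column names are hypothetical placeholders.
import pandas as pd
from scipy.stats import pearsonr

posts = pd.read_csv("labeled_posts.csv", parse_dates=["created_time"])
incidents = pd.read_csv("verified_incidents.csv", parse_dates=["date"])

# Aggregate both series to weekly counts over the study window.
weekly_posts = (posts.set_index("created_time")
                     .resample("W")["post_id"].count()
                     .rename("propaganda_posts"))
weekly_incidents = (incidents.set_index("date")
                             .resample("W")["incident_id"].count()
                             .rename("violent_incidents"))
df = pd.concat([weekly_posts, weekly_incidents], axis=1).fillna(0)

# Shift the propaganda series forward 0-4 weeks: a stronger correlation
# at a positive lag would suggest propaganda spikes precede violence.
for lag in range(5):
    r, p = pearsonr(df["propaganda_posts"].shift(lag).dropna(),
                    df["violent_incidents"].iloc[lag:])
    print(f"lag={lag} weeks: r={r:.2f}, p={p:.3f}")
```

Even a clean correlation at a positive lag would not establish causation; it would only tell us where to look harder.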

Proposed Methodology

Can we glean useful information about the spread of anti-Rohingya propaganda using public Facebook information, with the help of civil society groups like Phandeeyar?

The process might look something like this:

  1. Identify ultra-nationalist public figures in Myanmar with public Facebook pages active from January 1, 2017 to December 31, 2017. Select one or two of the most influential figures to analyze for this study.
  2. Using the public Facebook API, aggregate data from the target page(s), including posts, comments, shares, and likes (sketched below).
  3. Label posts with help from local activists, native Burmese speakers, and international human rights groups. Categories could include common themes in anti-Rohingya propaganda: dehumanization (“they are dogs”), illegal immigration (“they are Bengalis”), character assassination (“they are rapists and terrorists”).
  4. Create a timeline of verified violent incidents in 2017 from international agencies and journalists.
  5. Compare the timeline of violence with the timeline of Facebook content.
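
To make step 2 concrete, here is a rough sketch of pulling a public page’s 2017 posts through the Graph API as it existed in early 2018. The page ID and access token are placeholders, and Facebook has been tightening API access, so treat this as illustrative rather than guaranteed to run as-is.

```python
# Sketch of step 2: aggregating a public page's posts via the Graph API.
# PAGE_ID and ACCESS_TOKEN are placeholders; the API version and fields
# reflect the Graph API circa early 2018.
import requests

PAGE_ID = "EXAMPLE_PAGE_ID"         # hypothetical target page
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # requires an approved Facebook app

url = f"https://graph.facebook.com/v2.12/{PAGE_ID}/posts"
params = {
    "access_token": ACCESS_TOKEN,
    "fields": "id,message,created_time,shares,comments.summary(true)",
    "since": "2017-01-01",
    "until": "2017-12-31",
    "limit": 100,
}

posts = []
while url:
    resp = requests.get(url, params=params).json()
    posts.extend(resp.get("data", []))
    # Follow the pagination cursor until the date window is exhausted.
    url = resp.get("paging", {}).get("next")
    params = {}  # the "next" URL already carries all query parameters

print(f"Fetched {len(posts)} posts")
```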

That is a lot of work just to get started, but for someone who could devote an entire graduate thesis to this project, there are a few more angles to investigate.

First, you could scale up the project by using the manually tagged data to train a semi-supervised label propagation algorithm. You could then categorize posts from additional Facebook pages or sort text from comments on the original labeled posts without needing real people to individually tag every bit of text. (Note that this could increase the uncertainty in the analysis considerably.)
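
As a sketch of what that might look like with scikit-learn’s LabelSpreading (the file and column names are hypothetical, and the label codes match the categories from step 3 above):

```python
# Sketch: propagating a small set of activist-assigned labels to the
# rest of the corpus. Labels 0-2 match the categories above; -1 marks
# posts not yet reviewed by a human. File/column names are hypothetical.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.semi_supervised import LabelSpreading

df = pd.read_csv("posts.csv")  # columns: "text", "label" (-1 = unlabeled)

# Character n-grams sidestep Burmese word segmentation, which
# whitespace-based tokenizers handle poorly.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4),
                             max_features=5000)
X = vectorizer.fit_transform(df["text"]).toarray()  # dense for the sketch

model = LabelSpreading(kernel="knn", n_neighbors=7)
model.fit(X, df["label"])

# transduction_ holds the propagated label for every post, seed or not.
df["predicted_label"] = model.transduction_
```

Propagated labels are noisy, so they are best treated as candidates for human review rather than ground truth, which is the added uncertainty mentioned above.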

Second, you could start looking for pro-Rohingya Facebook pages. Russia’s strategy in the US wasn’t just to boost one side of the political spectrum. They were trying to sow discord in all camps. If Myanmar is using a similar strategy to drum up racial hatred, they would want a healthy online presence from (groups that appear to be) Rohingya militias spreading anti-Buddhist propaganda.

There’s probably a team of researchers at Facebook trying to address this problem right now. But recent news has made it clear that Facebook cannot be trusted to police itself in a fair and transparent manner. I think we need network researchers, social scientists, journalists, and activists to help us understand how propaganda spreads in a dense, closed network like Myanmar’s Facebook ecosystem.

The history of modern warfare has shown us that new weapons — biological, chemical, and now, I suspect, digital — are tested on vulnerable communities: the poor, the forgotten, the powerless. If we can understand the weapon, maybe we can use that knowledge to protect some of the world’s most vulnerable people.

Update 4/17/2018: Anti-Rohingya propaganda is showing up in India as well, another Muslim-minority country with a history of post-colonial sectarian violence and a current surge in violent sexual crimes. Makes one wonder how prevalent anti-Rohingya propaganda is in India, and where it’s coming from.

Update 4/21/2018: The New York Times is looking at this trend of Facebook-inspired violence in Sri Lanka as well. The similarities are striking.

Update 4/30/2018: I received feedback on this proposal from Susan Benesch of the Dangerous Speech Project. Prof. Benesch noted that it would be extremely difficult to demonstrate a causal relationship between nation-state actors and incidents of violence, even if we could get the data, sort it, clean it, and demonstrate correlation. She also said that this is not an appropriate time to try to draw resources from NGOs in Myanmar, because the situation in the country is so dangerous and their time is extremely valuable.


AK

Reformed chemist. Hanging out at the intersection of science, tech, design, and policy.