Moderation Minimalism: The Cautionary Tale of Meta and the Rohingya Genocide

Dag · Published in tomipioneers · Nov 1, 2023 · 12 min read

In my last post about disinformation, we explored how Russia has skillfully weaponized disinformation tactics to manipulate global politics and even target individuals. We began discussing the murky waters of internet moderation and the conundrum it presents for a project like tomi, which aims to champion unbiased online freedom.

Today, we shift our lens to another focal point of this complex issue — Myanmar. As we venture further into our series on disinformation and misinformation, I want us now to focus on an approach to moderation that I’d like to call ‘moderation minimalism.’

Advocates of this approach argue against strong control or filtering on the internet. They imagine an online space that is completely free, where people can say what they want to say without much interference. The goal is to create a space full of diverse opinions and viewpoints, without the need for strict rules or oversight from an authority. On the surface, it sounds very much like the values we uphold at tomi; but as you’ll see, the reality is far more complicated and filled with potential dangers.

In particular, we’ll examine Meta (formerly known as Facebook), and its alleged role in the Rohingya genocide. The case illustrates how a minimally moderated platform can inadvertently — or perhaps, negligently — facilitate the spread of disinformation with devastating real-world consequences. It poses critical questions that we, as a community committed to constructing an alternative internet, cannot afford to ignore.

So, fasten your seatbelts as we delve deeper into the intricate tapestry of disinformation, moderation, and their societal repercussions. Myanmar serves as a stark case study, laying bare the high stakes and ethical dilemmas inherent in ‘moderation minimalism.’ It forces us to confront unsettling questions that go to the core of our aspirations for an alternative internet — one that champions freedom without sacrificing safety. As we try to understand the lessons that Myanmar’s troubling narrative offers, we must scrutinize how they can shape our collective efforts to construct a digital realm that is both equitable and secure.

But before we wade into the complex dynamics of social media’s role in Myanmar, it’s crucial to grasp the historical and cultural context. Up next, we’ll explore who the Rohingya are, and why their story is inextricably tied to the ethical conundrums of internet moderation.

The Rohingya: A Stateless Minority

The Rohingya are a Muslim ethnic minority who have been living in Myanmar’s Rakhine State for generations. Despite this, they have been effectively denied citizenship by the Myanmar government, making them one of the world’s largest stateless populations. The situation escalated in 2017 when the military initiated a campaign of extreme violence, including murder, rape, and the burning of villages. The exodus that followed was staggering: nearly 700,000 Rohingya fled into Bangladesh in just three months. The campaign has been termed a genocide by various bodies, including United Nations investigators, the U.S. Holocaust Memorial Museum, and most recently the United States itself.

The Myanmar Military: A Dominant and Violent Force

The Myanmar military has long been a haunting presence in the country’s political landscape. The atrocities committed in 2017 led to the death of more than 9,000 people, and their violent acts continue today. Since their coup in 2021, they’ve killed at least 1,670 civilians and detained more than 12,000. Although global pressure is escalating, with accusations of genocide against key military figures, the reality on the ground shows that they are far from loosening their iron grip on the country.

Political instability and ethnic strife are sadly familiar elements on the world stage, but the devastation unleashed upon the Rohingya takes the crisis to a unique and horrifying level. This is not merely the outcome of ancient ethnic tensions or local politics gone awry — it is the product of a digital world where falsehoods can be weaponized to execute real-world atrocities.

The Digital Landscape in Myanmar

It’s essential to pull the curtain back on Meta’s specific role in Myanmar. What is often overlooked is that the company viewed Myanmar not just as an emerging market, but as a goldmine for user data and ad revenue.

In a country where a SIM card once cost hundreds of dollars and mobile usage was among the lowest in the world, Facebook saw an opportunity. When Myanmar began liberalizing its telecom sector in 2011, prices plummeted and mobile phones were rapidly adopted. So, when Facebook entered the country, it struck deals with local phone providers to have smartphones ship with the Facebook app pre-installed. On top of that, it allowed the app to be used without consuming any mobile data. This ‘free’ access to Facebook, coupled with Myanmar’s newfound telecom freedom, made the platform an immediate hit. Facebook quickly became so integral to Myanmar’s digital landscape that it is now used by more than half of the country’s 54 million people.

Given this established reliance on Facebook, a disturbing dimension emerges: For a significant portion of Myanmar’s population, Facebook wasn’t just a social media platform — it was synonymous with the internet itself. According to Max Fisher in ‘The Chaos Machine,’ as of 2016, 38% of the population claimed that Facebook was their primary source of news. For these people, Facebook was not just a platform for casual chats or cute cat videos; it was their window to the world, their platform for political discourse, and their source of information, accurate or otherwise.

This intertwining of Facebook and daily life in Myanmar wasn’t merely a cultural phenomenon; it had far-reaching consequences. In a society where more than one in three people relied primarily on Facebook for news, disinformation campaigns found fertile ground. And this wasn’t just ‘fake news’ in the banal sense of the term; this was orchestrated, targeted disinformation with a very real human cost.

The “Buddhist Bin Laden” and the Downfalls of Facebook in Myanmar

Enter Ashin Wirathu, the self-proclaimed “Buddhist Bin Laden,” a Buddhist monk who masterfully manipulated Facebook’s sprawling network to spread his divisive, hate-filled messages. Wirathu’s reach is not to be underestimated; with hundreds of thousands of followers on Facebook and tens of thousands of views on his YouTube videos, he serves as a stark example of the platform’s potential to be exploited for malicious ends.

The content of Wirathu’s posts and sermons is a toxic mix of falsehoods, inflammatory rhetoric, and outright bigotry. He is known for sharing hate-filled rants that target Myanmar’s Muslim minority with baseless claims, from alleging that they “target innocent young Burmese girls and rape them” to asserting that every town in Myanmar has a “crude and savage Muslim majority.” According to an article by Kate Hodal in The Guardian, Wirathu’s soft-spoken words have incited violence, fueled misinformation, and spread religious intolerance across a nation still grappling with democratic governance.

Facebook’s platform, already a major playground for Wirathu, was further manipulated into becoming an echo chamber of anti-Rohingya content, a phenomenon you can read more about in one of my previous Medium articles. Operatives linked to the Myanmar military and radical Buddhist nationalist groups also exploited Facebook’s algorithmic vulnerabilities. Posts warned of an impending Muslim takeover, labeled human rights activists as “national traitors,” and circulated openly threatening and racist messages.

From Hate Speech to Real-world Consequences

And it wasn’t just fringe actors involved. Senior General Min Aung Hlaing, the Commander-in-Chief of Myanmar’s own military, shamelessly posted on his Facebook page: “We openly declare that absolutely, our country has no Rohingya race.” His digital rhetoric seamlessly transitioned into real-world action when he later seized power in a coup, underscoring how online hate speech can metastasize into tangible societal upheaval.

When provocative figures like Ashin Wirathu or Min Aung Hlaing intersect with Meta’s algorithms — engines designed not to counter disinformation or hate, but to maximize clicks and shares — the result is a volatile cocktail poised for disaster.

The Algorithmic Accomplice

Meta’s recommendation algorithms, designed to maximize user engagement, act as silent puppeteers, curating our virtual experiences based on historical data and predictive analytics. In doing so, they inadvertently construct an environment conducive to the proliferation of hate speech and misinformation.
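To make this dynamic concrete, here is a deliberately simplified, purely illustrative sketch of an engagement-maximizing feed ranker. It is not Meta’s actual system; every name, field, and weight below is invented for illustration. The structural point is what matters: when the only objective is engagement, nothing in the scoring function distinguishes an inflammatory rumor from a harmless post.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    clicks: int
    shares: int
    comments: int

def engagement_score(post: Post) -> float:
    # Weight shares and comments heavily because they predict further
    # engagement. Note what is absent: no term asks whether the content
    # is true, hateful, or dangerous. (Weights are invented for this toy.)
    return post.clicks * 1.0 + post.comments * 2.0 + post.shares * 3.0

def rank_feed(posts: list[Post]) -> list[Post]:
    # Surface the most "engaging" posts first.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Local charity drive", clicks=120, shares=5, comments=8),
    Post("Inflammatory rumor about a minority", clicks=90, shares=60, comments=75),
])
print([p.text for p in feed])
# ['Inflammatory rumor about a minority', 'Local charity drive']
```

In this toy example the inflammatory post outranks the benign one despite drawing fewer clicks, simply because outrage attracts shares and comments; no one has to design such a system to promote hate for it to do so.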

The often unspoken truth is that these algorithms are the lifeblood of Meta, driving ad impressions and bottom-line revenue. It’s a deeply ingrained business model that has, until now, paid little to no attention to its far-reaching socio-political implications. Can a system engineered to capture attention and incite emotions ever truly be neutral?

The answer is glaringly evident in the Rohingya crisis. Meta’s algorithms, in their zeal for engagement, created echo chambers brimming with toxic hate speech and propaganda. They not only surfaced content that exacerbated pre-existing prejudices but also extended its reach, amplifying radical voices like Ashin Wirathu and Senior General Min Aung Hlaing.

The Human Toll: From Digital Hate to Real-World Violence

So, what is the result of all this hate, this internet climate saturated with unchecked disinformation, conspiracy theories, and fabricated stories about the Rohingya? The tragic outcome has been a cascade of violence and human rights abuses that can be neither ignored nor overstated.

In 2012, two surges of violence convulsed Rakhine State, spurred by the murder of a Burmese woman and the killing of 10 Muslim pilgrims — events that were perversely twisted into calls for escalation and retaliation against the Rohingya. Officially, the violence claimed 192 lives, injured 265, and left 8,614 houses destroyed, but these numbers are widely suspected to be underestimates. Even the Myanmar security forces, who were expected to be impartial protectors of law and order, were implicated in this violence, whether through complicity or active participation. From Sittwe to Kyaukpyu, witnesses described security forces preventing Rohingya and Kaman residents from extinguishing fires in houses set ablaze. In Maungdaw, those same security forces conducted mass arbitrary arrests, subjecting detainees to torturous conditions.

Displacement reached catastrophic levels. Over 140,000 people, predominantly Rohingya, were expelled from their homes. Years later, 128,000 Rohingya and Kaman remain essentially imprisoned in camps, with their basic human rights — freedom of movement, access to food and healthcare — unapologetically denied.

These atrocities didn’t sprout in a vacuum. They were fueled by a toxic internet atmosphere, where figures like Ashin Wirathu, a Buddhist monk with hundreds of thousands of followers on Facebook, shared graphic images of decaying bodies that he claimed were Buddhist victims of Rohingya attacks, among many other false narratives that deepened societal divisions. His MaBaTha organization spearheaded misinformation campaigns that turned into real-world violence — not only in Rakhine but also in places like Meiktila and Mandalay.

So when Human Rights Watch proclaims that the government has failed to ‘stare down the people who are inciting hatred,’ it’s not just an institutional failure we’re talking about. It’s a systemic one, perpetuated and amplified by social media giants like Meta, which allowed these breeding grounds of hate to persist.

A Decade of Negligence: Facebook’s Failed Myanmar Strategy

The role of Facebook in Myanmar’s socio-political landscape has been a subject of significant concern, particularly in light of the social media platform’s potential to amplify hate speech and disinformation. Aela Callan, a foreign correspondent who had been studying these issues in Myanmar, took it upon herself to notify Facebook of its platform’s exploitation for spreading hate speech as far back as November 2013. Her persistence led her to multiple follow-up meetings at Facebook’s Menlo Park headquarters, involving not just her but also representatives from Myanmar’s tech civil society organizations.

Despite Callan’s attempts to show Facebook “how serious it [hate speech and disinformation] was,” the company was notably sluggish in its response. Compounding the problem, Facebook at the time employed only a single Burmese-speaking content moderator, based in Dublin, Ireland, who was responsible for reviewing flagged content in a market with millions of users. Callan observed that Facebook appeared to prioritize its growth and “connectivity opportunity” in the Myanmar market over the very real risks and “core issues” of hate speech.

Critics argue that Facebook’s initial laxity was a harbinger of the problems it would face in the years to come. The social media giant demonstrated a slow response time to posts violating its standards and was ill-equipped, both in staffing and cultural understanding, to handle the surge of hate speech in the country. This inadequacy was not just a failure of ethics but also a strategic misstep for a multi-billion-dollar company with global reach.

In recent years, Facebook has claimed to adopt a more multi-faceted approach to serving its Myanmar users, including hiring more Burmese-speaking moderators and improving reporting tools. But by then the damage had been done; thousands of people had already died. The United Nations criticized Facebook for its role in Myanmar’s crisis, noting that the platform had “turned into a beast,” serving as a breeding ground for hate speech and disinformation. UN investigators went further, saying the violence the platform helped fuel “bears the hallmarks of genocide.”

Mark Zuckerberg admitted that the company had to improve its performance and outlined a three-pronged approach to address the issues specific to Myanmar. Still, critics maintain that his admission came too little, too late, and lacked sufficient accountability for the long-standing problems the platform facilitated.

Meta’s Inaction and Legal Ramifications

In the aftermath of its lackluster response to the crisis in Myanmar, Facebook (now Meta) has come under increasing legal scrutiny. The company ignored multiple warnings, neglected internal studies, and even bypassed temporary restrictions imposed by Myanmar authorities. This inaction has led to a growing list of legal complaints against the social media conglomerate.

Amnesty International, a global human rights organization, has taken the lead in demanding accountability from Meta. Specifically, Amnesty has launched campaigns urging Meta to fund educational projects aimed at rehabilitating the Rohingya community. The initiative stems from the idea that corporations like Meta should not merely be profit-driven entities but must also bear social responsibility, especially when their platforms can catalyze real-world harm.

From a legal standpoint, Meta’s inaction could potentially be seen as “reckless endangerment,” given that the company was alerted multiple times about the abuse of its platform for spreading hate and yet failed to take sufficient action. The argument gains further traction when one considers the United Nations’ strong language describing the violence Meta’s platform helped fuel as bearing the “hallmarks of genocide.” Such an association could pave the way for a litany of lawsuits, both civil and potentially even criminal, against the company.

Amnesty International’s demand for reparations in the form of educational projects is a recognition of the long-lasting impact that misinformation and hate speech can have on a community. These projects aim not only to rebuild the socio-cultural fabric torn by divisive narratives but also to equip the community with the critical thinking skills necessary to counteract disinformation in the future.

However, one could argue that funding educational projects is merely a band-aid solution to a much larger problem. Critics contend that until Meta fundamentally changes its content moderation policies and invests in local, nuanced understanding of the socio-political climates in which it operates, any reparations will remain superficial.

In closing, Meta’s sluggish and inadequate response to the Myanmar crisis serves as a cautionary tale for us here at tomi. It’s a stark reminder that a lack of responsibility can lead not only to ethical failure but also to legal ramifications that can irrevocably tarnish a brand’s reputation. The calls for accountability are growing louder, and whether Meta chooses to heed them could set a precedent for corporate responsibility in the digital age.

Rethinking Moderation Minimalism

The disturbing events in Myanmar facilitated by Meta underscore an inescapable reality: moderation minimalism isn’t just flawed — it’s dangerous. The ideal of open, unfettered discourse may sound appealing, but its real-world ramifications are anything but. As we’ve documented, the algorithms that once promised to democratize information instead trapped users in filter bubbles, amplifying hate speech and misinformation and, tragically, contributing to the loss of life.

The call, therefore, isn’t for extreme moderation but for a nuanced approach that respects individual freedoms while also preventing the viral spread of harmful content.

The role of platforms like Meta in shaping public opinion and, in some instances, influencing real-world events cannot be overstated. Both the tech companies and the larger public share a responsibility to ensure that the internet remains open, but not at the expense of human lives. This is not to say that tomi’s ecosystem will one day become a breeding ground for hate or disinformation; I personally don’t believe it will.

But this article is a call to attention: as a community, we need to be prepared for the ways our preference for freedom of speech and community governance could be used against us, and for how our faith that people will use our platforms for societal good could be manipulated to serve a more sinister cause. Inevitably, we need to take a stance and decide in advance how we would act in such a scenario. Meta’s slow response and refusal to take responsibility for its role in the Rohingya genocide show how vulnerable these platforms can be, and how powerfully social media can turn hateful speech into real-world consequences.

We must transcend the naive belief that minimal moderation will inherently lead to an ideal marketplace of ideas. It’s a utopian vision that, when put to the test, unravels into a dystopian reality. As we continue to develop the tomi ecosystem, we need to internalize the lessons learned from such dark chapters in internet history. We have the responsibility, both as technologists and as a global community, to build systems that are resilient against the spread of hate and misinformation, while still upholding the sanctity of free speech.

We can — and must — do better.

Follow us for the latest information:

Website | Twitter | Discord | Telegram Announcements | Telegram Chat | Medium | Reddit | TikTok
