Regulating Online Platforms: By Whom, and for Whom?

Stanford’s GDPi at RightsCon 2018

Roya Pakzad
Stanford's GDPi
6 min read · May 24, 2018


Toronto was sunny on the second morning of RightsCon, as over seventy people entered a large conference room at the Beanfield Centre. The attendees came from over a dozen different fields: RightsCon is famous for its ability to bring together everyone from activists and researchers to members of parliament, general counsels of giant companies, and even heads of state. They were there to hear about a question central to our collective digital future: as online platforms gain a level of power and significance that rivals that of nation states, what steps can we take to protect freedom of expression online, on a global level?

Before reading further in this blog post, try a brief thought experiment. Imagine a map of all of the actors in this ecosystem. Who decides whether this blog post attracts government attention? Who decides when to take it down? To censor it?

If you’re from a country like mine — and a significant percentage of the world’s citizens are — your government routinely judges which pieces of content fit its chosen narrative and which don’t. If you’re lucky enough to live in a country where freedom of expression is enshrined in law and protected in practice, you have the luxury of reading what you’d like. But is it that straightforward?

The Global Digital Platform and the Nation State panel at RightsCon. Photo credit: The Centre for International Governance Innovation (CIGI) Twitter account.

On May 17th, GDPi’s Executive Director, Dr. Eileen Donahoe, brought together a dynamic group of panelists to map the decision-makers who have the power to protect free expression online.

The panel began with Daphne Keller of Stanford’s Center for Internet and Society, who posed questions about content removal. She eloquently described the complexities of the roles that national laws and platforms’ Terms of Service play — sometimes in alignment with one another, sometimes in conflict — in moderating content online. On removal under national law, Keller asked, “is there a Human Rights Law problem when States rely on police, rather than courts, to interpret law and tell platforms what to remove?” Likewise, she asked, what happens when a “platform uses its Terms of Service to ban speech under any substantive policies it chooses?”

The second panelist, Evelyn Aswad of the University of Oklahoma College of Law, emphasized the importance of upholding international human rights law standards when states propose domestic legal measures to regulate online content. Take Germany’s NetzDG law as an example. The law, which went into effect earlier in 2018, demands that social media companies remove “illegal speech” within a specified period. Because of the law’s ambiguity, companies have already removed a wide range of content unnecessarily in order to avoid potential penalties. German legislators claim the law complies with the European Union’s regional human rights system. But as Aswad emphasized, “complying with a regional system doesn’t necessarily mean complying with international human rights law.” Aswad argued that national laws should use the “least intrusive means” to preserve free expression among civil groups that frequently disagree about what constitutes acceptable norms of speech. While regulating “illegal speech,” lawmakers should also be cautious about the possibility of creating new, smaller, sheltered echo chambers, which might be one result of such haphazardly implemented laws.

Next, the audience heard from Emma Llansó of the Center for Democracy & Technology (CDT). Llansó began by remarking on how the framing of human rights online shifted between 2012 and 2018. “In 2012,” she noted, “the UN Human Rights Council issued a statement [noting] ‘the same rights that people have offline must also be protected online.’” However, she continued, “by 2018, there has been a flipped framing: ‘what is illegal offline is illegal online.’” She also echoed the previous panelists’ points on the ambiguities surrounding companies’ content removal efforts based on their Terms of Service. Who can demand takedowns? Through which channels? How are demands rejected or accepted? Take Twitter’s transparency report from July to December 2017 as an illustrative example: out of 4,294 takedown requests by the Turkish government, only 466 were ordered by Turkish courts!

David Kaye, UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression

Rounding out the session was David Kaye, UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression. Kaye emphasized the role of Human Rights Law not just as a universal norm, but as a strategic plan for both governments and technology companies. He spoke about the concerns of several European Union member states regarding the growing power wielded by US-based corporations in European countries. Human Rights Law, he argued, can provide a common ground for working out these differences. “As they say, Winter is Coming,” quipped Kaye. “Regulation is coming.” He said tech companies need to embrace the UN Guiding Principles on Business and Human Rights to strategically guide their businesses toward respecting their users’ human rights and providing remedies in cases of human rights violations.

The final participant was Edward Santow, Human Rights Commissioner at the Australian Human Rights Commission. Santow began his talk with a vivid scene: Australia in the 1930s, when farmers were fighting the native grey-backed cane beetle, which had been devastating sugar cane crops. Australian government officials, working with biologists, came up with an unusual solution: importing the cane toad. But the imported creature created an entirely new challenge: the toads proliferated, becoming a massive problem in their own right. How is this example relevant to freedom of expression online? Santow raised the anecdote to shed light on the unexpected consequences of bad regulation. Sometimes an action taken with the best intentions can do more harm than good because it destabilizes a complex system.

Santow also took up the issue of cultural specificity in human rights. Facebook offers a feature that provides a photo memorial for users who have died. But among some Australian Indigenous groups, displaying an image of a person who has died is considered damaging and insulting. Both of these stories remind us that states’ regulations and companies’ Terms of Service governing online expression often paint with a very broad brush. How can we maintain the principle of universality while also remaining attuned to the oftentimes very significant differences in how different societies behave — and expect others to behave — online? As the case of the cane toad attests, a solution that has worked in one context may not take root in different soil. Regulators, then, need to strive for universality while avoiding a “one size fits all” mentality in the details of how regulations are implemented.

As these summaries suggest, the discussion was deep and wide-ranging. Questions from the audience, along with input from the panel respondent, Fen Osler Hampson of the Centre for International Governance Innovation, highlighted points of common agreement while also pushing the conversation in new directions. As Hampson aptly put it: “Vagueness coupled with heavy laws harm free speech.”

Hampson also emphasized the importance of directly involving developing countries in these discussions.

Your humble narrator comes from a developing country herself. And if you had asked me, as I left the room in Toronto, whether the world will ever truly agree on a common set of regulations for preserving freedom of expression online, my answer would have been “no.”

Still, discussions like the one GDPi hosted at RightsCon give me hope. After all, we can aim for an ideal while remaining realistic about substantial challenges. The difficulty of finding common ground even surfaced during the Q&A, when attendees ran into the different meanings of “architecture”: were we talking about computer architecture, a specific term used in computer chip design, or architecture as a legal framework?

The question was not merely a matter of linguistic confusion. It reflected the participants’ different disciplinary origins: engineers learn to study complex systems just as regulators and legal scholars do, but they approach those systems with a different frame of mind and different knowledge (and jargon!).

Here at GDPi, the whole purpose of our incubator is to open up new channels of communication. To find new commonalities in mission, values, and vocabulary. To create common ground for discussing issues that affect all of us. We hope to host more discussions like this in the future, with the goal of bridging longstanding gaps between disciplines, communities, and countries in debates around digital rights.

If you’re interested in getting involved with these types of discussions, please get in touch.

Our next event, Human-Centered AI: Building Trust, Democracy and Human Rights by Design, takes place on June 11th at Stanford University.

You can sign up to attend here.
