Governing the Internet: The rule of law in decentralized regulation

Nicolas Suzor
Published in DMRC at large
May 25, 2016


Facebook has been in the news this month after allegations that its trending topics feature, which concentrates viewer attention on a handful of emerging stories, is biased against conservative politics.

The company has strongly denied the allegations, and has produced copies of its guidelines, met with conservative politicians, and conducted an internal investigation to allay fears. Tarleton Gillespie has an excellent overview of the controversy.

To be clear, it’s unlikely that Facebook runs a deliberate program designed to sway the political opinions of its users. What’s worrying, though, is that we can’t tell for sure. Likewise, the processes and algorithms that govern our online experiences are always biased to some degree — but we never really know how and to what extent.

Facebook’s obligations towards transparency

Back in 2009, Facebook suffered a massive backlash over the way it proposed to change its rules without adequately consulting its community. Responding to criticism, Mark Zuckerberg pledged that from then on, Facebook users would have direct input on the development of the site’s terms of service:

Our terms aren’t just a document that protect our rights; it’s the governing document for how the service is used by everyone across the world. Given its importance, we need to make sure the terms reflect the principles and values of the people using the service. […]

Since this will be the governing document that we’ll all live by, Facebook users will have a lot of input in crafting these terms.

This experiment with democracy did not last long. The turnout threshold for a binding vote was set unrealistically high (30% of active users); when fewer than 1% of Facebook’s one billion users voted on a policy change in 2012, the company rolled back its commitment to direct user input. Zuckerberg’s comments were quietly disavowed, attributed now to a former employee.

Legally, Facebook has no obligation to consult on its Terms of Service. Contract law is clear: by using the site, you agree to be bound by the terms. If you do not wish to be bound, do not use the site.

The concern this week about Facebook’s trending topics, however, points to a larger moral problem.

The governance of cyberspace

This year marks 20 years since John Perry Barlow’s Declaration of the Independence of Cyberspace. Barlow’s Declaration is a rousing piece of writing. If you haven’t read it before or in a while, go ahead and take a look — it’s worth it. I’ll wait for you.

Photo by Joi Ito, CC BY 2.0, https://commons.wikimedia.org/wiki/File:John_Perry_Barlow.jpg

Set aside, for the moment, the fact that Governments are steadily getting better at regulating the internet. States around the world have become increasingly sophisticated in how they control and monitor the internet’s physical infrastructure.

The real lasting part of Barlow’s message is the hope he expresses that the governance of online social spaces may “be more humane and fair than the world your governments have made before”.

We believe that from ethics, enlightened self-interest, and the commonweal, our governance will emerge.

If we take Barlow’s hope seriously, we have a lot of work to do.

First, many parts of the open internet are, to put it mildly, not nice places. The infrastructure we’ve built allows everyone to speak, but all too often drowns out and silences voices from the more vulnerable groups in our societies. It is used as a highly effective tool to direct abuse and hate against minorities, to invade the privacy of those who speak out, and to enable violence, chilling threats, and coordinated attacks. In the words of danah boyd,

“We didn’t architect for prejudice, but we didn’t design systems to combat it either.”

Groups like Women, Action, and the Media!, the Online Abuse Prevention Initiative, the Cyber Civil Rights Initiative, and Without My Consent are leading the charge to convince social media platforms and other online service providers to re-architect their systems to combat abuse on their networks.

Meanwhile, the powerful lobby groups that represent intellectual property owners are continuously seeking to have their interests embedded in the infrastructure of the internet. Graduated response or ‘three-strikes’ schemes are being introduced around the world to require Internet Service Providers to do more to police consumer copyright infringement. Website blocking regimes have been introduced in many countries, allowing rightsholders to ask courts to require ISPs to try to block access to sites — like The Pirate Bay — that facilitate the infringement of their rights. Rightsholders are this month seeking to require content hosts to do more to actively police content on their networks by changing ‘notice & takedown’ regimes to ‘notice & staydown’.

At the same time, civil society groups are trying to convince social media platforms, search providers, internet service providers, and content hosts to do more to resist demands from Governments and private actors to hand over private information about their users and to censor internet content. Organizations like Ranking Digital Rights are creating indexes of how committed telecommunications companies are to protecting freedom of speech, and groups like OnlineCensorship.org are trying to track the impact of decisions by social media platforms to remove content from their networks. A growing number of declarations, like the Manila Principles, seek to pressure Governments and service providers to protect freedom of speech and due process. Even the United States Commerce Department has released a multistakeholder statement on best practices in notice and takedown systems.

Online intermediaries are caught in the middle of these competing agendas, and there is no easy way out.

The missing rule of law

What’s missing in all of this mess is a way to talk about the responsibilities of telecommunications and digital media organizations. As Gillespie points out in response to the explosion of concern over Facebook’s algorithms,

“We clearly don’t yet have the language to capture the kind of power we think Facebook now holds.”

Zuckerberg was right when he said that Terms of Service documents are the “governing documents” of our age. These contracts govern how we communicate, who we can communicate with, whose voices are heard, how our information is shared, and how we access information.

But Terms of Service documents are not like the constitutions that restrict Government power. At law, they are mere consumer contracts, and in substance, they rarely limit the power of the social media platform or internet intermediary who drafts them.

We have not yet had the constitutional moment for online governance that John Perry Barlow hoped for. We have not had the conversation about when, and in what circumstances, we think that the power of digital media platforms over our lives should be limited.

The problem is that most of our theory and laws about constitutional restrictions on power apply only to nation states.

The Rule of Law is the concept that those who exercise power over our lives must exercise it in a way that is equal, certain, and fair. In this sense, the Rule of Law is a core human right.

Finding a way to interpret and apply the values of the Rule of Law to the corporations that provide our new public spheres — the social media platforms, the content hosts, the search engines, and those that provide the infrastructure — is what we need in order to make sense of the growing concerns about the way that Facebook and others exercise their power.

The responsibilities of platforms

This is not to suggest that these corporations ought to be bound by the same high standards to which we hope to hold governments. These firms are not democracies, and we would not want them to be.

But what we do need is a more frank discussion about how the decisions that impact our lives are made. This is fundamentally about transparency and due process: people have a right to know the standards that companies apply when they moderate, amplify, and manipulate our conversations, and they should usually have a clear avenue to challenge decisions that are made unfairly.

This is an area where Facebook and other social media platforms have come under fire in recent times. To their credit, many of the larger platforms have taken steps to simplify their Terms of Service, more clearly set out their rules for acceptable conduct, and work more closely with advocacy groups to limit abuse on their networks. In practice, though, the way that decisions are made is still opaque.

The lack of transparency in day-to-day governance has fueled a great number of controversies over how standards are enforced. Facebook, for example, has been heavily criticized for censoring images of mothers breastfeeding, ceremonies of indigenous elders, and plus-sized models in bikinis under its prohibitions on nudity and ‘health and fitness’ content, but not naked images of Kim Kardashian. Twitter has been criticized for not doing enough to curb rampant abuse, even when its targeted advertising systems allow malicious users to directly target minority groups for hateful harassment. The list goes on and on.

Original image by Micol Hebron.

Many of these firms are trying hard to deal with the concerns of their users and the public. Many have established multistakeholder coalitions, like the Global Network Initiative, the Anti-Cyberhate Working Group, Twitter’s Trust and Safety Council, Facebook’s Safety Advisory Board, and other groups, to try to respond to criticisms of their governance processes.

But these firms are worried, too — and for good reason. Many of these platforms can only exist because the companies that provide them are insulated from legal liability for what users do online, and the law does not impose an obligation on them to monitor the massive amount of content that we post every day. These firms also know that as they become more transparent, they will face more criticism from those who disagree with the substance of their policies.

Making governance legitimate

There are no easy answers here — we cannot simply make firms wholly responsible for shaping the way we communicate without eroding the enormous benefits to freedom of expression that the internet gives us.

But what we can do is continue the dialogue about how we want our shared online spaces to look.

Part of this conversation begins with the realization that businesses have obligations to protect human rights. The United Nations Guiding Principles on Business and Human Rights explain that respecting human rights requires businesses to “avoid infringing on the human rights of others” and to “address adverse human rights impacts with which they are involved.” Groups like the Internet Rights & Principles Coalition, Ranking Digital Rights, and the Dynamic Coalition on Platform Responsibility have sought to begin the conversation about how the rights to freedom of speech, access to information, and privacy, freedom from violence, discrimination, and hate speech, and due process might be protected in online networks. A growing number of efforts internationally seek to embed ‘Internet Bills of Rights’ and similar instruments into national and international law.

The other key part of this conversation requires us to do more to work out how decisions are actually being made. Before we can talk about how we want our collective future to look, we need more evidence about what is currently happening. In recent years, many companies have begun to release transparency reports that provide some insight into how they govern their parts of the internet, and how they respond to demands from law enforcement agencies and private actors to hand over user information or to censor content. We need more of this. But we also need to know more about the things we are not currently being told — like the government requests made under secretive national security arrangements, and the decisions that platforms make based on their own investigations into alleged breaches of their terms of service. This information is not public, even in summary form, and this is a real obstacle to informed debate.

Facebook’s trending topics processes fall into this category. We can’t have a frank and informed discussion about how digital media companies influence our access to news and current affairs without knowing what they currently do. The outrage we saw this month is not just about this specific issue — it’s a complaint about fairness and perceptions of bias that goes to the fundamental concern about legitimacy that the Rule of Law addresses.

At their core, many of the concerns about how our online spaces are governed come down to the basic principle that the power that is exercised over us should not be exercised in a way that is arbitrary or capricious. Greater transparency is the first step to developing systems that can help us flourish. Twenty years after Barlow’s Declaration of Independence, this project has never been more important.

Nicolas Suzor

I study the governance of the internet. Law Professor @QUTLaw and @QUTDMRC; Member of @OversightBoard. All views are my own. Author: Lawless (July 2019).