Digital Dominance: Is Big Tech Doing Enough to Protect Human Rights?

Center for Media, Data and Society
The CMDS Blog
Apr 26, 2018

By Laura Reed

The “big tech reckoning” may finally be upon us.

Source: https://rankingdigitalrights.org/index2018/

Many have argued for years that tech companies like Facebook and Google have grown too big and too powerful, threatening the public interest. Now these companies are beginning to face public pressure to account for how their businesses have affected people’s ability to exercise their rights to freedom of expression and privacy.

Earlier this month, Facebook founder and CEO Mark Zuckerberg came under fire for how Facebook had mishandled the vast amounts of user information the company collects, allowing the data of 87 million Facebook users to leak to Cambridge Analytica. This followed months of debate and inquiry into how Facebook, Twitter, YouTube and others may have inadvertently facilitated Russia’s interference in the 2016 U.S. presidential election through their platforms. Shareholders of both Twitter and Facebook have released memos demanding the companies do more to address the problems of fake news and harassment on their platforms. And in less than a month, the General Data Protection Regulation comes into effect in Europe, requiring tech companies to be more transparent about how they collect and handle user information and to give users more control over it.

While public pressure in the form of shareholder activism, Congressional hearings, and regulation seems to be mounting, it’s unclear whether these mechanisms will be enough to hold companies accountable for their massive influence over people’s ability to exercise their rights.

What are the world’s largest tech companies doing to take responsibility for the power they wield?

Unfortunately, the answer is: not enough. Research from Ranking Digital Rights’ 2018 Corporate Accountability Index, released this week, reveals the ways in which companies fail to be sufficiently transparent about the policies that affect users’ freedom of expression and privacy.

Companies are opaque about how they handle users’ information and police content on their platforms

Results from Ranking Digital Rights’ 2018 Corporate Accountability Index show that companies lack transparency around two key issues where they hold enormous power: how they handle user information, and how they enforce the rules that determine what information or content users can access and share.

Most companies ranked in the Index, including Facebook, Google, and Twitter, fail to clearly disclose all the types of user information they collect, who they share it with, and how long they retain it. This includes not just the information that users knowingly give to the platform, like account information, but also a vast array of other data companies collect about users, such as location information, browsing history, and login activity, as well as data the companies infer about people, all of which enables them to build detailed profiles of their users.

This lack of transparency not only creates substantial privacy risks for users, but also makes it harder for users to understand how much information companies really hold about them. Many argue that Facebook’s data collection practices are justified because the company provides users with a compelling social platform for free. But users’ data generates enormous wealth for these companies, and users need to be able to understand the value they bring to these platforms. Many users who learn just how much data the company collects about them may decide that this is not a fair trade.

Similar risks to users’ rights stem from companies’ lack of transparency around their content moderation policies. Internet companies, particularly social media platforms, have in many ways become the new public sphere: their platforms are where people go in the digital age to find information, share ideas, and learn about current events. Yet how companies govern the content on their platforms remains unclear. While most internet companies disclose something about what content or activities they prohibit, fewer disclose clear information about the processes they use to identify violations, and most do not disclose any data about the volume and nature of the actions they take to enforce these rules. Without greater transparency, companies’ efforts to police and manage content lack accountability.

These risks are compounded by the sheer size of these companies’ user bases, and there is a clear, urgent need for companies to seriously grapple with the human rights risks posed by their business decisions. Currently, there are few effective mechanisms for holding these companies accountable. This lack of accountability makes it all the more likely that rights-related crises will go unnoticed or ignored until they become a public relations problem, by which point the damage to many users is already done.

Importantly, we have seen some significant moves toward greater transparency in the past week alone. Facebook finally published its content moderation guidelines, offering greater insight into how the company decides what kinds of content it does and does not allow, and why. YouTube became the first company to publish a terms of service enforcement transparency report, with detailed data about the number of actions it took in a given period to enforce its rules (i.e., removing content from its platform).

These changes come after years of sustained pressure from digital rights groups, activists, and users affected by the companies’ policies. These successes show how important it is to keep pushing for greater transparency in emerging areas where companies’ platform governance is shaping human rights. Regulation can be a useful tool when properly applied. But for the complex challenges posed by today’s digital giants, we also need companies to meet their human rights obligations and disclose enough information about their policies so that stakeholders can have an informed debate, push for better policies, and craft regulations that will hold companies accountable.

Laura Reed is Senior Research Analyst and Coordinator with the Ranking Digital Rights project, responsible for conducting research for the Corporate Accountability Index, helping develop the Index methodology, and coordinating the work of RDR’s external researchers. Before joining RDR, Laura conducted research on the intersection of human rights and information technology for several research institutes. As a research analyst with Data & Society, she wrote about the role of algorithms in shaping digital media and the public sphere. Previously, Laura was a research analyst for Freedom on the Net, Freedom House’s annual index of global internet freedom. In this capacity, she collaborated with local researchers around the world to assess internet users’ access to technology, freedom of expression, and right to privacy. Prior to working at Freedom House, Laura was a research intern at the International Center for Transitional Justice. She graduated with a master’s degree in Human Rights from Columbia University with a focus on media and transitional justice. Laura currently lives in New York.

