Launching today: new collaborative study to diminish abuse on Twitter

With multiple methods, online abuse can be greatly diminished — if we also discover which methods actually work in which situations.

April 6, 2018

by Susan Benesch and J. Nathan Matias

Online abuse has become so widespread and pervasive that it poses a serious challenge to public welfare around the world. Like other complex social problems, it requires a rigorous, multifaceted search for solutions. It can’t be solved merely by trying to delete all harmful content online (even if that were technically possible), since even such a draconian policy would not prevent new content from causing harm. It’s like recalling unsafe foods without preventing new food from being contaminated and sold — or towing away crashed cars without trying to make new cars safer. Abuse can’t be solved by any single method, since it’s posted in so many forms, by a wide variety of people, and for many reasons.

With multiple methods, online abuse can be greatly diminished — if we also discover which methods actually work in which situations. Quite a few promising ideas have been floated over the internet’s 35-year history, but most of them have never been systematically tested. For lack of evidence, tech companies may be persisting in practices that worsen abuse, and may be overlooking simple ideas that could help millions of people.

Research has shown that when institutions publish rules clearly, people are more likely to follow them.

Today Twitter will begin testing such an idea: that showing an internet platform’s rules to users will improve behavior on that platform. Social norms, which are people’s beliefs about what institutions and other people consider acceptable behavior, powerfully influence what people do and don’t do. Research has shown that when institutions publish rules clearly, people are more likely to follow them. We also have early evidence from Nathan’s research with Reddit communities that making policies visible can improve online behavior. In an experiment starting today, Twitter is publicizing its rules to test whether this improves civility.

We proposed this idea to Twitter and designed an experiment to evaluate it. This project also pioneers an open, evidence-based approach to improving people’s experiences online, while protecting their privacy. Industry and university researchers have worked together before. What makes this effort unique is that we have chosen to conduct it as an open collaboration under a set of legal, ethical, and scientific constraints. This will protect Twitter users, safeguard the credibility of our work, and ensure that the knowledge gained from it will be available for anyone to use — even other internet companies.

This project also pioneers an open, evidence-based approach to improving people’s experiences online, while protecting their privacy.

We hope that this project will supply practical knowledge about preventing abuse online, and that our process will inspire further transparent, independent evaluations of many other ideas for reducing online abuse.

Research Process

As we test abuse prevention ideas, our research team is also testing a process for independent, outside evaluation of the policy and design interventions that tech companies make. Here’s how it works.

Independence

Our independence from Twitter is the basis of this process. We are academic researchers, we do not work for Twitter, and we neither sought nor received money from Twitter for this project. We set two conditions for carrying out the project in collaboration with the company: that it share the information we need to analyze the results, and that we retain freedom to publish the results in a peer-reviewed academic journal. We negotiated and signed legal agreements in which Twitter agreed to these conditions and we agreed to protect trade secrets on technical issues unrelated to our main questions.

User Privacy

To protect privacy, the company will only give us anonymized, aggregated information. Since we will not receive identifying information on any individual person or Twitter account, we cannot and will not mention anyone or their Tweets in our publications.

Privacy is central to our process, which therefore limits the data we work with, protects the data when we’re doing analysis, and restricts our use of any data to this study only. We designed the study with Twitter in a way that allows us to evaluate the experiment without needing to know the names or other personal information of any account. When the study finishes, Twitter will prepare an aggregated dataset for our analysis. Data from Twitter will remain on an encrypted system of ours where we will conduct our analysis, and we will not share the data beyond that system. After confirming the integrity of the data we receive, we will follow the agreed plan to analyze the results. Since peer review can take time, and since other researchers may ask us to double-check details, we will hold the data and any backups for two years after the study concludes, and then delete it all.

Transparency and Accountability

To be as transparent as possible, we are publicly announcing the study before it starts. Since we cannot describe it in detail now without jeopardizing the integrity of the results, we have taken two other steps. First, the full study design was approved by two university ethics committees (whose task is to make sure researchers conform to rules that protect people who may be affected by research). Second, we have filed our analysis plan — a detailed description of how the study will be carried out — with a neutral third party called the Open Science Framework. This ensures that we, and Twitter, can be held accountable for exactly what we are doing.

This practice of filing the plan with a neutral third party is called pre-registration, and it means that we described in detail what our hypotheses were, exactly how Twitter would carry out the intervention, what outcomes we would observe, and what statistical methods we would use to analyze the results — all before the study started. This ensures that we do the analysis exactly as we planned before we saw the results, even if that analysis produces disappointing findings. Pre-registration, a best practice in open science, protects the credibility of both our team’s research and Twitter’s involvement. When the study is over and we are ready to publish our results, we will make the plan public on the Open Science Framework so that others can confirm that we followed it and, just as important, can replicate our study.
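To give a concrete, purely hypothetical sense of what a pre-specified analysis can look like (the outcome, the numbers, and the test below are invented for illustration and are not part of our actual plan, which is filed on the Open Science Framework), a minimal sketch in Python might compare an aggregated outcome between the randomized groups:

# Hypothetical illustration only; not the pre-registered plan filed on OSF.
# It shows the general shape of a pre-specified comparison between
# randomized groups, using invented aggregated counts.
from statsmodels.stats.proportion import proportions_ztest

# Invented aggregated outcomes: accounts with at least one rule
# violation during the study window, per experimental arm.
violations = [420, 510]       # [rules shown, control]
accounts = [10000, 10000]     # accounts randomized into each arm

# The test, its direction, and the significance threshold are all
# fixed in advance; that commitment is the essence of pre-registration.
stat, p_value = proportions_ztest(violations, accounts, alternative="smaller")
print(f"z = {stat:.2f}, one-sided p = {p_value:.4f}")

The point of the sketch is only that every choice in it (the outcome, the direction of the test, the significance threshold) is committed to before any results are seen.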

We will make our findings available to anyone, so that they can be freely used. We will submit our research for academic peer review and publish it in an academic journal.

Ethics

Our process also conforms to standard university practices for research ethics. This project was reviewed and approved by our ethics boards: MIT’s Committee on the Use of Humans as Experimental Subjects (COUHES) and McGill University’s Research Ethics Board (REB), in coordination with the ethics committee at the University of Bath.

When designing this study, we considered asking each person for consent in advance or, alternatively, notifying people after the end of the study that they might have been involved. In this research, asking people to opt in could lead to biased findings. Post-study notification through Twitter was also impractical, since Twitter wouldn’t be able to reach people who have left the platform, and our research team wouldn’t be able to reach anyone, as we will not have any account names or contact information. In such conditions, U.S. research ethics policies require experiments to meet standards of minimal risk. We believe that the messages we are testing meet this standard, and the MIT and McGill ethics committees both approved this study. Because the consent issue is so important, some on our team are designing new ways to coordinate the consent and accountability of large-scale research. If you have questions about the ethics of this research, please contact J. Nathan Matias at jnmatias@mit.edu.

Acknowledgments

We’re deeply grateful to some brilliant colleagues who helped on this project. Lindsay Blackwell wrote a survey of relevant research in social science, which guided our thinking when designing the experiment. Marc Brackett also provided early guidance. Andrew Sellars at the BU/MIT Technology & Cyberlaw Clinic helped to draft and negotiate the collaboration agreements. We are also grateful to many people at Twitter who worked to make this project possible.

About the Researchers

This project is led by Susan Benesch and J. Nathan Matias. Journalists with questions about the project should contact us at susan@dangerousspeech.org and jnmatias@mit.edu.

Susan Benesch (@susanbenesch) is a Faculty Associate of the Berkman Klein Center for Internet & Society at Harvard University. She founded and directs the Dangerous Speech Project, which studies speech that can inspire violence — and ways to prevent such harm without infringing on freedom of expression. This includes extensive work on online communication, and the Dangerous Speech Project is part of the Twitter Trust and Safety Council.

J. Nathan Matias (@natematias) is a postdoctoral research associate at Princeton University, affiliated with the Department of Psychology, the Center for Information Technology Policy, and the Department of Sociology. He is a visiting scholar at the MIT Media Lab Center for Civic Media and the founder of CivilServant, which organizes citizen behavioral science for a fairer, safer, more understanding internet. CivilServant has worked directly with communities of tens of millions of people to discover effective ideas for improving online life and to evaluate the impact of social technologies.

Derek Ruths is an associate professor in the School of Computer Science at McGill University where he runs the Network Dynamics Lab. In his research, Derek studies ways to use data to measure and predict large-scale human behavior.

Adam Joinson (@joinson) is Professor of Information Systems at the University of Bath, where he conducts research on the intersection between technology and behaviour — including work on communication patterns, influence, security and privacy, and how design can influence behaviour.
