Quick thoughts on #AskCostolo and Twitter’s abuse problem

Yesterday, CNBC had Twitter CEO Dick Costolo on air to answer questions submitted via #AskCostolo. Costolo quickly discussed $TWTR stock and Twitter’s monthly active user data, as expected since Q2 earnings were released right before the segment. Then, suddenly, the Q&A was over, without questions from #AskCostolo being addressed in any meaningful way, despite thousands of tweets, including mine, posing legitimate questions about abuse and harassment on the platform.

The closest Costolo got to a real discussion of user safety was mentioning the existence of the Support team and touting two-factor authentication. Two-factor is an incredibly important security measure, but it has nothing to do with the concerns voiced via #AskCostolo. I'm not worried about trolls trying to hijack my account; I'm worried about potentially dangerous stalkers threatening my life. And I'm not alone. The flood of concerned tweets sent leading up to and during the Q&A—from both men and women, real people who deserve more respect than being reduced to "Twitter feminists" or "Twitter activists"—shows that this is a major issue. Twitter's ongoing failure to take it seriously is unethical and irresponsible.

Twitter has a harassment policy and a reporting structure in place, of course. The problem is that both are completely ineffective. As stated in the TOS 'Twitter Rules': "Violence and Threats: You may not publish or post direct, specific threats of violence against others." But we know from experience that this exact thing happens every single day, with little or no response from Twitter Support.

There are obviously serious societal problems with sexism and racism at work here, and there is no technological solution that will end online abuse. But there are technical improvements Twitter could try that may greatly decrease instances of harassment. First, we need block/report features that actually work. In my experience—and judging by the #AskCostolo tweets, it seems this is happening to a lot of people—blocking is buggy, and blocks that do stick eventually expire, allowing the abuse to continue at a later date.

Second, users need more abuse-classification options when submitting reports—including the ability to report harassment directed at other users. Twitter also needs to expand its Support team to expedite the investigation process. The last time I filed an abuse report, it took two months to hear back. Two months. Beyond the painfully slow response time, Twitter's Support team failed to take action. After waiting two months, I received a request for more info, and a statement that if I failed to provide further documentation, they would drop my case. That's completely unacceptable, particularly when handling serious reports of rape and death threats. The lag is so bad that I often skip the report and flag abusive tweets as spam just to make them go away.

Beyond technical fixes to the abuse-reporting structure, IP banning was suggested repeatedly on #AskCostolo. IP banning probably wouldn't curb determined harassers because of the availability of anonymizing services like Tor. I'm also a bit uncomfortable with empowering a private company to be the arbiter of acceptable speech. But I want control over my own Twitter experience. If your Twitter experience included angry anonymous strangers hurling insults at you every time you voiced your opinion, how long would you stick around? What about rape and death threats? This is the reality for a lot of women, people of color, and other marginalized groups on Twitter. It's terrifying and exhausting, and for users unable or unwilling to stomach the rampant abuse, the only solution is to leave the platform.

Robust filtering and muting options would make a world of difference on Twitter. Using key terms and sentiment analysis to prevent people you don’t want to interact with from appearing in your mentions and timeline seems like a good place to start. This could extend to blocking abusive users from tagging you in tweets, and banning new ‘egg’ accounts—often created for the specific purpose of harassment—from being able to contact you. With sentiment analysis, you could set the tone of the tweets appearing in your feed. Ideally, those controls would rest with individual users, rather than a blanket threshold set by Twitter.
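To make the idea concrete, here's a minimal sketch of what user-controlled mention filtering could look like. Everything here is hypothetical: the `Tweet` record, the `author_is_new` flag standing in for fresh "egg" accounts, and the toy lexicon-based sentiment score (a real system would use a trained sentiment model, and would run server-side against Twitter's actual data).

```python
from dataclasses import dataclass

# Toy negative-word lexicon; a real system would use a trained sentiment model.
NEGATIVE_WORDS = {"kill", "die", "hate", "ugly", "stupid"}

@dataclass
class Tweet:
    author: str
    text: str
    author_is_new: bool = False  # stand-in for a fresh "egg" account

def sentiment_score(text: str) -> float:
    """Crude score in [-1, 0]: fraction of words found in the negative lexicon."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    if not words:
        return 0.0
    return -sum(w in NEGATIVE_WORDS for w in words) / len(words)

def visible_mentions(tweets, blocked_terms, min_sentiment=-0.2, hide_new_accounts=True):
    """Return only the tweets that pass this user's own filter settings."""
    kept = []
    for t in tweets:
        text = t.text.lower()
        if any(term in text for term in blocked_terms):
            continue  # keyword filter: terms the user never wants to see
        if hide_new_accounts and t.author_is_new:
            continue  # mute brand-new accounts entirely
        if sentiment_score(t.text) < min_sentiment:
            continue  # tone threshold, set per user rather than by Twitter
        kept.append(t)
    return kept
```

The key design point is the last argument defaults: `min_sentiment` and `hide_new_accounts` belong to the individual user, so each person tunes their own tolerance instead of living under one blanket threshold chosen by the platform.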

To be clear, I don't think the problem is that the Support team doesn't care about these issues. I imagine they're woefully understaffed and under-resourced, since Support doesn't seem to be prioritized by the company's leadership. And why might that be? It likely has something to do with the fact that most of Twitter's top positions are held by people who look a lot like Costolo—the company's leadership and technical staff are mostly white men who probably haven't had the experience of being inundated with horrifying, violent threats. (And, for the record, diversity questions tweeted with #AskCostolo were ignored too.)

This is an extremely complex problem, and I don’t pretend to have all the answers. And I think it’s okay for Costolo and Twitter to admit they don’t either. But by refusing to address these concerns publicly, Costolo failed his users. If these discussions are happening internally, Costolo needs to open up that conversation to include the people and communities affected by these issues. A little transparency would go a long way to making Twitter a safer platform for everyone.

This post originally appeared on my personal site, lainnafader.com.

--

Lainna Fader
TheLi.st @ Medium

Engagement Editor at New York Magazine. Ex-Newsweek, Forbes, and Wired.