Graph-based privacy for Twitter communities
Twitter offers its users two options: make a profile private and choke off social sharing (defeating the purpose of being on the platform, IMO), or let anyone follow you and tweet at you. I think Twitter should open up a third option: graph-based privacy.
- Create a rule that blocks anyone who has used the racist hashtag #ISaluteWhitePeople from tweeting at me or replying to me.
- Create ‘rules’ and combinations of rules based on patterns like the one above (“Follows X & Y, retweets P & Q”) and map them to permissions for all the major actions on Twitter.
- Allow graphs I trust to communicate with me. Let’s say that I follow Jane. Jane follows Sally. I’m happy to err on the side of allowing Sally to follow me and tweet to me without needing approval. Depending on how restrictive a user wants to be, they could restrict viewing, retweeting and quoting tweets in the same way.
- Assist human efforts with classifier algos.
- Offer hard rules to ban bots whose follower and followed counts are nearly equal (a ratio close to 1), accounts with fewer than X followers, and accounts less than X days old. Stack Overflow uses its reputation system to restrict abuse to great effect.
A message like “Sorry! You can’t tweet to this person until you’ve got 200 followers!” means you have to work to build trust in a network I trust before you can talk to me. For some users, this is the right level of protection.
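The rules above can be sketched as one permission check. This is a minimal illustration, not Twitter's implementation: the `Account` record, the thresholds (200 followers, 30 days, a follower/followed ratio near 1), and the function names are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    # Hypothetical record; real accounts would come from Twitter's own graph.
    handle: str
    followers: int
    following: int
    age_days: int
    follows: set = field(default_factory=set)       # handles this account follows
    used_hashtags: set = field(default_factory=set)

def is_probable_bot(acct: Account) -> bool:
    """Hard rules: follower/followed ratio near 1, tiny or brand-new accounts."""
    ratio_near_one = (acct.following > 0
                      and 0.9 <= acct.followers / acct.following <= 1.1)
    return ratio_near_one or acct.followers < 200 or acct.age_days < 30

def trusted_by_graph(me: Account, other: Account, directory: dict) -> bool:
    """Follow-of-follow trust: I follow Jane, Jane follows Sally, so trust Sally."""
    if other.handle in me.follows:
        return True
    return any(other.handle in directory[friend].follows
               for friend in me.follows if friend in directory)

def may_tweet_at_me(me: Account, sender: Account, directory: dict,
                    banned_hashtags: frozenset = frozenset({"#ISaluteWhitePeople"})) -> bool:
    # Rules are checked in order: hashtag ban, then bot heuristics, then graph trust.
    if banned_hashtags & sender.used_hashtags:
        return False
    if is_probable_bot(sender):
        return False
    return trusted_by_graph(me, sender, directory)
```

With this shape, a user's privacy settings are just a different rule list passed to the same check, which is what makes the rules composable and shareable.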
Possible Questions & Concerns
- Won’t this increase the filter bubble? I have no idea. I’m not sure it can get worse. Safety first, everyone?
- Won’t this fragment Twitter and lead to lots of ‘tweet not shown’? Not in every implementation. I have a feeling that people will overwhelmingly allow their tweets to be seen by everyone, and probably favorited and retweeted too. I think read-only public visibility combined with graph-based interaction is the solution.
- What about false positives? Yes, it will happen. But just like private profiles, someone can become an ‘approved’ follower. That feeds back into the rules and benefits everyone.
- Will this be confusing? To normal users, yes. A powerful, correct abstraction frequently requires an investment of time and understanding to use properly. I think the way around this is to have shareable privacy templates that dedicated people invest in keeping current, or to make this a collective effort. Allow communities of people to define their boundaries. Personally, I would be happy to defer to others if what they are doing is visible and can be overridden.
- No, seriously, isn’t this basically Google+ circles? Wasn’t this idea a confusing UX failure? It wasn’t the concept that was bad. I think if they had just said ‘tags’ rather than ‘circles’ everyone would have understood Google+. You ‘tag’ someone as friend or family, then add tags to posts. The spatial concept of being ‘in’ a circle made this needlessly abstract and complicated. Facebook calls the same thing ‘custom lists’, which is also a bad abstraction: it should be users who have tags, which you apply at the top of their profile and which are seen only by you. There’s no need for the extra concept of a ‘list’ or ‘circle’. I.e., rather than ‘list’ has ‘user’ → ‘user’ has ‘tag’. This is what’s been missing.
- Are we really ready to label people en masse? Won’t we see racist and hateful tags? Maybe, yeah.
- Didn’t Twitter already accomplish this with shared block lists? No. Exporting a CSV of blocked accounts doesn’t protect against the new accounts that crop up constantly. It’s painful to see outside services like https://blocktogether.org/ trying to pick up the slack while Twitter’s core users are screaming about this.
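The ‘user has tag’ model from the Google+ question above can be sketched in a few lines. Everything here is illustrative: the `TagBook` class and its method names are invented for this post, and the key point is that audiences are *derived* from per-user tags rather than stored as a separate ‘list’ or ‘circle’ object.

```python
from collections import defaultdict

class TagBook:
    """Per-viewer private tags: handle -> set of tags. Only you see your tags."""
    def __init__(self):
        self._tags = defaultdict(set)

    def tag(self, handle: str, *tags: str) -> None:
        # Applied 'at the top of their profile', in the post's framing.
        self._tags[handle].update(tags)

    def tags_of(self, handle: str) -> set:
        return set(self._tags.get(handle, set()))

    def audience(self, *tags: str) -> set:
        # Everyone carrying at least one of the given tags. No list object
        # is ever created; the 'circle' is just a query over tags.
        wanted = set(tags)
        return {handle for handle, t in self._tags.items() if t & wanted}

book = TagBook()
book.tag("jane", "friend")
book.tag("sally", "friend", "family")
book.tag("bob", "work")
```

Sharing a post with “friend” is then `book.audience("friend")`, and retagging a user instantly updates every audience they belong to, with no lists to maintain.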
In short, this is a graph-scale problem and requires a graph-scale solution. Focusing on actions individuals can take is fundamentally misguided and is hurting the platform.