Big Tech’s Accountability Problem
Twitter is well aware of the manipulation that takes place across their platform, yet continually refuses to address it
Welcome To The Circus (We’ve Got Bots & Sockpuppets!)
Yoel Roth is Twitter’s Head of Site Integrity. I can objectively say that he is failing. Miserably, in fact.
As a direct result of Yoel’s inability to effectively do his job, I have received numerous death threats, I have been doxed, and my Twitter account is currently restricted (such that I can only use Twitter to DM with accounts which follow mine):
The reason my account is restricted is because I was mass reported by inauthentic Iranian-focused accounts:
Threats, unfortunately, are a regular occurrence given the nature of my work. I have opted to spend the past 1.5 years researching social media manipulation — developing techniques and tools to identify information operations.
In the beginning I would report some of the threats to Twitter; I gave up on that, however, after they dismissed a death threat I received from the XRP Army as not being in “violation of the Twitter Rules against abusive behavior:”
Being Silenced (The First Time)
While my account is in this “temporarily limited” state, Twitter has essentially stripped me of having a (public) voice on their platform.
I was similarly silenced by Twitter back in April 2017, when I was no-platformed by Twitter-owned Periscope (yes, I was no-platformed before no-platforming became in vogue).
Twitter didn’t like that I blew the whistle on pedophile networks which are rampant across their live video ecosystem — so they opted to remove me, rather than devoting proper resources to content moderation:
Last Saturday (May 25th) the same group of Iranian-focused accounts, which, for years, have engaged in coordinated inauthentic behavior, launched a propaganda attack targeting the National Iranian American Council (NIAC):
Below is a network graph representation of 6,638 Twitter accounts that tweeted, retweeted, or were mentioned in tweets which include #NIACLobbies4Mullahs (source data: 95,108 tweets between May 25th, 2019 and May 26th, 2019):
NOTE: I am not asserting that every account in the graph is inauthentic (please spare me the “your methodology is wrong, Trump isn’t a bot!” tweets) — this simply maps the relationships between accounts in the dataset
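The graph above maps relationships between accounts: an edge is drawn whenever one account retweets or mentions another. A minimal sketch of that mapping, using illustrative field names (not Twitter's actual API schema) and stdlib counters in place of a full graph library:

```python
from collections import Counter

def build_engagement_graph(tweets):
    """Return a weighted edge list: (source, target) -> interaction count,
    where an edge means source retweeted or mentioned target."""
    edges = Counter()
    for t in tweets:
        targets = list(t.get("mentions", []))
        if t.get("retweeted_user"):
            targets.append(t["retweeted_user"])
        for target in targets:
            edges[(t["user"], target)] += 1
    return edges

# Toy data; "user", "retweeted_user", and "mentions" are assumed names
sample = [
    {"user": "acct_a", "retweeted_user": "acct_b", "mentions": ["acct_c"]},
    {"user": "acct_a", "retweeted_user": "acct_b", "mentions": []},
]
edges = build_engagement_graph(sample)
print(edges[("acct_a", "acct_b")])  # 2
```

Feeding the resulting edge list into a layout/visualization tool is what produces graphs like the one shown, with tight retweet clusters standing out as dense clumps.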
Below are 20 accounts that each retweeted tweets which include #NIACLobbies4Mullahs hundreds of times:
There was a large spike in tweet volume from @jumpAvocado on May 25th (477 tweets), when the propaganda attack targeting NIAC took place:
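Spikes like @jumpAvocado's are easy to surface by bucketing an account's tweets per calendar day. A minimal sketch, assuming ISO-formatted timestamps under an illustrative `created_at` field:

```python
from collections import Counter
from datetime import datetime

def daily_volume(tweets):
    """Bucket tweet timestamps by calendar day; a single day towering
    over the rest suggests a coordinated burst."""
    return Counter(
        datetime.fromisoformat(t["created_at"]).date().isoformat()
        for t in tweets
    )

# Toy data; field name "created_at" is an assumption
sample = [
    {"created_at": "2019-05-24T10:00:00"},
    {"created_at": "2019-05-25T09:00:00"},
    {"created_at": "2019-05-25T09:05:00"},
]
print(daily_volume(sample)["2019-05-25"])  # 2
```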
One of the accounts, @paiz_zana (userid: 799637272412954625), has since changed their username. Cycling through usernames is a tactic which is frequently employed by nefarious actors (this is done to make detection more difficult).
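Username cycling is defeated by keying on the numeric user ID, which stays fixed across handle changes. A minimal sketch of tracking handle history per user ID (the helper name is mine, not from any Twitter tooling):

```python
def record_username(history, user_id, username):
    """Append username to user_id's history only when it changes, so the
    history length reveals how often an account has cycled handles."""
    seen = history.setdefault(user_id, [])
    if not seen or seen[-1] != username:
        seen.append(username)
    return seen

history = {}
record_username(history, "799637272412954625", "paiz_zana")
record_username(history, "799637272412954625", "paiz_zana")  # unchanged, no-op
record_username(history, "799637272412954625", "zAnA")       # handle cycled
print(history["799637272412954625"])  # ['paiz_zana', 'zAnA']
```

Running this on each crawl snapshot builds a per-ID handle history; accounts with long histories are strong candidates for closer inspection.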
Out of “zAnA’s” most recent 598 tweets, 212 of them (35.5%) include #NIACLobbies4Mullahs.
Here’s a summary of the accounts which most actively amplified (i.e. retweeted) the #NIACLobbies4Mullahs hashtag:
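A summary like this reduces to counting retweets per account within the hashtag's dataset. A minimal sketch, again with illustrative field names rather than Twitter's real API schema:

```python
from collections import Counter

def top_amplifiers(tweets, hashtag, n=20):
    """Rank accounts by how many retweets they issued carrying hashtag."""
    counts = Counter(
        t["user"]
        for t in tweets
        if t.get("is_retweet") and hashtag in t.get("hashtags", [])
    )
    return counts.most_common(n)

# Toy data; "user", "is_retweet", and "hashtags" are assumed names
sample = [
    {"user": "amp_1", "is_retweet": True, "hashtags": ["NIACLobbies4Mullahs"]},
    {"user": "amp_1", "is_retweet": True, "hashtags": ["NIACLobbies4Mullahs"]},
    {"user": "amp_2", "is_retweet": False, "hashtags": ["NIACLobbies4Mullahs"]},
]
print(top_amplifiers(sample, "NIACLobbies4Mullahs"))  # [('amp_1', 2)]
```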
By gaming Twitter’s platform to make #NIACLobbies4Mullahs trend (via coordinated inauthentic behavior), the group seeks to create the illusion of a larger support base than actually exists. The vast majority of engagement (retweets/likes) on the below tweets is driven by inauthentic accounts:
Astroturfing: the deceptive tactic of simulating grassroots support for a product, cause, etc., undertaken by people or organizations with an interest in shaping public opinion
The above hashtag is virtually identical to the one we looked into earlier, the only difference being that this one begins with NAIC (a typo) rather than NIAC.
Here’s the first tweet I found using the erroneous hashtag:
Sometimes these trolls can be funny (they even gave me a verified badge in the below image… which is an upgrade vs. my actual Twitter account!):
One of the earliest authentic accounts which tweeted the (typo) hashtag was Mariam Memarsadeghi:
Scanning through the accounts which tweeted the erroneous hashtag (and those which engaged with said tweets), it’s clear the vast majority are inauthentic.
I personally find it difficult to trust Big Tech given my experiences with Twitter, across multiple fronts. There is a large (and growing) contingent that also feels Big Tech lacks transparency, proper communication, and accountability.
Specific to the pedophile networks which are rampant across Twitter’s live video ecosystem, Twitter told the BBC in July 2017 that they have “zero tolerance” for this kind of conduct:
Periscope’s comment moderation feature at that time actually facilitated sexual exploitation of children (brought to the attention of Twitter by myself and many others):
Meanwhile, Twitter was touting their “strong content moderation policy” and reporting system (the reporting system was also a joke).
Ultimately, it’s very easy for Big Tech to make statements which don’t align with their actions/reality. Moreover, Big Tech can opaquely and unaccountably remove people from their platforms, just like Twitter did to me in April 2017 (and after I had invested significant amounts of time/money to build an audience of 35K followers).
Fast forward to today: Twitter generally ignores (blatant) platform manipulation, even when researchers/academics spoon-feed them data/evidence.
Twitter does engage with a select group of “Official Partners” and a handful of individuals. If you don’t fit that bill, however, and you bring substantive information to Twitter with respect to platform manipulation/influence operations, chances are you’ll be ignored.
As was the case with Twitter’s public statements re: pedophiles grooming children across their live video ecosystem, Twitter similarly opts for virtue signaling over substance, stating they “proactively identify suspicious account behaviors that indicate automated activity or violations of [their] policies..”
That’s what they say, at least:
In practice, Twitter is nothing but reactive in this regard. Moreover, they are entirely complicit and negligent.
So, How Do We Fix Things?
Given the inherent conflict of interest (properly disposing of inauthentic/inactive accounts will devastate platform metrics!), it is my position that Big Tech companies shouldn’t be allowed to self-police:
In the context of Twitter (whose API is more open than those of its peers), it is possible to audit their work.
For example, I am certain that 90%+ of the nodes/accounts in the below graphics should be suspended for engaging in coordinated inauthentic behavior (Twitter has opted to remove some of the amplifying accounts while allowing the main beneficiaries of said amplification to remain active):
I am being intentionally vague about these graphs as I will do a separate post which will deep-dive the accounts (most of which are connected to MEK trolls + astroturfing).
What I would like to call attention to is that the same accounts I have highlighted above (namely, @no2censorship and @peymaneh123) were identified over a year ago in a peer-reviewed academic journal:
Twitter: please start doing your job more effectively. From what I have seen, it doesn’t appear that proactively mitigating against information operations/platform manipulation is a priority.
Geoff Golberg is an NYC-based researcher (and entrepreneur) who is fascinated by graph visualization/network analysis — more specifically, when applied to social networks and blockchain activity. His experience spans structured finance, ad tech, and digital marketing/customer acquisition, both at startups and public companies. Geoff spends (far too much of) his time developing techniques and building tools to identify social media manipulation (of various flavors!)