Trolling as a Service in 2022

Vadim Berman
Tisane Labs
6 min read · Dec 5, 2022

In 2018, the New York Times published a detailed, technically rich exposé about Devumi, a now-defunct US company that operated Twitter bots using stolen identities. The complex bot infrastructure Devumi was selling was not built by Devumi alone: it relied on other services like YTBot and Peakerr.

A multi-level ecosystem takes a while to mature. If Devumi, with its niche service providers, was operating at full steam in 2018, then the first commercial bot factories had to have emerged years earlier. As social forensics researcher Geoff Golberg was surprised to discover, follower sales were already significant in 2016. Counterintuitively, the target and the customer may not be the same party: that is, someone else could buy followers for your account to make you look bad. It’s not like the service is expensive: Devumi charged US$17 for 1,000 followers.

Now and Then

Devumi no longer exists; it was sued and shut down by the US government for social media fraud. Its suppliers Peakerr and YTBot, registered elsewhere, are still alive and well, complete with SSL certificates and facilities to accept credit card payments.

The popular perception of troll factories is shaped by the Internet Research Agency, the infamous Russian company engaged in influence operations and linked to Yevgeny Prigozhin. According to research by Fontanka.ru from March 2022 (source in Russian), structures linked to Prigozhin launched a hiring campaign offering salaries of ~45,000 RUB (over US$700). But is there still a pragmatic reason to run such an operation in-house today?

In 2022, social media manipulation services (or trolling for hire) are a commodity. Even “bots for Putin” may not be Russian, nor even full-time paid posters. A campaign can be booked even by an individual, in many languages and in many countries.

We’re not talking about the early-stage offerings of likes, retweets, and subscriptions, as in 2018, or about managing call centre-like environments staffed with professional trolls. Today’s “TrollOps” involve a wide range of services: natural language processing, distributed processing, hybrid operation, crowdsourcing in a Mechanical Turk-like fashion, social engineering, and stealth.

Low-tech solutions combined with social engineering usually prove the most effective, both in terms of engagement and hosting expenses. A post saying “Fake news” in the comment section of Flipboard is a standard opening move, the King’s Pawn Game of trolling. Someone from the opposite political camp will likely be provoked. Does the article merely cite another article? Does it discuss a court decision that cannot possibly be fake news? So what? A stupid remark is even more likely to generate a negative response, and that’s all they want!

How do they know where to post it? One way is to detect a combination of negative sentiment and a mention of a particular political figure. Or, given the current partisan political environment, the operators can simply assume that a particular publication will be negative towards that figure most or all of the time, and skip the sentiment analysis altogether.
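For illustration, here is a minimal sketch of the first approach, using NLTK’s off-the-shelf VADER sentiment analyzer. The target names, sample posts, and threshold are invented for the example; VADER stands in for whatever classifier a real operator would use.

```python
# A toy sketch of targeting: flag posts that mention a chosen political
# figure with negative sentiment. TARGET_NAMES and the threshold are
# invented for this example.
from nltk.sentiment import SentimentIntensityAnalyzer  # pip install nltk

# One-time setup: import nltk; nltk.download("vader_lexicon")
sia = SentimentIntensityAnalyzer()

TARGET_NAMES = {"senator doe", "jane doe"}  # hypothetical figure

def is_worth_trolling(post: str, threshold: float = -0.3) -> bool:
    """True if the post mentions the target and reads as negative."""
    text = post.lower()
    if not any(name in text for name in TARGET_NAMES):
        return False
    # VADER's compound score ranges from -1 (most negative) to +1.
    return sia.polarity_scores(post)["compound"] <= threshold

posts = [
    "Senator Doe's latest bill is a disaster for small businesses.",
    "Lovely weather in the capital today.",
]
print([p for p in posts if is_worth_trolling(p)])  # only the first survives
```

The second approach is even cheaper: hard-code a list of publications known to be hostile to the figure and drop the sentiment check entirely.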

Today, several off-the-shelf platforms (sold at least to law enforcement agencies) allow managing fake personas, choreographing hardware fingerprints (SIM cards + devices), and generating text content GPT-style based on predefined profiles (e.g. radicals of a certain type). I had a chat with one of the vendors at a law enforcement trade event. I was told, “it’s a headache to manage even two sock puppet accounts. With our platform, you can manage tens of them, and generate content effortlessly”. My impression is that the TrollOps vendors do not yet have access to these platforms, but it’s just a matter of time before they gain these capabilities.
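To make the idea of a “predefined profile” concrete, here is a hypothetical sketch of what such a persona record might contain. The field names are my own guesses, not any vendor’s actual schema.

```python
# A hypothetical persona record for a managed sock puppet account.
# All field names are illustrative guesses, not any vendor's schema.
from dataclasses import dataclass, field

@dataclass
class Persona:
    handle: str                # account name on the target platform
    sim_id: str                # SIM card tied to the account
    device_fingerprint: str    # device the account "lives" on
    profile: str               # steers the GPT-style text generation
    talking_points: list[str] = field(default_factory=list)

puppet = Persona(
    handle="concerned_citizen_1982",
    sim_id="SIM-0042",
    device_fingerprint="pixel-6a-7f3c",
    profile="radical, flavour X",
    talking_points=["election fraud", "media bias"],
)
print(puppet.handle, puppet.profile)
```

The point of the choreography, presumably, is that each persona keeps a consistent device and SIM pairing, so platform-side fingerprinting sees tens of “different users” rather than one operator.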

Even today, the pricing is dangerously affordable.

A cyber-bullying campaign in Indonesia will set you back 20K IDR (~US$1.33) on Instagram, 15K IDR (US$1) on Ask.fm, or 10K IDR (US$0.67) on Twitter / Facebook.

In the ex-USSR countries, these services are often associated with the Russian term nakrutka (накрутка), which roughly translates as “artificial boost” (or “astroturfing”). These services started as simple boosts of followers and likes and grew into more comprehensive offerings, like comments in various languages, sometimes employing a mix of bots and real users. Bot likes and subscribers cost around $0.25 per thousand; comments are more expensive, between $5 and $80 per thousand. The cheapest are comments prepared by the customer; the most expensive are comments from “real users” in a specific country.

What are these “real users”?

Both moderators and ordinary social media users today know how to spot paid posters by characteristics like a narrow focus on the same topic or a recent sign-up date. “Pens for hire” with a mixed focus on crypto and a controversy du jour (vaccines, politics, etc.), especially one not directly relevant to them, are taken seriously by fewer and fewer people. This is when TrollOps operators resort to hiring legitimate users with a long history of posting, as shown in the screenshot below:

The trade-off is higher prices, lower margins, and slower engagement. As a compromise, hybrid services are offered.

Who’s Bribing the Watchers

What about the other side of the equation? Is there an equivalent of dirty cops in content moderation?

There is. Moderators are bribed for two purposes:

  1. To ignore a violation.
  2. To ban or dox a user.

Overall, given the anonymity of moderators, quality control, and the chain of command, bribery is not easy to pull off, except when the moderation team is small and entirely in-house. But it does happen.

A trust & safety team in one of the largest marketplaces in APAC shared that scammers were trying to bribe the moderators to clear their posts. (Since we were having that conversation, obviously, it failed.)

In another instance, one that made the news, Iranian intelligence officials offered Instagram moderators as much as €5,000 to €10,000 to remove certain accounts.

Given the fluidity of the manpower involved, I personally wonder if it ever worked, but apparently, it happened.

Who Pays for TrollOps

In simple words: all sorts of parties. These services are affordable enough for an average person or a small business. Of course, political factions employ them, too.

We found the comment below in a review of one of the TrollOps vendors.

In another well-publicized case, Twitter and Facebook removed thousands of fake users created by a marketing firm called Smaat.

Meaning:

  1. TrollOps are used by political candidates. (Duh!)
  2. The results may be hit and miss.
  3. There is no reason for the beneficiaries to pay the TrollOps providers directly. Instead, a PR agency gets a wide mandate (possibly with vague terms) and does whatever it deems effective. The candidates themselves may know, or guess, or simply pay for an all-inclusive package without asking for too many details. In other words: plausible deniability.

Countering TrollOps

Technological solutions can only get us so far. Simple checks, such as when the user signed up, or shared characteristics of users posting within the same time period, can provide cues. Automatically generated comments may be trackable with machine learning tools. Identical or similar posts published by different users can be detected.
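As a rough sketch of that last check, here is a minimal near-duplicate detector based on word-shingle Jaccard similarity. The technique choice, the threshold, and the sample posts are mine, not a description of any production system.

```python
# A minimal sketch of near-duplicate detection for coordinated posts:
# compare word-shingle sets with Jaccard similarity. The 0.7 threshold
# and the sample posts are arbitrary choices for illustration.
import re

def shingles(text: str, n: int = 3) -> set:
    """Lowercase, strip punctuation, return the set of n-word shingles."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a or b else 0.0

def near_duplicates(posts: list, threshold: float = 0.7) -> list:
    """Return pairs of post indices whose shingle sets overlap heavily."""
    sets = [shingles(p) for p in posts]
    return [
        (i, j)
        for i in range(len(posts))
        for j in range(i + 1, len(posts))
        if jaccard(sets[i], sets[j]) >= threshold
    ]

posts = [
    "Fake news! This outlet has been lying about the senator for years.",
    "Fake news!! This outlet has been lying about the senator for YEARS.",
    "Great recipe, thanks for sharing.",
]
print(near_duplicates(posts))  # [(0, 1)]
```

Pairwise comparison like this does not scale; a production system would likely use something like MinHash with locality-sensitive hashing to find candidate pairs first. But the principle is the same: slightly altered copies still share most of their shingles.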

But as we saw, regular users can be bribed into becoming part-time trolls. Posts can be staggered in time and slightly altered. And even if we manage to get rid of disposable sock puppet accounts at scale, the next stage could be an equivalent of crypto mixing services, granting legitimacy to the sock puppets.

The legal avenue, however, remains underexplored. Regardless of one’s political affiliation, is there a legitimate reason for these services to exist?

An argument can be made that these companies are set up with the purpose of defrauding, and they often inflict actual damage. In less extreme cases, they defraud the social media operators by inflating the worth of user accounts.

In some countries, these companies are already outlawed; after all, Devumi was shut down. However, this is not at all universal. In fact, some of these providers are companies registered in the EU. Some were even financed by EU venture capitalists. (Possibly going around start-up trade shows talking about overcoming adversity and making the world a better place.)

They all have working SSL connections with valid certificates and accept popular means of payment. Many boast that they employ legal methods only, and it could well be true.

Should it stay legal?
