Fake accounts, fraud and dystopias

Enrique Dans

Aug 28, 2013


As a result of an interview for Spanish television, I have been investigating the phenomenon of fake accounts on social networks, and particularly of fake Twitter accounts, a subject that has been in the news recently as a result of a study on how these black markets work. The extent of these practices is surprising, but even more so is the parallel race between those who create fraudulent accounts and those who try to track them down, along with the ways these services are used.

A team of researchers at Berkeley has been working with Twitter to get more detail about these practices. They spent five thousand dollars on 120,000 fraudulent accounts from 27 different suppliers over 10 months, at a cost of between 10 and 200 dollars per thousand accounts, all overseen and approved by Twitter. A look at some of these suppliers’ web pages is illuminating: they offer anything from basic packages of unfiltered followers, with no photograph and the default blue background, to followers from a particular country or in a particular language, with a photograph, a brief biography, a different, “personalized” background, or a certain level of consistent activity in the form of updates or retweets. You can buy accounts, followers, and even services that increase your count of followers on a daily basis. You can buy from shadowy back-room operations, or from suppliers that offer a range of safeguards, guarantees, and even after-sales service.

The majority of the services that try to calculate the percentage of fraudulent followers for a particular Twitter account are extremely simplistic, using basic metrics such as activity or the ratio of followers to followed. Applying one of these supposed “monitors” to an account shows that users who have never bought followers still obtain scores of between 10 and 20 percent: a methodology as absurd as declaring that any account with a certain level of inactivity is a robot, or that any account that only uses Twitter to follow others, without creating its own content, must be fake.
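To make the problem concrete, here is a minimal sketch of the kind of naive check such monitors apply. It is not any real service’s algorithm, and the thresholds are invented for illustration; it simply shows how a perfectly legitimate lurker gets misclassified:

```python
# A crude "fake follower" check of the kind simplistic monitors use:
# it looks only at activity and the follower/following ratio.
# Thresholds are invented for illustration.

def naive_fake_score(tweets: int, followers: int, following: int) -> float:
    """Return a rough 0..1 'fakeness' score from two basic metrics."""
    score = 0.0
    if tweets < 10:                        # low activity is read as "robot"
        score += 0.5
    if following > 10 * max(followers, 1):
        score += 0.5                       # follows far more accounts than follow back
    return score

# A real user who mostly reads and rarely posts is flagged as maximally
# suspicious: exactly the false positive described above.
print(naive_fake_score(tweets=3, followers=12, following=400))  # prints 1.0
```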

Treating the results of these tests as trustworthy is highly simplistic, and proof that the phenomenon has not been understood at all, because in reality, identifying a fake account requires a complex combination of heuristics spanning many variables. For example: tracking the IP address from which the account was created, to see whether an abnormal number of accounts originate from it (the reason many of these providers use botnets to open accounts from zombie computers), or variables such as the level of personalization of the account, the ratio between followers and followed, posting patterns, and the type of content being distributed. On top of that, patterns obtained from observing large numbers of accounts are used, such as recognizable structures in usernames or in the email addresses used to register, repeated activity, wave-patterned bursts of activity, and so on.
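By way of contrast with the naive check above, this is a hedged sketch of a multi-signal heuristic of the kind described here; the features, weights, and thresholds are my own assumptions for illustration, not Twitter’s actual model:

```python
import re
from dataclasses import dataclass

@dataclass
class Account:
    username: str
    registrations_from_same_ip: int  # accounts created from the same IP
    has_photo: bool
    has_bio: bool
    followers: int
    following: int

def heuristic_fake_score(acct: Account) -> float:
    """Combine several weak signals into one 0..1 score (illustrative weights)."""
    score = 0.0
    # Many registrations from a single IP suggests bulk creation or a botnet.
    if acct.registrations_from_same_ip > 50:
        score += 0.35
    # Auto-generated usernames often end in long digit runs ("maria84721").
    if re.search(r"\d{4,}$", acct.username):
        score += 0.20
    # Low personalization: no photo, no biography.
    if not acct.has_photo:
        score += 0.15
    if not acct.has_bio:
        score += 0.10
    # Extreme follower/following imbalance.
    if acct.following > 20 * max(acct.followers, 1):
        score += 0.20
    return min(score, 1.0)

suspect = Account("maria84721", registrations_from_same_ip=300,
                  has_photo=False, has_bio=False, followers=2, following=900)
print(heuristic_fake_score(suspect))  # 1.0: several weak signals stack up
```

No single signal is conclusive on its own; it is the accumulation of weak signals across many variables, and across many accounts, that makes the classification credible.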

At the same time, the providers of these fake accounts try to defeat the identification mechanisms: on Twitter, accounts with verified email addresses command much higher prices. On Facebook, the price difference between a thousand accounts not verified by phone and a thousand that are can be anything between 400 and 1,800 dollars. The asymmetric nature of the internet means it is often worth “maturing” accounts: having them follow many users, or even follow each other, giving rise to networks that are very difficult to disentangle unless an aggregated view can be built.
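A toy sketch of why such rings only become visible in aggregate (a hypothetical mini-graph; real detection works at a vastly larger scale):

```python
# Each follow edge looks unremarkable on its own; only the aggregated view
# reveals a dense cluster of accounts all following each other.
from collections import defaultdict

# Hypothetical follow edges: (follower, followed).
follows = {("a", "b"), ("b", "a"), ("a", "c"), ("c", "a"),
           ("b", "c"), ("c", "b"), ("real_user", "a")}

reciprocal_edges = defaultdict(int)
for follower, followed in follows:
    if (followed, follower) in follows:   # the follow is mutual
        reciprocal_edges[follower] += 1

# Accounts whose connections are almost all reciprocal, inside a small
# clique, are candidates for a purchased, mutually "maturing" ring.
for user in sorted(reciprocal_edges):
    print(user, reciprocal_edges[user])   # a 2, b 2, c 2
```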

What is behind this black market in fraudulent accounts? Aside from criminal activities such as creating accounts to send links to malware or spam, the main use is to create or boost an appearance of relevance, for reasons ranging from personal vanity to corporate objectives. In many cases, we are talking about people who want to appear important and reach a wide public: politicians, actors, etc. It is also possible to buy followers for a rival, only to expose them as a fraud later on.

The same approach can be used on YouTube, where the number of views a video receives determines its popularity: there are companies that sell five thousand “hits” for between 10 and 15 dollars. Klout, by contrast, establishes relevance indicators based not just on a particular account, but on the level of response it generates among those that follow it, using variables such as replies, retweets, or favorites. It is a big market, for which there seems to be big demand.
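The distinction matters because response-based metrics are much harder to buy. A rough sketch of such an indicator follows; the weights are my own assumption, not Klout’s actual formula:

```python
def engagement_score(replies: int, retweets: int, favorites: int,
                     tweets: int) -> float:
    """Average response generated per tweet, with illustrative weights."""
    if tweets == 0:
        return 0.0
    # Retweets propagate content furthest, so weight them highest (assumption).
    weighted = 1.0 * replies + 2.0 * retweets + 0.5 * favorites
    return weighted / tweets

# Purchased followers inflate the audience but reply to nothing, so the
# score stays flat: 10,000 fake followers add exactly zero here.
print(engagement_score(replies=40, retweets=25, favorites=60, tweets=50))  # 2.4
```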

Can buying followers ever be justified? In principle, the mere mention of resorting to a system of this type by any agency, consultant, or “expert” should be enough for us to stop using their services, never to return. Although an increase in followers can, in theory, serve to raise the visibility of a recently created account or to spark the flame in certain viral situations, such practices are not only relatively easy to detect and expose, but will likely backfire, and are not sustainable: a shortcut that will likely prove costly, and end up making those behind it look like cheats.

Just because many people cannot detect the fraud doesn’t make the fraud right; it should be exposed for what it is. That said, the fake followers market doesn’t seem to be in decline; if anything, it is showing signs of becoming a stable market, resistant to detection by the social networks, and clearly profitable. Will we see the day when robots running sophisticated programs develop fictitious identities that imitate real people, with activity that follows its own logic?

Imagine an army of fake profiles with virtual lives, available to the highest bidder, and very difficult to distinguish from real people. The idea is an intriguing one, worthy of a novel by Philip K. Dick adapted to the virtual world: in many ways, the development of heuristics to differentiate real accounts from fake ones takes us one step closer to the Voight-Kampff test used in Blade Runner to distinguish between humans and replicants.

Are we really heading toward a future populated by social “replicants” that can set trends and create viral phenomena on the basis of commercial interests, so that certain groups of humans will follow them like sheep?

Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)