The digital dark side
The advent of digital media promised so much.
Savvy shoppers would no longer be surreptitiously swayed by advertising alone. The internet would enable them to research products. Social media would allow them to share their experiences. And ecommerce would provide the power to compare prices.
Empowered by these tools of transparency, consumers would expose poor products and endorse better brands. Over time, omnipotent brands would cede power to omniscient consumers. With perfect information, shoppers would make perfect purchases. They would find the best possible products at the best possible prices. And with a smartphone in their pocket, they would do it anywhere, any time.
Or at least this was the promise. Unfortunately, this pledge is increasingly looking like a pipe dream. A far-flung fiction. An unattainable utopia.
This is the story of the digital dark side. Told in five parts.
In September 2016, Apple unveiled a slew of high-profile products. But it was the AirPods that stole the show. The coveted £159 headphones quickly became an early-adopter favourite and Apple struggled to keep up with demand.
This popularity did not go unnoticed. Before long, eagle-eyed Chinese manufacturers fired up their factories and began flooding Western marketplaces with copycat products. At the time of writing, an Amazon search for ‘AirPods’ returns product listings from companies named Gejin, KeVis and YonBii. These listings have polished product renders and technical descriptions. The only difference is the sub-£35 price.
These three, however, are just the tip of the iceberg. My search query returned over 5,000 results. Clearly, replica electronics are big business. Big enough, in fact, that Apple has taken note.
“Apple employs teams of specialists who are constantly investigating points of sale around the world, and working with resellers, e-commerce sites, and law enforcement to remove counterfeit products from the market.”
But it’s not just the world’s most valuable company that struggles with counterfeit competitors. For years Amazon was one of the USA’s largest distributors of Birkenstocks. But in July 2016 Birkenstock’s CEO pulled all of the brand’s stock from the site, citing the retailer’s inability to curtail counterfeits. In a memo obtained by CNBC, the chief executive made his position clear:
“The Amazon marketplace, which operates as an ‘open market’, creates an environment where we experience unacceptable business practices which we believe jeopardize our brand. Policing this activity internally and in partnership with Amazon.com has proven impossible.”
And even small companies are not immune.
Take Yekutiel Sherman, the Israeli entrepreneur who spent 12 months developing a smartphone case that unfolds into a selfie stick. To fund his idea, Yekutiel launched a Kickstarter campaign. But within a week of the campaign going live, Yekutiel found products on AliExpress with the same design and even the same name.
It seems no matter how big your business — from a crowdfunded selfie stick to the world’s most valuable company — fake products are a threat.
Of course, counterfeit products are nothing new. They have always existed in one form or another. But they have never existed on such an industrialised scale. They have never extended into so many niches. And they have never been sold in the same stores as their authentic inspiration.
Fortunately, modern retailers provide the perfect tool for shoppers to judge the quality of each product listing.
Or at least that’s what we were promised.
For years, Amazon allowed brands to elicit consumer reviews as long as the incentive was a free or discounted product and the review was clearly labelled. This changed in October 2016, when an Amazon blog post announced that all incentivised reviews now had to be managed by the Amazon Vine program. Frustrated by the loss of control, marketers sought alternative means.
Before long a vast black market had emerged. In a revealing exposé for BuzzFeed News, Nicole Nguyen forged connections with members of this underground industry. Her piece indicated that the business is organised into two tiers. The bottom tier is an informal array of shady Facebook groups:
“Type in ‘Amazon Review’ into Facebook’s search box, and more than a hundred groups will pop up. Two of the more popular groups, Amazon Review Club and US — Amazon Review Club, which had 69,000 and 60,000 members, respectively, were recently shut down, but many more groups remain, with tens of thousands of members apiece.”
The groups act like a dark reflection of Amazon itself; each its own marketplace connecting marketers seeking five-star reviews with consumers seeking payment. To understand their inner workings Renee DiResta, policy lead at Data for Democracy, began joining these covert communities:
“DiResta’s first act in the groups was to write “interested” next to a post describing a pair of Bluetooth headphones for $35.99. Almost immediately, a Facebook user purportedly named SC Li sent her a direct message, calling her “dear” and asking for a link to her Amazon profile. If she reviewed the headphones, SC Li said, he would reimburse her via her PayPal account.
Within an hour of getting SC Li’s message, DiResta got a slew of direct messages from other sellers, asking her to review tea lights, containers, shower caddies, badge holders, sanding discs, rain ponchos, pocket-size vanity mirrors and butterfly knives. The messages came in so quickly, she said, she barely had time to respond.”
And this is just the bottom tier.
The most industrious participants recruit other members into more secretive, closed communities hosted on private Discord servers and invite-only Slack channels. Once the group is populated, the organisers approach susceptible brands and broker deals directly. The moderator then passes instructions on to the channel’s members and takes a 20–25% commission from every user’s review payment.
Between these two tiers, the number of black-market reviews created is overwhelming. The analytics firm ReviewMeta has built an algorithm that analyses reviews and flags imposters. When the company processed its database of 58.5m Amazon reviews, 9.1% were labelled ‘unnatural’. In particularly questionable categories that figure rises sharply: 51% of reviews for Bluetooth headphones, 56% for weight-loss pills and 67% for testosterone boosters were flagged.
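ReviewMeta’s actual model is proprietary, but the shape of such a detector can be sketched with a toy heuristic. Everything below — the field names, the signals and the thresholds — is my own illustrative assumption, not ReviewMeta’s method:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Review:
    rating: int       # 1-5 stars
    verified: bool    # verified purchase?
    word_count: int
    posted: date

def looks_unnatural(review: Review, daily_reviews: int) -> bool:
    """Toy heuristic: flag a review that combines several weak fraud signals.
    The signals and the >= 2 threshold are illustrative assumptions."""
    signals = [
        review.rating == 5 and not review.verified,  # unverified five-star
        review.word_count < 10,                      # near-empty review text
        daily_reviews > 50,                          # burst of reviews that day
    ]
    return sum(signals) >= 2

burst = Review(rating=5, verified=False, word_count=4, posted=date(2018, 3, 1))
print(looks_unnatural(burst, daily_reviews=80))  # True
```

A production system would, of course, weigh dozens of signals statistically rather than counting three booleans, but the principle — scoring reviews against patterns that organic reviews rarely show — is the same.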
Despite being one of the most relied-upon tools in a shopper’s arsenal, it’s increasingly clear that reviews cannot always be trusted. Fortunately, purchasers have another, more authentic tool in their armoury.
Or at least that is what we were promised.
As well as reviews, consumers use influencer recommendations when evaluating products. In fact, 92% of consumers say they trust influencers more than advertising. I believe this is about to change. Over the past few years we’ve witnessed a shift away from trustworthy influencers and towards superficial ones.
Take Elijah Oyefeso, an influencer who claims to be the founding member of the #richkidsofinstagram clique. According to a Daily Mail profile, Oyefeso earns £30k per month by investing on the stock market. On social media he posts images of his luxury lifestyle: £250k cars, private jets and £5k-per-month apartments.
Inspired by his success, Oyefeso has amassed a vast audience who follow him in the hope of learning the tricks of the trade. Oyefeso sends every new follower a direct message linking to a trading platform and the opportunity to earn £400 for half-an-hour’s work.
As good as all this sounds, there are two fundamental problems.
Firstly, Oyefeso’s advice is bogus. The Guardian found that the platforms pay Oyefeso £40–£80 for each user he refers, while the average recruit loses 80% of their investment. He’s not an investment banker; he’s an affiliate marketer.
Secondly, Oyefeso’s entire public persona is a sham. He doesn’t buy the cars. He doesn’t travel on private jets. And he doesn’t rent luxury penthouses. It’s all a front. Oyefeso is not a #richkid at all. In fact, he’s broke.
Despite continued activity across his social media pages since September, Oyefeso has spent part of that period in jail, detained on counts of dangerous driving and possession of a weapon after running over a friend to whom he owed money. During his trial the judge remarked:
“You portrayed yourself as a very successful trader within the financial market. Clearly this is not the case.”
Later in the case Oyefeso’s own lawyer confirmed the judge’s suspicions:
“Oyefeso makes a number of claims about his wealth but I have seen no evidence of this. (…) Clearly if he had this money he could have written a cheque to the victim.”
Furthermore, The Guardian’s investigation dug into Oyefeso’s financial history. The only company registered to the influencer’s address was IWANTTOTRADE Ltd, which was dissolved in 2016 without posting a penny of income.
In summary, everything about Oyefeso is a fiction. His trading career doesn’t exist. The advice he gives only profits himself and the lifestyle he displays is a luxury veneer covering a core of vapid mundanity. Unfortunately, Oyefeso is not alone.
In an attempt to expose the business of fake influencers, the agency Mediakix created two fake Instagram accounts. The first, Calibeachgirl310, featured images borrowed from a Los Angeles model; the second, Wanderingggirl, was populated with free stock photos. Within a few weeks, brands had offered both bait accounts a combination of free products and money totalling more than $500. In Mediakix’s experiment, no original photos were taken. There were no hired cars or links to trading platforms. Nothing. But brands and followers were still duped.
Fortunately, users have a simple measure to separate the fact from the fiction.
Or at least that is what we were promised.
In their 2017 Influencer Intelligence Report, the research firm Gartner L2 defines seven levels of influencer, ranging from Advocates (0–5k followers) all the way up to Celebrities (7m+ followers). Implicit in this categorisation is the assumption that those with more followers also have more credibility. I believe this is increasingly invalid.
Run a quick search for “how to buy followers” and Devumi will appear among the top results. Click through to their polished page and you’ll spot the hallmarks of a trustworthy business: a US address, customer testimonials, a money-back guarantee.
But something doesn’t feel right.
Devumi claim to use “1000+ web partners” to achieve the follower increases. Separately they tout an “exclusive network of over 5m users”. Then, somewhat vaguely, they claim to partner with a “crowd of influencers”. In reality Devumi operate a stock of 3.5m social media bots. The sole purpose of this army of automated accounts is to mindlessly follow the accounts of Devumi’s mindless customers.
To better understand Devumi’s business, The New York Times set up a dummy Twitter account and paid the service $225 for 25,000 followers. These automated accounts can broadly be segmented into two groups:
“The first 10,000 or so looked like real people. They had pictures and full names, hometowns and often authentic-seeming biographies. One account looked like that of Ms. Rychly, (…) but on closer inspection, some of the details seemed off. The account names had extra letters or underscores, or easy-to-miss substitutions, like a lowercase ‘L’ in place of an uppercase ‘I’. The next 15,000 followers from Devumi were more obviously suspect: no profile pictures, and jumbles of letters, numbers and word fragments instead of names.”
It’s the first batch which worry me most. These accounts are clones of accounts held by real people. The Ms. Rychly bot is actually a duplicate of an account run by a real person. The two accounts share a name, photograph and bio-line but their behaviours couldn’t be more different. The fake account, for example, promotes Canadian real estate investments, cryptocurrencies, a Ghanaian radio station and graphic pornography. The real account belongs to a teenage girl from Minnesota.
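The substitutions the Times describes — extra underscores, a lowercase ‘l’ standing in for an uppercase ‘I’ — are mechanical enough that a crude clone detector can be sketched in a few lines. The confusable-character table below is a deliberately tiny illustration, not a real homoglyph database, and the handles are invented:

```python
def canonicalize(handle: str) -> str:
    """Collapse the tricks described by the Times: strip underscores and
    map a few easily-confused characters onto a single form."""
    confusable = str.maketrans({"l": "i", "1": "i", "0": "o"})
    return handle.lower().replace("_", "").translate(confusable)

def likely_clone(real: str, suspect: str) -> bool:
    """Different on the surface, identical once the disguise is removed."""
    return real != suspect and canonicalize(real) == canonicalize(suspect)

# Invented handles for illustration only
print(likely_clone("JessicaRychly", "Jessica_Rych1y"))  # True
```

The point of the sketch is that these clones are cheap to generate precisely because the disguise is so shallow; the same shallowness is what makes them detectable at scale.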
Devumi’s stock of 55k clone accounts represents a form of digital identity theft perpetrated on an unprecedented scale. So who is paying for such a service? The New York Times’ answer will shock you.
“The Times reviewed business and court records showing that Devumi has more than 200,000 customers, including reality television stars, professional athletes, comedians, TED speakers, pastors and models.”
The list keeps going. Customers include Olympic heroes, such as James Cracknell, technology billionaires such as Michael Dell, and members of the House of Lords such as Martha Lane Fox.
And influencers are no different.
According to a study by the Points North Group many well-known brands have used influencers whose social media followings are bolstered by fake followers. For example, Ritz-Carlton’s sponsored posts have been published by influencers with an astonishing 79% fake following. Aquaphor influencers have a fake following of 52% and L’Occitane ambassadors have a fanbase comprised of 39% fake followers.
Follower counts have been the primary way brands have ranked influencers in terms of both reach and credibility. This is no longer viable.
Fortunately, we have other measures to fall back on.
Or at least that is what we were promised.
Historically the success of social media posts has been measured by the amount of engagement they receive. The more likes, comments, shares or retweets a post garners the more successful it is seen to be. But just as there is a black market for fake followers, there is also a web of services selling automated engagement.
The anti-fraud company Sway Ops analysed a single day’s worth of Instagram posts tagged with #sponsored or #ad. Their investigation found that over 50% of all engagements were fake. More worryingly, they found only 18% of comments were made by real, human users.
It’s easy to assume the black market is fuelled by big, nefarious firms. But this couldn’t be further from the truth.
BuzzFeed News caught up with 22-year-old Kent Heckel in his small, lofted bedroom in the Potrero Hill neighbourhood of San Francisco. Next to his bed, Kent keeps a Raspberry Pi running a ‘bot farm’ of 2,900 automated Instagram accounts. Every few seconds, a script checks his clients’ accounts for new posts and likes each one with every bot. Running 24 hours a day, Kent has turned a $35 computer and a few lines of code into a $12k-a-month business:
“It’s not just Russian bots and hackers, it’s 22-year-old kids in their dorm rooms and influencers and brands of all sizes. The damage is done on a very large level because nothing is genuine.”
And there is a second, much harder to detect, method of generating fake engagement. Pods are groups of around 30 Instagrammers who comment on and like each other’s posts. This mutual advocacy provides the initial jump-start required to game Instagram’s algorithm. While pods may not sound too controversial, there is always someone willing to push the idea to an extreme.
In February 2018, the Viral Hippo Instagram account posted a photo of a plain black square. Within 24 hours the post had racked up over 1,500 likes, including engagement from verified models, influencers, fitness coaches and travel accounts. Another post, of a yellow square, resulted in almost 3,000 likes. A diagram of the human sinus and an accidental shot of a hubcap received 1,400 and 1,200 likes respectively.
The posts were part of an experiment run by BuzzFeed News to expose Fuelgram, a service that essentially operates a mass pod. Fuelgram’s users hand over their Instagram username and password, pay around $15 and start posting at designated times of day. Each time you post, other Fuelgram users automatically like your post and your account automatically likes theirs in return. Fuelgram is effectively turning real people’s Instagram accounts into bots.
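The arithmetic that makes pods and Fuelgram work is trivial: if every participant automatically likes every other participant’s new post, each post opens with guaranteed engagement proportional to the group’s size, regardless of content. A minimal simulation (member names and group sizes invented for illustration):

```python
def instant_likes(members: list[str], poster: str) -> int:
    """Every pod member except the poster auto-likes the new post."""
    return sum(1 for member in members if member != poster)

pod = [f"user{i}" for i in range(30)]          # a typical ~30-member pod
mass_pod = [f"user{i}" for i in range(3000)]   # a Fuelgram-style mass pod

print(instant_likes(pod, "user0"))       # 29 likes before any real fan arrives
print(instant_likes(mass_pod, "user0"))  # 2999 likes, even for a black square
```

This is why a plain black square can accumulate four-digit like counts within hours: the engagement is a function of membership, not merit.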
What’s perhaps most worrying is that many aspiring Instagrammers are beginning to see the use of bots and pods as a necessary evil in their bid for fame and fortune. Influencers use these tools to grow their audience and inflate their fees. In doing so they simultaneously game their audience, the social network and the brands that they work with. What was once a way for brands to assess influencers has become a way for influencers to deceive brands. The tables have turned.
It seems the promise was a pipe dream all along.
Add all this together and you get fake products, with fake reviews being promoted by fake influencers with fake followers and fake engagement.
And this isn’t even the full extent of the problem. This article could have been twice the length. I could have had chapters on fake internet traffic. Or fake YouTube views. Or fake clicks on display ads. Or fake Facebook profiles. The list goes on.
To be clear, I’m not suggesting that all brands fall foul of these five threats. Nor am I claiming that all online activity resides in this digital dark side. In fact, I believe the vast majority of activity is genuine and well-intentioned. But I do think there has been a shift. A once-small issue at the fringe of the digital ecosystem is bleeding into the middle and creeping into the mainstream.
The good news is that there is hope. Keith Weed, Unilever’s CMO, recently made a stand against such “dishonest practices”, Facebook have deleted 1.3b fake accounts in the last 6 months and Twitter have revoked Devumi’s API access.
But one-off actions will not solve the problem. We need bigger, ongoing, collective vigilance. The retail marketplaces must do better at controlling fake products and fake reviews. Agencies must do better at vetting the influencers they recommend. Social networks must do better at governing the use of fake followers and fake engagement. And ultimately, brands must hold everyone to far higher standards.
Only then will we begin to dismantle the digital dark side. Only then will we turn the rip-off into the real, the fake into the factual and the corrupt into the credible.
Only then will we realise the true promise of digital media.
- Samuel Scott was kind enough to link to this article in his piece ‘If there’s a recession in 2019, here’s what marketers should do’ for The Drum. Thank you Samuel.