What happened to #Wayfair?
--
Tweets promoting a damaging narrative used subversive audience segmentation techniques to further spread the conspiracy theory
1. Online furniture retailer Wayfair was the subject of a damaging conspiracy theory, determined by Reuters to be false, that was amplified between July 10 and 14, 2020
2. Use of the Twitter hashtag #wayfarer instead of #wayfair indicates subversive audience segmentation associated with inauthentic behavior
3. Human amplification of the narrative spread the conspiracy across platforms, while an inauthentic attempt at amplification was less effective
A widespread conspiracy theory targeting online furniture retailer Wayfair was amplified between July 10 and 14, 2020. The narrative claims the retailer is a front for child trafficking (Reuters: FALSE), citing the use of first names in product titles and abnormally high prices for specific items such as pillows, cabinets, and shelving units. More about the conspiracy theory can be found at Insider, Rolling Stone, and Buzzfeed
A tweet containing the narrative dates back to June 14, 2020. However, NBC’s Ben Collins credited a tweet posted to Reddit’s /r/Conspiracy as the spark that pushed the narrative into the mainstream almost a month later. Afterwards, a popular tweet containing #wayfarer spread to Facebook via screenshot, where reposts eventually removed all attribution to the claim’s original creator.
An analysis of the memetic content carrying this narrative showed that the hashtag #wayfarer, rather than #wayfair, was used to propagate the conspiracy. This indicates the coordinated use of audience segmentation (“dividing people into homogeneous subgroups”) to spread the conspiracy.
By using a hashtag similar in name to the subject of the narrative, malicious actors can segment a susceptible audience into a subversive niche that cannot be found by searching for “wayfair”. Accounts that end up in these audience segments can then be targeted again with additional conspiratorial narratives. Screenshots of tweets containing #wayfarer spreading across Facebook serve as an entry point to the narrative, so users who return to Twitter and search for #wayfarer are funneled into these same segments.
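As an illustration, the sketch below flags “look-alike” hashtags using plain Levenshtein edit distance. This is a minimal example, not our pipeline; the `observed_hashtags` set and the two-edit threshold are hypothetical.

```python
# Sketch: flag "look-alike" hashtags that may segment an audience away
# from a brand hashtag. `observed_hashtags` and the 2-edit threshold
# are assumptions for illustration.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def lookalike_hashtags(target: str, observed: set[str], max_edits: int = 2) -> set[str]:
    target = target.lower().lstrip("#")
    return {h for h in observed
            if h.lower().lstrip("#") != target
            and edit_distance(h.lower().lstrip("#"), target) <= max_edits}

observed_hashtags = {"#wayfair", "#wayfarer", "#wayfairgate", "#savethechildren"}
print(lookalike_hashtags("#wayfair", observed_hashtags))
# -> {'#wayfarer'}
```

In practice, edit distance alone over-flags short tags, so a fuller workflow would also weigh how often a candidate co-occurs with the brand hashtag.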
We analyzed two clusters of accounts amplifying #wayfarer. The first cluster represented a human-amplified tweet by @FreeSuperhero1. The second cluster represented a network of inauthentic amplification, centered on a tweet by @AMotherinUSA.
Analysis: @FreeSuperhero1
The first network analysis, of the tweet by @FreeSuperhero1, indicates the post was amplified by authentic, human accounts. Each circle represents a Twitter account, and each line between two circles indicates that the accounts have interacted through replies, comments, or retweets. A content review of users interacting with this post indicates that the amplification is authentic, coming from genuine engagement by discrete human accounts, and the lack of connections between those users suggests the network did not coordinate beforehand.
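A minimal sketch of that structural check, assuming the interaction pairs have already been collected (the account names below are hypothetical) and using networkx, a common Python graph library:

```python
# Sketch: measure how connected a post's amplifiers are to one another.
# The interaction pairs are hypothetical collected data: (account_a,
# account_b) for replies, quote tweets, and retweets observed.
import networkx as nx

interactions = [
    ("@user1", "@FreeSuperhero1"),
    ("@user2", "@FreeSuperhero1"),
    ("@user3", "@FreeSuperhero1"),
]

G = nx.Graph()
G.add_edges_from(interactions)

# Remove the hub (the original poster) and check how densely the
# remaining amplifiers interact with each other. Near-zero density
# suggests they did not coordinate before converging on this tweet.
amplifiers = G.copy()
amplifiers.remove_node("@FreeSuperhero1")
print(nx.density(amplifiers))  # ~0.0 for an organic, star-shaped cluster
```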
Accounts amplifying this tweet also display features that indicate they are operated by humans: a face photo; uniquely identifying information in the bio, such as an employer; a link to an outside page that corroborates the account holder’s identity; and a posting frequency consistent with authentic use rather than automation or hyperactivity. Bot Sentinel, a service that analyzes Twitter accounts for inauthentic patterns, also scored the accounts as authentic.
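These signals can be turned into a rough triage heuristic. The sketch below is an illustration only; the field names, the three-signal cutoff, and the 72-posts-per-day hyperactivity ceiling are assumptions, not Bot Sentinel’s method, and no heuristic replaces manual content review.

```python
# Sketch: score authenticity signals on a profile. All field names,
# weights, and thresholds here are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Profile:
    has_face_photo: bool
    bio_has_identifying_info: bool   # e.g. employer, location
    has_corroborating_link: bool     # personal site, LinkedIn, etc.
    posts_per_day: float

def looks_human(p: Profile, max_daily_posts: float = 72.0) -> bool:
    """Return True when enough authenticity signals are present.
    The 72-posts/day ceiling is an assumed hyperactivity cutoff."""
    signals = [p.has_face_photo,
               p.bio_has_identifying_info,
               p.has_corroborating_link,
               p.posts_per_day <= max_daily_posts]
    return sum(signals) >= 3

print(looks_human(Profile(True, True, False, posts_per_day=4.5)))  # True
```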
Analysis: @AMotherinUSA
A second network analysis, of a tweet containing #wayfarer by @AMotherinUSA, indicates the post was amplified by an inauthentic network of coordinated accounts. As before, each circle represents a Twitter account, and each line indicates interaction through replies, comments, or retweets. The presence of multiple connections between accounts interacting with this post indicates that the amplification is inauthentic: this network of accounts regularly amplifies itself in a coordinated manner.
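One way to quantify that repeated self-amplification is to count how often the same pair of accounts boosts the same posts. The account names and timelines in this sketch are hypothetical illustrations:

```python
# Sketch: count co-amplification. Pairs of accounts that retweet the
# same posts far more often than chance allows are candidates for a
# coordinated network. The data below is hypothetical.
from itertools import combinations
from collections import Counter

# Hypothetical collected data: post ID -> accounts that retweeted it.
retweeters = {
    "post_1": {"@acct_a", "@acct_b", "@acct_c"},
    "post_2": {"@acct_a", "@acct_b", "@acct_c"},
    "post_3": {"@acct_a", "@acct_b"},
}

pair_counts = Counter()
for accounts in retweeters.values():
    for pair in combinations(sorted(accounts), 2):
        pair_counts[pair] += 1

for pair, n in pair_counts.most_common(3):
    print(pair, n)
# ('@acct_a', '@acct_b') 3  -- this pair co-amplifies everything
```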
This artificial and inauthentic cluster displays characteristics of an information operation that required modest resources and planning. The age of many of the accounts indicates a budget: buying “aged” Twitter profiles at an added cost, rather than creating brand-new accounts, lends more legitimacy to an inauthentic amplification network. In addition, many of the accounts use all of their profile space to signal political orientation rather than provide personally identifying information.
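Creation dates are one place this shows up. In the sketch below, with hypothetical dates, a tight cluster of old creation dates across otherwise unrelated accounts is consistent with bulk-purchased profiles:

```python
# Sketch: bucket account creation dates by month and look for bursts.
# The dates are hypothetical; a real check would also compare creation
# dates against activity gaps (old accounts reactivated together).
from datetime import date
from collections import Counter

created_at = [date(2012, 3, 1), date(2012, 3, 4), date(2012, 3, 9),
              date(2019, 11, 2), date(2012, 3, 2)]

by_month = Counter((d.year, d.month) for d in created_at)
print(by_month.most_common(1))  # [((2012, 3), 4)] -- a suspicious cluster
```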
The accounts in this cluster have also displayed a commitment to amplifying controversial and divisive messages with strong associations to conservative and alt-right ideology. A typical person has a variety of interests and posts content on multiple topics; coordinated networks of accounts, by contrast, often focus on one particular issue or target people of a specific political orientation. In this cluster, the overwhelming majority of accounts exclusively feature content associated with a single ideology. It is not the political orientation of these messages but their exclusive topical focus that indicates inauthenticity.
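Topical focus can be made measurable as the Shannon entropy of an account’s hashtag usage: genuine users spread posts over many topics (higher entropy), while single-issue amplifiers collapse toward zero. The hashtag lists below are hypothetical; the point is the contrast, not the exact values.

```python
# Sketch: Shannon entropy of an account's hashtag usage as a proxy for
# topical diversity. Hashtag lists are hypothetical illustrations.
import math
from collections import Counter

def topic_entropy(hashtags: list[str]) -> float:
    counts = Counter(h.lower() for h in hashtags)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

organic = ["#recipes", "#nba", "#wayfarer", "#gardening", "#oscars"]
focused = ["#wayfarer"] * 40 + ["#saveourchildren"] * 10

print(round(topic_entropy(organic), 2))  # 2.32 (diverse interests)
print(round(topic_entropy(focused), 2))  # 0.72 (single-issue focus)
```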
Conclusion
Disinformation campaigns are only effective when they reach a human audience. In this case, humans propagated the conspiracy authentically across platforms, amplifying the narrative more effectively than the network of coordinated accounts did. This pattern of spread is highly effective at damaging Wayfair’s brand, regardless of the narrative’s origin or validity.
Our dataset for the Wayfair conspiracy can be found at the following link: https://github.com/memeticinfluence/wayfair
You can analyze this data and the spread of any coordinated imagery here: https://www.maltego.com/blog/mapping-visual-disinformation-campaigns-with-maltego-and-tineye/
Visit us at www.memeticinfluence.com