You’ve been Trumped.
Even on Twitter, you and Trump aren’t on the same playing field.
Another election year bites the dust, but the repercussions of this election are sure to be felt for years to come. The 2016 presidential election revealed the increasingly competitive role of social media during the campaign season, and not exactly in the spirit of democracy.
Whether it was the candidates themselves tweeting or their followers rallying support with #ImWithHer or #MAGA, Twitter played a divisive role, either boosting a candidate up or essentially bashing the other candidate to Hell and back. Enter: Mr. Donald J. Trump.
The questions still remain: Did Donald Trump win the election because of social media? How did he get so many retweets?
The 2016 presidential campaign sparked the first mass controversy of “Social Media Fakery,” a term coined by Andrew McGill. Trump and his supporters were heavily criticized for their use of social media during the campaign season, with much speculation as to how he was able to rally so much support on Twitter with his hateful and propagandistic tweets.
#DidYouKnow: Many of the accounts retweeting and following Trump were actually Twitter “bots”. McGill defines these as “automated accounts that exist only to extend the social reach of whoever hires them.” The bots seemed to mirror the progression of the campaign season — as election day approached and the candidates got, shall we say, a little less civilized, the bots became increasingly sophisticated. By this, I mean it became more difficult to figure out which accounts were bots spreading automated propaganda and which were real people voicing their support of Trump in the form of emotional and verbal abuse online. To make matters more difficult, researchers found that when they publicly identified automated accounts, the bots’ creators would start tweeting manually and claim that the account had been run by a person the whole time.
Though this is an extreme example that is blatantly offensive, social cues are absent in even the subtlest forms of mediated communication (i.e., on social media, over texting, etc.). Scholars who study online sociality think about how we present ourselves online in terms of social cues. Media scholar Nancy Baym explains this concept well: when you are talking to someone in person, you can communicate verbally and nonverbally. On most social media, like Twitter, you can’t. Think: you can’t see someone’s facial reaction to something you say, or read a message in the tone it was intended to carry. Because of this lack of social cues on Twitter, it seems like Trump and many of his supporters (bot or real) just said “eh, what the hell” and tweeted whatever they pleased, no matter how offensive. They normalized this behavior on Twitter, to the point that it became a go-to topic of conversation during the campaign season.
Twitter is also a versatile social networking platform, serving as a place where people can curate news quickly, engage with the opinions of others and see both sides of any given story. The 67 million active Twitter users (as of the third quarter of 2016) engage with the platform in many different ways to serve a variety of different purposes, which in turn affects how people interact with each other on Twitter. This is because of their differing media ideologies: users have wildly differing views, attitudes and beliefs about what is acceptable “Twitter protocol” (how often to tweet, what can and cannot be tweeted, etc.). This is especially prevalent when it comes to talking about politics. While most ideologies are beliefs shared by a group or society, that kind of consensus does not hold in such a versatile, mediated space.
Twitter users’ differing media ideologies contribute to a lack of unified social cues on the platform as a whole, making it hard to monitor what is and is not acceptable to tweet. The 2016 presidential election revealed that because of this lack of social cues, the difficulty of discerning what is real and what is fake on Twitter eroded any notion of “online equality” for users on the platform, giving way instead to abusive interactions and relationships — certainly not in the spirit of democracy.
Trump’s Twitter traffic comprises about 80 percent artificial accounts (his “cyborg army,” as Ben Schreckinger calls it), compared to about 50 to 55 percent for Hillary Clinton, which is considered “normal” for a public figure. These bots unleashed their wrath whenever an account attacked Trump, usually besieging the attacker with thousands of tweets.
“There is very clearly now a very conscious strategy to try to delegitimize opposition to Trump” — Patrick Ruffini
Republican strategist Patrick Ruffini “had 30,000 mentions over a weekend” after he tweeted something contrary to Trump’s interests, and claims, “there is very clearly now a very conscious strategy to try to delegitimize opposition to Trump.” Ruffini’s media ideology regarding Twitter made him feel as though the content he was posting was acceptable, albeit controversial, but that’s nothing new for Twitter.
So, where should the line be drawn? Ruffini was justified in posting his opinion, since Twitter lacks social cues dictating what can and cannot be posted about politics and how those messages should be received by others. On the other hand, the bots, which have no concept of social cues, were provoked and instructed to essentially take over Ruffini’s Twitter feed. This is contrary to the dominant media ideology (acceptable “Twitter protocol”) of the platform, under which most users expect back-and-forth tweets and replies but would not particularly appreciate being bombarded with 30,000 mentions in a weekend.
Schreckinger claims this is because they are “seek[ing] to shape the information environment that voters encounter by harassing critics into silence, spreading misinformation, energizing supporters by making them feel like part of a thriving movement, making the expression of fringe views appear socially acceptable, and shutting down conversation.” Since Twitter did not have strict enough policies in place to stop the bots, the bots essentially shut down freedom of speech on Twitter based on a computer-generated political hierarchy that undermined the potential of social equality on the platform.
With no concept of social cues, these bots defy all typical usage of Twitter: they tweet hundreds of times in an hour, they are programmed to attack anyone who disagrees with Trump, and the rest of their feeds are filled with nonsense. Each of these accounts tweeted excessively during the campaign, on average 200 times per day. @DRJAMESCABOT was a popular Twitter bot during the campaign season, and its feed has since been filled with an excessive obsession with the Rolling Stones and an apparent “mistress he wants for Christmas,” sometimes retweeting photos upwards of 50 times a day.
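The tweet-rate heuristic above can be sketched as a toy check. To be clear, the `looks_automated` function, its 200-tweets-per-day threshold, and the timestamp format are illustrative assumptions for this sketch, not any researcher’s actual detection method; real bot detection weighs many more signals.

```python
from datetime import datetime, timedelta

def looks_automated(timestamps, daily_threshold=200):
    """Flag an account whose average daily tweet rate exceeds the
    threshold (the ~200 tweets/day figure cited for campaign bots).
    `timestamps` is a list of datetime objects, one per tweet."""
    if len(timestamps) < 2:
        return False
    span_days = (max(timestamps) - min(timestamps)) / timedelta(days=1)
    span_days = max(span_days, 1.0)  # avoid inflating rates over tiny spans
    return len(timestamps) / span_days > daily_threshold

# A burst of 500 tweets inside a single day reads as automated...
burst = [datetime(2016, 11, 1) + timedelta(minutes=2 * i) for i in range(500)]
print(looks_automated(burst))  # True

# ...while 10 tweets spread over 10 days does not.
human = [datetime(2016, 11, 1) + timedelta(days=i) for i in range(10)]
print(looks_automated(human))  # False
```

A rate check like this is exactly the kind of signal the bot-makers learned to dodge, which is why, as noted earlier, identification kept getting harder as the bots grew more sophisticated.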
It is somewhat reassuring that this extreme behavior is a further indicator that these accounts could not possibly be human. Recognizing that the tweets are automated might ease some tension for those who feared for humanity based on what these bots were tweeting.
“[Bots] shape the information environment that voters encounter by harassing critics into silence, spreading misinformation, energizing supporters by making them feel like part of a thriving movement, making the expression of fringe views appear socially acceptable, and shutting down conversation” — Ben Schreckinger
Among the most controversial cases in the “bot or not” debate is Jennifer Mayers, who claims to be a wife and mother living in Louisiana. Based on her profile and a select few of her tweets, she presents as a devout Christian and a lover of Reba McEntire. Her self-presentation in her profile makes her out to be a pretty normal person. However, her tweets suggest otherwise.
The content that Jennifer posts seems like it would come from a human (albeit offensive and ridiculous), with varying topics, a normal number of tweets per day and a following of real accounts. She has a profile picture and even a blog. But how could a “real” person tweet such hateful, racist thoughts?
Jennifer Mayers is most likely a real person, not a bot, but her offline identity is probably not Jennifer Mayers. A LexisNexis report revealed that no woman of her name and age lives in the area where she claims to live. Additionally, she was “interviewed” about one of her tweets for a Medium post, but the interview took place entirely over email.
Jennifer presents an example of the jumbled mess that fueled the Twitter bot debate during the campaign season. With the increasing sophistication of bots, it was unclear whether she was the authentic person she presented as online or, in fact, just a computer-generated responder. She created a fake account that in many instances acted like a bot, supporting Trump and bullying others online with differing viewpoints. Lacking any grasp of the dominant media ideology that most Twitter users observe, Jennifer stifled free speech for those who disagreed with her, even while fully exercising her own right to free speech in an odd sort of expression of her beliefs. The apparent lack of social cues on Twitter made her feel invincible and powerful, contributing to her lack of understanding of social equality online and in reality.
She legitimizes what media scholar Nancy Baym describes as a “dystopian” view of social media, basically validating everyone who thinks that social media and technology are going to ruin our ability to communicate interpersonally. The lack of social cues in this case destroys relationships, offends pretty much everyone who sees the messages, and makes us wonder how we could ever “connect” with people online.
Social media, and Twitter specifically, has widely been argued to be a democratic force that offers more social equality than can be seen in reality. The hateful, attacking words spewed by bots and fake accounts during the campaign season broke down that ideology, revealing the need for Twitter to more closely monitor these kinds of accounts and content. Though Twitter is taking steps to do so, including the addition of a “mute” function that allows users to block certain words and phrases from appearing in their notifications, significant work remains to balance the free expression of users’ opinions with how those opinions are presented and received.