An Informed Opinion May Be A Thing Of The Past
“World War III is a guerrilla information war with no division between military and civilian participation.”
— Marshall McLuhan, Culture Is Our Business (1970; repr. Wipf and Stock Publishers, 2015), p. 66
Will my next vote be based on accurate information?
I struggle to believe that my vote will have any significant impact in the 2020 elections. I was part of the group of hopeful millennial voters whose first voting experience was for President Obama back in 2012. After an experience like that, I was very optimistic when I discovered Bernie Sanders was running for President, and I was adamant that he was the best choice. However, throughout the 2016 election cycle, I found myself confused by both the sheer amount of content made against the Bernie brand and the fact that a lot of the information presented to me at the time about his opponents may or may not have been true. The advent of fake news made me dubious of almost everything I saw online.
I know how to go vote, but I don’t know how to be adequately informed before making a concrete decision. I’m sure I am not the only one in my generation who shares this sentiment. This dilemma stems from a variety of issues that have come to the forefront since the 2016 presidential election. To see why these issues make me hesitant to vote in 2020, consider what it is that people in their early twenties care about. We care about employment and finding ways to pay off our student debt, we care about making an impact on the world, and we inform ourselves mainly through our phones. With that in mind, the issues that make me feel like my vote in 2020 won’t be as informed as I want are the spread of propaganda and disinformation on social media and the fact that I’m stuck in an inescapable filter bubble online (regardless of the platform I use).
Disinformation in social media
One of the contributing factors to President Trump’s rise to power was Russian meddling in social media. The interference can be traced directly back to Russian troll farms created to disseminate disinformation and sow social discord on platforms such as Facebook, Twitter, YouTube, Reddit, Tumblr, and Medium. They spread memes systematically to users on both the left and the right of the political spectrum, creating insidious propaganda that was shared amongst various communities.
The most infamous of these troll farms, the Internet Research Agency, was responsible for interfering with the 2016 U.S. election by using memes and other forms of content to spin narratives about politics and American culture. With an estimated 400 employees and thousands of social media accounts at its disposal, this troll farm has the agenda of pushing pro-Kremlin, pro-Putin narratives around the world. During the 2016 election cycle, the Internet Research Agency used content explicitly targeted at the black community to influence its opinion of presidential candidate Hillary Clinton. According to Renée DiResta, an ideas contributor for WIRED magazine, “[Content targeting the left] included messages aimed at depressing turnout among black voters, or painting Secretary Clinton in a negative light compared to Jill Stein or Senator Bernie Sanders.”
DiResta’s concern with getting rid of disinformation on social media stems from the lack of tools we have for identifying content made by Russian troll farms such as the IRA while preserving free speech on these platforms. Senator James Risch was quoted as saying, “The difficulty is, how do you segregate those people [foreign adversaries] who are doing this from Americans who have the right to do this?” What concerns me is that the general public has neither the tools nor the time to discern real content from propaganda sent from Russia. We consume our content at blazing speeds, scrolling through post after post until one catches our eye with its headline or imagery. We don’t necessarily take the time to review one news story through several different sources; no one has the time to do this unless they work in journalism or are tasked specifically with finding fake news online. And the disinformation itself isn’t my only concern: the IRA also leverages a product of our general social media use called filter bubbles.
Strengthening bias through filter bubbles
A filter bubble is a personal reality that adheres to the biases you already have about the world. The spread of disinformation and the manipulation of discourse via social media are possible because we aren’t aware of the filter bubbles we are in. The Internet Research Agency cannot create propaganda that targets each individual user on Twitter, for example, because the research and time it would take to change the viewpoints of a single person would be too costly. Instead, the agency leverages groupthink to target niche communities in America.
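To make the mechanism concrete, here is a toy sketch of how engagement-based ranking narrows a feed. This is an illustration only, not how any real platform actually ranks content; every name, tag, and score in it is invented for the example.

```python
# Toy model of a filter bubble: rank posts by how closely their topic
# tags overlap with the tags of posts the user has already clicked on.
# All data and scoring here are illustrative, not real platform logic.

def rank_feed(posts, click_history):
    """Score each post by tag overlap with past clicks, highest first."""
    seen_tags = {tag for post in click_history for tag in post["tags"]}

    def score(post):
        return len(set(post["tags"]) & seen_tags)

    return sorted(posts, key=score, reverse=True)

posts = [
    {"title": "Left-leaning op-ed",  "tags": {"politics", "left"}},
    {"title": "Right-leaning op-ed", "tags": {"politics", "right"}},
    {"title": "Sports recap",        "tags": {"sports"}},
]

# A user who has only ever clicked left-leaning political posts...
history = [{"tags": {"politics", "left"}}]

# ...sees more of the same at the top of the feed.
for post in rank_feed(posts, history):
    print(post["title"])
```

Because yesterday’s clicks decide today’s ranking, the loop is self-reinforcing: the more one-sided your history, the more one-sided the feed becomes, and that is exactly the predictability that targeted propaganda can exploit.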
Due to the spread of tribalism, filter bubbles have become commonplace in daily social media use. They have become such a problem that Twitter CEO Jack Dorsey has highlighted the issue in recent podcast interviews with the likes of Joe Rogan and Sam Harris. The Internet Research Agency uses filter bubbles to create content that strengthens already established norms: if you lean left or right, your main pillars of thought and topics of conversation become ever more reinforced. The agency can leverage these conversational norms online to create incidents like the one in Houston, Texas in 2016, when an anti-Muslim rally was organized in front of the Islamic Da’wah Center’s Library of Islamic Knowledge by a group called “Heart of Texas.” The rally was orchestrated by the IRA to create further discord offline, showing how its propaganda can spread past social media into the real world.
What makes the IRA terrifying is that this is only the beginning of disinformation on social media. We should anticipate the incorporation of new technologies, such as video and audio produced by artificial intelligence, to supplement these operations, making it increasingly difficult for people to trust what they see.
The term “deep fakes” (videos first created to place famous actresses into pornography) will be mentioned more often in the news as the election comes closer. See below for a video from Bloomberg QuickTake on the advancement of deep fakes and how face-recognition software is used to make fake videos of famous politicians:
If exponential change can occur with any other form of technology, how long will it take for Deep Fakes to become indistinguishable from real videos? Your favorite politician can be attacked with this form of propaganda at any moment, and you wouldn’t know.
The information we get is only as useful and credible as our ability to curate it and to research more than one source. The issue is that no one has the time during the day to do extensive research on a particular topic.
Additionally, it takes a considerable amount of time and effort to look for opposing points of view outside of your filter bubble (your online community). I fall victim to this as well: because I identify as moderate-left and consume content from news organizations within that political bubble, I rarely see content from the right unless I actively look for it. In turn, how am I to be sure that the content I see about any particular presidential candidate is true? Perhaps I am too paranoid, but I don’t want to vote for someone (or decide not to vote for someone) based on content I found online, only to later discover that it was made by a troll farm in Russia.
- The Information War Is On. Are We Ready For It? — Wired
- How the Russians pretended to be Texans — and Texans believed them — Houston Chronicle
- The Agency — The New York Times
- It’s Getting Harder to Spot a Deep Fake Video — Bloomberg (YouTube)
- Making Sense Podcast with Sam Harris, Episode #148: Jack Dorsey
- How Filter Bubbles Distort Reality: Everything You Need to Know — Farnam Street Blog