To save themselves from weaponization, social media must become more transparent to users.

Nicholas Proferes
Mar 19, 2018


Image used under CC license from www.quotecatalog.com

Most users have poorly developed understandings of how social media platforms like Facebook and Twitter work. I say this based not just on my own research into how users think Twitter works, but also on numerous other studies that have documented users' misunderstandings of platforms such as Facebook. The problem that arises from this isn't just that users sometimes misconfigure their privacy settings and accidentally disclose information that they later regret. The problem we are witnessing is that our lack of understanding of how information flows through these systems has become weaponized.

The revelation that Cambridge Analytica used data gathered from personality tests linked on Facebook, as well as information scraped from test-takers' Facebook friends, to target individuals during the 2016 Presidential election is just one example of this kind of weaponization. Would users have consented to share such information had the actual consequences been explicitly spelled out? Some may have, but none were ever able to choose on those terms. Users are also continuously set up by social media companies not to understand how their own information flows beyond their immediate experiences into a wider ecosystem. It's the encouragement of a mental state I've been calling information flow solipsism. Behavior like Cambridge Analytica's flourishes in an environment where we don't understand the communicative context in which we operate.

I'll give you another example. What does it mean for something to trend on Twitter? Does it mean that someone has paid Twitter to have a topic show up as trending? Does it mean that lots of people are talking about a topic for an extended period of time? How long? Does it mean that lots of people are talking about a topic they weren't talking about yesterday? Does it mean that groups of automated accounts are all generating tweets? Conversely, when something doesn't trend, does it mean that someone at Twitter found the topic potentially offensive? The real answer is that trending can mean any and all of the above. But what happens if we believe trending means only one of these things and not the rest? What inferences might we make when we see something trend?
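Twitter's actual trending algorithm is a trade secret, so the sketch below is purely illustrative: my own toy construction, not anything Twitter has published. It shows why the ambiguity matters. Even a simple spike-detection heuristic, which flags topics whose recent volume jumps relative to a historical baseline, is completely blind to the source of the spike: 500 bot-generated tweets and 500 organic tweets look identical to it.

```python
# A toy "trending" heuristic -- NOT Twitter's actual algorithm, which is a
# trade secret. It flags topics whose recent volume spikes relative to a
# historical baseline. Note that it cannot tell whether a spike comes from
# humans, bots, or a coordinated campaign.
from collections import Counter

def trending_topics(recent_counts: Counter, baseline_counts: Counter,
                    min_volume: int = 100, spike_ratio: float = 3.0) -> list:
    """Return topics whose recent volume is large in absolute terms AND
    several times higher than their historical average."""
    trending = []
    for topic, recent in recent_counts.items():
        baseline = baseline_counts.get(topic, 1)  # avoid division by zero
        if recent >= min_volume and recent / baseline >= spike_ratio:
            trending.append(topic)
    # Rank by how sharply the topic spiked, not by raw volume alone
    return sorted(trending,
                  key=lambda t: recent_counts[t] / baseline_counts.get(t, 1),
                  reverse=True)

# Hypothetical numbers: 500 bot tweets register exactly like 500 human tweets.
recent = Counter({"#PodestaEmails": 500, "#MondayMotivation": 900})
baseline = Counter({"#PodestaEmails": 40, "#MondayMotivation": 850})
print(trending_topics(recent, baseline))  # ['#PodestaEmails']
```

In this sketch, a high-volume perennial topic never trends because it isn't new, while a smaller topic with a sudden burst does. Whatever signals the real algorithm uses, the point stands: the output alone tells us nothing about who, or what, produced the burst.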

We might imagine that it means a particular topic is at least culturally important, as determined by other people. But in Congressional testimony, a VP at Twitter recently revealed that at least 5% of the tweets around the #PodestaEmails were generated by bots. In my own research, I found that Podesta-related hashtags hit town-wide, country-wide, or worldwide Trending Topics lists a total of 1,917 times. These bots helped keep Podesta-related hashtags on Trending Topics lists every day within the U.S. for the entire month before Election Day. Newspapers, which use Trending Topics to identify breaking news, began to pivot to the story, reporting not just on the content of the emails but on the fact that this content was trending. Suddenly, it wasn't just users who had to make sense of what it meant for something to trend, but non-users as well. In repeating the information, newspapers unintentionally laundered the trends, lending them legitimacy without any deeper understanding of why something might trend.

Users (and certainly non-users) don't know exactly how and why something trends. This is partially because it's a trade secret. But this lack of transparency around the algorithm becomes more and more dangerous when bad actors want to take advantage of the knowledge gap. Knowing why something trends is critical for making sense of the trends we see.

Without a robust understanding of how these platforms work, it's hard to imagine how others might use these systems, or to know whether we are interacting with a person or a bot. Without this knowledge, it's hard to make sense of the information we encounter on social media (and off), to contextualize it, to understand its origin, the path it took to reach us, and the invisible hands that may have touched it along the way. It is hard to know where our information is going or where other information has been.

While it's easy to blame users for a lack of digital literacy, social media companies must shoulder some of the blame. As I've found in my work, these organizations frequently structure both their systems and their messaging so that users can start using a system right away, but never develop deep knowledge of it. Most users don't read Terms of Service (and why would they, when the documents are frequently vague about specifics, written at a college reading level, and would often require more than 50 minutes to parse?). They instead build their knowledge through trial and error, developing understandings by interacting with a system. Feedback mechanisms within an interface become the tools by which users learn a technology. And those feedback mechanisms are typically oriented toward keeping users sharing, following, and consuming advertising, not toward building anything beyond surface-level knowledge.

To fight the weaponization of these systems, social media companies must make their products more transparent. They need to actively disclose to users how information flows, in ways users can actually make sense of. They need to provide feedback mechanisms within the platforms so that users can build this knowledge through experience, rather than relying on a ToS that no one will ever read. Making these changes won't be easy, but as social media becomes more ubiquitous in culture, social experience, economics, and deliberative democracy, it is crucial.


Nicholas Proferes is an Assistant Professor at the University of Kentucky’s School of Information Science. He studies users, social media, and tech discourse.