Content Warning: TikTok

TikTok is one of the few major social media platforms that does not label false information with content warnings. Image from CNN

Every day, millions of users log onto social media sites to scroll, like, share, and comment on a seemingly endless array of posts. These posts range from life updates, silly dances, and animal pictures to news stories, tragic personal events, and cries for help. Social media is no longer just a place where people connect individually online; it now harbors a vast ecosystem of news and information. With the spread of news, however, comes the spread of information disorder. Many platforms have implemented features to help minimize the effect of the misinformation, malinformation, and disinformation that circulates online. TikTok, on the other hand, thrives off the spread of this misinformation, fostering an environment where information disorder prospers.

What is information disorder, and why is it so bad?

Information disorder is the unchecked spread of misinformation, malinformation, and disinformation. Claire Wardle breaks these forms of false information into seven types: satire or parody, misleading content, imposter content, fabricated content, false connection, false context, and manipulated content. In layman’s terms, such content is typically referred to as “fake news”. It is especially harmful when it spreads quickly and widely, reaching a large audience before it can be debunked. Wardle particularly examines the spread of misinformation surrounding politics, and how such news can greatly influence elections, policies, and general public opinion. This false information has a very real influence on users’ lives outside of social media.

Facebook (left) and Instagram (right) both have content warning labels for information that has been identified as false or misleading. Image from The Verge

Many social media platforms, such as Facebook and Twitter, have some form of labeling system for the spread of misinformation, marking content that falls under the category of information disorder as “false information”. TikTok, however, has no such feature to warn its users. In a CNN article written by Emma Tucker, a research report found that a significant portion of the content presented to TikTok users contains some form of misinformation, without any warning or flag marking it as false. Users are left to identify false information on their own, leaving comments, stitches, and duets on such videos.

How can information disorder spread on TikTok?

Misinformation can be easy to spot when you are looking out for it, but when a topic falls outside your own knowledge, it is often easier to trust the information than to research the facts yourself. As someone who has grown up in the age of the internet, I am well aware that plenty of information disorder spreads online. I can identify when something looks suspicious based on grammatical errors, clearly biased framing, or a topic already surrounded by misinformation, such as the Covid pandemic. When those markers don’t raise an alert, however, I have grown used to websites providing all of the information necessary to determine whether something is fake or real. Every other social media platform I frequent on a daily basis scans for misinformation in some form and labels such posts accordingly. I have also come to assume that if a viral or popular post promotes information disorder, at least one commenter will say so. In short, I trust that either the website or its users will alert me when something is incorrect. Because TikTok provides no content labels for false information, and because it allows creators to filter the comment sections on their individual posts, this strategy of identifying misinformation that I have grown accustomed to on other websites is ineffective on TikTok.

I have fallen victim many times to the information disorder that is so widespread on TikTok. Just recently, I was scrolling through my “for you” page and came across a video of a self-proclaimed lawyer claiming that Idaho still uses firing squads to carry out the death penalty. Having no legal knowledge of the death penalty, I decided to trust this creator; after all, wouldn’t a lawyer know far more about this topic than I would? Before sending this seemingly astounding fact to my friends, I scrolled through the comments to make sure I wasn’t being misled. Many commenters were also appalled that Idaho would still use a firing squad, but none of the top comments suggested the claim might be untrue, so I went ahead and sent it to a large group of people. Only later, when a friend responded that the claim was false, did I do the research myself and find that no, Idaho does not currently use firing squads on people facing the death penalty. If I had not trusted the website alone, one simple search would have kept me from spreading this misinformation; instead, I contributed to its growth.

In an experiment conducted by the Wall Street Journal to test the TikTok algorithm, researchers found that after only a couple of hours of scrolling, the algorithm begins to “rabbit-hole” users into specific categories, including political ones such as far-right or far-left content. I have fallen into some of these rabbit holes out of mere fascination with the elaborate and extreme claims made by some far-leaning content creators. Once I navigate to the page of one of these users, the next refresh of my “for you” page brings significantly more content promoting those ideas, even though I only watched the videos for their shock value in the first place. One of the greatest problems with these rabbit holes is that they are difficult to escape, and a user inside one receives only one-sided information. Creators seeking to debunk the misinformation within a rabbit hole cannot reach its audience, because that audience is served only content supporting the claims, never content negating them. The misinformation thus spreads among the audience that supports it, while related users are drawn into the same rabbit hole with no information to counter the claims.
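The narrowing effect the Journal describes can be sketched with a toy model. To be clear, this is not TikTok’s actual algorithm, which is proprietary and far more complex; it is a minimal, invented illustration of engagement-weighted recommendation in general, where every category name and number below is made up. Each time the simulated user watches a video from one category, that category’s sampling weight gets boosted, so the feed rapidly narrows toward it:

```python
import random

# Toy model of an engagement-driven recommender (illustrative only; this
# is NOT TikTok's proprietary algorithm). Categories the user engages
# with get their sampling weight boosted, so the feed narrows toward them.
CATEGORIES = ["pets", "cooking", "fringe politics", "sports"]

def next_video(weights, rng):
    """Sample a category with probability proportional to its weight."""
    return rng.choices(list(weights), weights=list(weights.values()))[0]

def simulate_feed(watched, n_videos=200, boost=1.5, seed=0):
    """Serve n_videos; boost the one category the user always watches."""
    rng = random.Random(seed)
    weights = {c: 1.0 for c in CATEGORIES}  # feed starts out balanced
    feed = []
    for _ in range(n_videos):
        category = next_video(weights, rng)
        feed.append(category)
        if category == watched:          # user lingers -> algorithm rewards it
            weights[category] *= boost   # exponential reinforcement
    return feed

feed = simulate_feed("fringe politics")
first, last = feed[:50], feed[-50:]
print("watched category in first 50 videos:", first.count("fringe politics"))
print("watched category in last 50 videos:", last.count("fringe politics"))
```

Because the boost compounds, the watched category’s weight grows exponentially and soon crowds out the other three: the last stretch of the feed is almost entirely one-sided even though the feed began balanced, which mirrors why debunking content from outside the rabbit hole rarely surfaces.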

How does TikTok benefit from information disorder?

As with all social media platforms, TikTok’s mission is to maximize user engagement, and showing users misinformation can increase engagement in several ways. For one, users who believe a false claim can feel validated and supported as they are “rabbit-holed” into more content backing that claim. They are more likely to watch, share, and comment on videos that support their ideas than on videos telling them those ideas are incorrect. Misinformation can also increase engagement through shock value, as my own experience with the Idaho death-penalty video demonstrates. Facts that sound believable, yet slightly out of the ordinary, are fun to share and discuss with others. Videos containing such information can see a large increase in the number of users who share them, which in turn increases engagement on the platform.

TikTok earns most of its revenue by selling advertising space on its platform. With increased user engagement, more users see more advertisements, making that advertising space more valuable. Yet even the advertisements on TikTok have proven susceptible to misinformation. A recent CNN Business article analyzes an experiment in which New York University researchers submitted a selection of advertisements containing misinformation to TikTok to see what portion would be approved on the site. Ninety percent of the submitted ads were approved by the platform. In comparison, other sites, such as YouTube, identified most, if not all, of the ads as misinformation and rejected them. Clearly, the technology to identify misinformation exists; what is lacking is its implementation in the ad-approval process. TikTok chooses to ignore the potential for misinformation in favor of selling more advertising space.

TikTok thrives on increasing advertisements and presenting content its users will find intriguing. If a user believes in demonstrably false conspiracy theories, TikTok gains more engagement by showing that user more “proof” of the theory than by showing content covering both sides of it. In doing so, TikTok encourages users to pursue false claims in the hope of receiving more views and interactions. In the online battle between spreading truth and increasing engagement, TikTok has chosen financial gain over the well-being of its users.
