Social Media Algorithms: The Code Behind Your Life

Encode Justice Canada
7 min read · Feb 14, 2022


Introduction

Social media has become a major part of any young Canadian’s life. Platforms like Instagram, Facebook, and Snapchat allow users to connect at any time from anywhere, while TikTok and YouTube provide endless amounts of content and entertainment. Consequently, users should know what happens “behind the scenes” of their most-used apps to understand the impact they may have on their lives. Furthermore, the dangers — like the benefits — of social media are many; we therefore explore regulations that can mitigate some of these harms.

How Social Media Algorithms Work

The TikTok Recommendation Algorithm

TikTok’s recommendation algorithm feeds videos to users through the app’s ‘For You’ page. After only about 36 minutes of watch time (around 224 videos), the algorithm can pin down an individual user’s likes and dislikes and begin recommending content accurately. To decide what to recommend, it draws on a range of signals, including views, likes, shares, hashtags, audio tracks, followed creators, and video engagement.
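
To make the idea concrete, here is a minimal sketch in Python of engagement-weighted scoring. All signal names, weights, and helper functions are invented for illustration; TikTok does not publish its real model, features, or weights.

```python
# A minimal sketch of engagement-weighted recommendation scoring.
# All signal names and weights are invented for illustration;
# TikTok does not publish its actual model or weights.

from dataclasses import dataclass

@dataclass(frozen=True)
class Video:
    video_id: str
    creator: str
    hashtags: frozenset

# Hypothetical weights: strong signals (finishing a video, sharing it)
# count for much more than a passive view.
WEIGHTS = {"viewed": 1.0, "liked": 3.0, "shared": 5.0, "finished": 8.0}

def score(candidate: Video, history: list) -> float:
    """Sum the engagement weight of every past video that shares a
    hashtag or creator with the candidate."""
    total = 0.0
    for past_video, signals in history:
        related = (candidate.hashtags & past_video.hashtags) or \
                  candidate.creator == past_video.creator
        if related:
            total += sum(WEIGHTS[s] for s in signals)
    return total

def recommend(candidates, history, k=3):
    """The feed is simply the top-k scoring candidates."""
    return sorted(candidates, key=lambda v: score(v, history), reverse=True)[:k]

# Example: a user who finishes cooking videos gets more cooking videos.
history = [(Video("v1", "chef_anna", frozenset({"cooking"})), {"viewed", "finished"})]
pool = [Video("v2", "chef_anna", frozenset({"cooking"})),
        Video("v3", "sports_hq", frozenset({"hockey"}))]
print([v.video_id for v in recommend(pool, history)])  # ['v2', 'v3']
```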

Video engagement, i.e., whether a video is watched to completion, is used to sort a user into a “filter bubble”: a niche or community of videos representing the user’s likes and dislikes. TikTok then feeds users content accordingly: on the app, you only see what TikTok predicts you will like, based on the engagement signals it has collected.
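
The feedback loop is what narrows the feed over time. The simulation below is a deliberately simplified model, not TikTok’s system: the recommender boosts any topic the user watches to completion, and a mixed feed quickly collapses toward a single topic.

```python
import random

# A deliberately simplified filter-bubble loop, not TikTok's system:
# the recommender boosts any topic the user watches to completion.
TOPICS = ["cooking", "sports", "politics", "music"]
USER_FAVOURITE = "cooking"           # the only topic this user finishes

random.seed(0)
weights = {t: 1.0 for t in TOPICS}   # the feed starts out balanced

for _ in range(200):                 # 200 videos served
    topic = random.choices(TOPICS, weights=list(weights.values()))[0]
    if topic == USER_FAVOURITE:      # a full watch is a strong positive signal...
        weights[topic] *= 1.1        # ...so the recommender shows more of it

share = weights[USER_FAVOURITE] / sum(weights.values())
print(f"Feed share of '{USER_FAVOURITE}' after 200 videos: {share:.0%}")
```

Because each boost makes the favoured topic more likely to be served, which earns it further boosts, the loop is self-reinforcing: that runaway compounding is the filter bubble.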

The YouTube Recommendation Algorithm

YouTube’s recommendation algorithm currently facilitates over 700 million hours of watched content every day. It drives more than 70% of total watch time on the platform, making it one of the biggest determinants of what people see. YouTube functions similarly to TikTok, creating a filter bubble that weighs video engagement heavily when determining personal recommendations.

Why should this matter to me?

Entering filter bubbles or “echo chambers” can significantly affect how you see the world and breed anger or distrust towards people who disagree with your viewpoint. These bubbles can also fuel political polarization, informational barriers, and confirmation bias, raising tensions between groups. Because filter bubbles and echo chambers are invisible, it is important to recognize when you have fallen into one and to deliberately engage with other content.
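
One rough way to notice a bubble is to measure how varied your feed actually is. The sketch below is our own illustration, not a feature of any platform: it scores a feed’s topic mix with Shannon entropy, where 1.0 means an evenly mixed feed and values near 0 mean the feed has collapsed into a bubble.

```python
import math
from collections import Counter

def feed_diversity(topics: list) -> float:
    """Shannon entropy of a feed's topic mix, normalized to [0, 1].
    1.0 = evenly mixed feed; values near 0 = a filter bubble."""
    counts = Counter(topics)
    n = len(topics)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    max_entropy = math.log2(len(counts)) if len(counts) > 1 else 1.0
    return entropy / max_entropy

print(feed_diversity(["news", "sports", "music", "cooking"]))  # 1.0, balanced
print(feed_diversity(["cooking"] * 9 + ["news"]))              # ~0.47, a bubble
```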

Social Media Monitoring and Regulation

Without taking away from the great benefits social media has given us, it is important to consider the negative aspects of this world-changing technology. First, social media has simplified the sharing of harmful content, such as hate speech and terrorist propaganda. Misinformation campaigns aimed at interfering with elections, most notably the 2016 U.S. presidential election, show the damage that can follow from a lack of moderation of online content.

Moderating Hate Speech: Meta Case Study

The main controversy around regulating content is balancing moderation with fundamental rights like freedom of speech, with governments, academics, and others debating whether and how to hold tech companies accountable for what users do on their platforms. Meta relies mainly on AI models to detect and remove hate speech: according to the company, 97% of the hate speech it removes is detected by automated systems before any user flags it. However, internal documents obtained by the Wall Street Journal and dubbed the “Facebook Files” suggest these systems remove only about 2% of the hate speech posted to Facebook. The two figures measure different things, the first against content removed and the second against all hate speech posted, but one thing is certain: automated recognition of harmful content online is difficult.
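
The two percentages are easy to conflate because they have different denominators. The arithmetic below, using made-up counts purely for illustration, shows how a platform can truthfully report a 97% proactive-detection rate while removing only a small fraction of the hate speech actually posted.

```python
# Hypothetical counts, invented purely to illustrate the two metrics.
total_hate_posts = 100_000  # estimated hate speech actually posted
removed_posts    = 2_000    # posts the platform removed
removed_by_ai    = 1_940    # removals flagged by automated systems first

# Meta-style metric: of what was REMOVED, how much did AI find first?
proactive_rate = removed_by_ai / removed_posts   # 0.97 -> "97%"

# Facebook Files-style metric: of ALL hate speech, how much was removed?
removal_rate = removed_posts / total_hate_posts  # 0.02 -> "2%"

print(f"Proactive detection rate: {proactive_rate:.0%}")  # 97%
print(f"Overall removal rate:     {removal_rate:.0%}")    # 2%
```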

Regulating Online Content

The regulation of social media platforms is essential because the written and unwritten rules governing civil society are crucial to maintaining peace and safety. To address concerns about new technologies, regulators should give companies incentives, whether positive or negative, to act responsibly. Regulations must also account for the global nature of the internet: social media involves cross-border communication, so international cooperation is necessary. While regulators must weigh the impact of their decisions on freedom of expression, they should also deepen their understanding of what technology can and cannot do in content moderation, leaving companies the flexibility to innovate. Finally, regulators must address the severity and prevalence of harmful content by determining its legal status and creating new rules where necessary. A poorly designed legal framework risks making the online environment less safe, not more.

Free Speech on Social Media Platforms: Canadian Government Regulation

As a co-founder of the Media Freedom Coalition, Canada is a global leader in promoting freedom of expression. The Canadian Charter of Rights and Freedoms guarantees “freedom of thought, belief, opinion and expression.” With the rise in online activity, Canada also became a member of the Freedom Online Coalition, dedicating itself to the promotion of internet freedom. Yet Canada, like many other countries, has wrestled with how to regulate online activity without infringing on freedom of expression.

Bill C-10

The Canadian government has presented some notably controversial bills to address concerns around internet services. In an attempt to extend the Broadcasting Act into a more modern context, the House of Commons passed Bill C-10 in June 2021, though the bill died in the Senate when Parliament was dissolved that summer. Had it become law, Bill C-10 would have required streaming services and social media platforms to promote content by Canadian creators. The proposal drew backlash from human rights and internet activists, who feared it would give the CRTC too much power over digital platforms and infringe on freedom of expression.

A Technical Paper

In July 2021, the Canadian Government released a technical paper exploring potential ways of addressing and monitoring harmful social media content. The proposal focuses on addressing the “five types of harmful content,” described as “child sexual exploitation, terrorist content, content that incites violence, hate speech, and the non-consensual sharing of intimate images.” This proposal has received notable backlash from human rights and advocacy groups, who claim that, among other issues, these policies would threaten Canadian values of freedom of expression and liberal democracy.

The technical paper aims to regulate Online Communication Services (OCSs), platforms whose primary purpose is to enable users to communicate with one another over the internet. The government’s proposal would require OCSs to remove “harmful content” from their platforms within twenty-four hours of identification. This requirement echoes Germany’s controversial NetzDG law, on which many authoritarian regimes have modeled their own online censorship laws, a precedent that concerns many Canadians. The Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic at the University of Ottawa argues that, in order for the government to flag potentially harmful content, all content must be monitored, effectively giving the government access to everything users post.

The government’s proposal also includes reporting user data, such as content and activity, to the Royal Canadian Mounted Police and the Canadian Security Intelligence Service, which alarms many human rights advocates. They argue that giving government and law enforcement access to users’ private information is inconsistent with Canadian values of freedom and democracy.

Another point of contention is the government’s recommendation that OCSs use “automated systems” to identify “harmful content.” Not only can computer algorithms be biased, but they also struggle to distinguish genuinely illegal content from the same material used in a legal context, for example in education or news reporting. Studies have also shown that social media platforms take down content from marginalized communities at a disproportionate rate compared to more mainstream content, suggesting this regulation may further silence marginalized voices. Since one of the stated aims of the Canadian government’s online regulation is to address online hate, especially its disproportionate effects on marginalized groups such as women and Indigenous Peoples, the proposed legislation may work against its own goal.
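
A toy example makes the context problem concrete. The keyword filter below is a deliberately naive sketch, not any platform’s real system: it flags a news report about terrorism just as readily as actual propaganda, because it cannot see context, while a paraphrase slips through entirely.

```python
# Deliberately naive keyword filter: no real platform works this simply,
# but it shows why context-blind automation both over- and under-removes.
BLOCKED_TERMS = {"attack", "bomb", "terrorist"}

def is_flagged(post: str) -> bool:
    """Flag a post if any word, stripped of punctuation, is blocked."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & BLOCKED_TERMS)

propaganda     = "Join us and bomb their cities"
news_report    = "Police arrested a terrorist planning a bomb attack"
history_lesson = "Students studied the 1995 bombing in class"

for post in (propaganda, news_report, history_lesson):
    print(is_flagged(post), "-", post)
# True  - the propaganda is caught...
# True  - ...but so is the legitimate news report
# False - while a reworded mention slips through ("bombing" != "bomb")
```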

After the publication of the government’s proposal to address the sharing of harmful content online, many human rights advocacy groups published responses with recommendations, as described above, on how to better achieve the government’s goal while preserving fundamental Canadian values of freedom and democracy. The public feedback reflects the broader struggle governments face in regulating social media content, but also the hope of a better-regulated internet in the future.

By: Nils Aoun, Itai Epstein, Sara Parker, and Cella Wardrop

Sources:

Bickert, Monika. “Charting a Way Forward on Online Content Regulation.” Facebook. Published February 17, 2020: https://bit.ly/3CIJb2T.

Bolongaro, Kait. “Trudeau’s Party Passes Bill to Regulate Social Media, Streaming.” Bloomberg.com. Published June 22, 2021: https://www.bloomberg.com/news/articles/2021-06-22/trudeau-s-party-passes-bill-to-regulate-social-media-streaming.

Chaslot, Guillaume. “The toxic potential of YouTube’s feedback loop.” Wired. Published July 13, 2019: https://www.wired.com/story/the-toxic-potential-of-youtubes-feedback-loop/.

Díaz, Ángel, and Laura Hecht-Felella. “Double Standards in Social Media Content Moderation.” Brennan Center for Justice. Published August 4, 2021: https://www.brennancenter.org/sites/default/files/202108/Double_Standards_Content_Moderation.pdf.

Freedom Online Coalition. “Freedom Online Coalition: Factsheet.” Published 2021: https://freedomonlinecoalition.com/wp-content/uploads/2021/05/FOC-Factsheet-2021.docx.pdf.

Geist, Michael. “Picking Up Where Bill C-10 Left Off: the Canadian Government’s Non-Consultation on Online Harms Legislation.” michaelgeist.ca. Published July 30, 2021: https://www.michaelgeist.ca/2021/07/onlineharmsnonconsult/.

Global Affairs Canada. “Media Freedom Coalition ministerial communiqué.” Government of Canada. Published November 2020: https://www.canada.ca/en/global-affairs/news/2020/11/media-freedom-coalition-ministerial-communique.html.

Government of Canada. “Bill C-10: An Act to amend the Broadcasting Act and to make consequential amendments to other Acts.” Published September 2021: https://www.justice.gc.ca/eng/csj-sjc/pl/charter-charte/c10.html.

Government of Canada. “Human rights and inclusion in online and digital contexts.” Published November 2020: https://www.international.gc.ca/world-monde/issues_development-enjeux_developpement/human_rights-droits_homme/internet_freedom-liberte_internet.aspx?lang=eng.

Keller, Daphne. “Five Big Problems with Canada’s Proposed Regulatory Framework for ‘Harmful Online Content.’” Tech Policy Press. Published August 31, 2021: https://techpolicy.press/five-big-problems-with-canadas-proposed-regulatory-framework-for-harmful-online-content/.

Meta. “We support updated regulations on the internet’s most pressing challenges.” Facebook. Published 2021: https://bit.ly/3DVzob5.

Matsakis, Louise. “How TikTok’s ‘for you’ algorithm actually works.” Wired. Published June 18, 2020: https://www.wired.com/story/tiktok-finally-explains-for-you-algorithm-works.

Newton, Casey. “Facebook’s proposed regulations are just things it’s already doing.” The Verge. Published February 19, 2020: https://bit.ly/3DSmhYg.

Nicas, Jack. “How YouTube drives people to the internet’s darkest corners.” The Wall Street Journal. Published February 7, 2018: https://www.wsj.com/articles/how-youtube-drives-viewers-to-the-internets-darkest-corners-1518020478.

Schroepfer, Mike. “Update on Our Progress on AI and Hate Speech Detection.” Meta Newsroom. Published February 11, 2021: https://about.fb.com/news/2021/02/update-on-our-progress-on-ai-and-hate-speech-detection/.

Stevens, Yuan, and Vivek Krishnamurthy. “Overhauling the Online Harms Proposal in Canada: A Human Rights Approach.” Canadian Internet Policy and Public Interest Clinic, 2021.

Wired. “Is the YouTube algorithm controlling us?” YouTube. Published November 19, 2020: https://www.youtube.com/watch?v=XuORTmLhIiU.

WSJDigitalNetwork. “How TikTok’s algorithm figures you out | WSJ.” YouTube. Published July 21, 2021: https://www.youtube.com/watch?v=nfczi2cI6Cs&feature=embtitle.

Zwibel, Cara. “Submission in relation to the consultation on addressing harmful content online.” Canadian Civil Liberties Association. Published September 25, 2021: https://ccla.org/wp-content/uploads/2021/09/CCLA-Submission-to-Heritage-Online-Harms.pdf.
