Literature Review: A Dive into the Depths of Bias in Algorithmic Personalization

Emil La Marca
Media Studies COM520
Dec 14, 2021

Executive Summary

Digital information platforms such as Google and social media channels have gradually replaced traditional media, becoming, in part, custodians of modern society. To reduce the burden that the vast amount of information on the internet places on the average user, web search engines and social media platforms have introduced advanced personalization features that deliver content matched to specific user needs and preferences. Personalization algorithms filter information according to individual needs. However, despite their value in providing user-specific content, these algorithms have raised many concerns. A growing problem is the potential for bias and its impact on user behavior and perception, especially when the information may mislead or misinform users.

Introduction

When individuals want to find information or content on their smartphones or other devices, they use search engines that respond to typed queries with a ranked list of links that are supposed to be the most relevant to the topic in question. Worldwide, Google is the most commonly used search engine; however, social media sites such as Facebook are also important venues where individuals seek information and content on everyday topics. As a result, personalized algorithmic technologies have become an integral component of search engines. Algorithmic technologies have become pervasive in society; they exert significant influence on virtually every aspect of life, including determining the kind of information and news we receive via information-filtering algorithms (Kulshrestha et al., 2019). Search systems are among the most important of these algorithms. In day-to-day life, individuals rely on search engines to accomplish a wide variety of goals, ranging from locating a specific piece of content or website (navigational queries) to finding more information about current issues of interest, events, people, and entities (informational queries) (Yusuf et al., 2019). For example, during important events such as elections and pandemics, individuals repeatedly query the internet and social media platforms such as Twitter and Facebook about circumstances and political candidates to learn more and discover other people's opinions (Mustafara et al., 2020).

Although informational search algorithms aim to improve user experience and insight into a particular topic, this information is not necessarily free from bias. For instance, one concern about the unfolding COVID-19 pandemic and vaccination programs is the partiality of information available on social media sites, which drives vaccine hesitancy and thus hampers efforts to contain the virus (Piedrahita-Valdés et al., 2019). Beyond partial information, the potential for bias, especially on polarizing topics, is a significant concern. For instance, studies have shown that businesses exploit false information to drive sales through social media campaigns. This works because false news spreads faster than legitimate information, and people tend to click the most highly ranked links.

The potential biases that search algorithms introduce into query results have led to growing concern about the impact of search systems on user behavior. This is particularly worrying in situations where results may mislead or misinform users on important topics such as vaccination, race relations, climate change, and politics. Bias may arise from the ranking of results based on user queries: the more queries a topic receives, the higher it is ranked. To this end, a study by Pan et al. (2007) demonstrated that highly ranked results generate more trust from users, thereby influencing user opinions and perceptions.

This paper is a systematic literature review that explores and analyzes how algorithmic personalization introduces biases with a potential impact on user behavior and perception. In addition, the paper examines the dangers that search engine algorithms pose to users and how the human factor plays an important role in introducing explicit and implicit biases into search engine personalization. To answer the research questions on the impact of algorithmic personalization on users and how these algorithms introduce biases, the paper sources recent literature from Google Scholar and uses the 2020 Netflix documentary “The Social Dilemma” as a basis for this study.

Literature review

Background

The functional principle of search engines and social media platforms is personalization: creating content tailored to a specific user's preferences. While these platforms may appear impartial in the content they provide, they host a variety of features specifically designed to increase engagement, or user interaction with the platform. According to Goldman (2008), people are mistaken when they assume that search engine information is impartial and free of bias. Like any other company that seeks to generate and maximize profits, search engines control the user experience in a phenomenon described as “search engine bias.” The Netflix documentary “The Social Dilemma” is a statement on this phenomenon. It explores how social media giants, including Facebook, Twitter, Instagram, and YouTube, manipulate internet users through algorithms that encourage addiction to their platforms (Netflix, 2020). The documentary shows how these platforms collect personal data used to target users with persuasive advertisements. It also reveals how social media companies deploy persuasive technology that modifies human behavior to keep users addicted for economic purposes, a situation behavioral psychologists describe as “positive intermittent reinforcement” (Saura et al., 2021). While most users trust highly ranked links to queried content, many studies have demonstrated that the search engine algorithms that filter and present information have a significant impact on user experience. Although powerful and convenient for providing user-specific content, these algorithms raise various concerns. Studies have established that they generate biased and discriminatory advertisements based on race and gender, show users different prices for the same product, and even distort ratings to benefit low-rated products and services (Datta et al., 2015; Eslami et al., 2017; Kay et al., 2015).
These issues have prompted governments, organizations, and researchers to focus on “auditing algorithms” (Metaxa et al., 2021), which attempt to uncover how algorithmic systems introduce biases, especially when the content is discriminatory or misleading.

How Personalization Algorithms Work

Algorithms have become ubiquitous in virtually every aspect of life; they have optimized everything, improved healthcare, and propelled the world into the fourth industrial revolution. However, many experts worry that algorithms can place too much power and control in the hands of a few corporations, minimize individual choice, reinforce existing biases while introducing new ones, and erode the social fabric. At the TEDGlobal conference, Kevin Slavin described algorithms as “math that computers use to decide stuff” that has infiltrated every aspect of life. According to Slavin, algorithms are control technologies used to manipulate users and shape society through media and information systems by constantly modifying content and information (McKelvey, 2014). In every aspect of life, people use search engines to find information about a particular topic, product, issue, or event. As a result, media companies have learned to collect user data and exploit it to generate ads that target specific user preferences. To achieve this, search engines and social media platforms use algorithms to constantly rank user-generated content: the more queries a topic receives, the higher its results are ranked. For instance, the most popular search engine globally, Google, uses the PageRank algorithm, which ranks pages by the link structure of the web and combines this with personalization signals such as search histories and geographical location to provide users with more relevant results. The Google search algorithm employs real-time variables derived from both involuntary and voluntary user activity. These variables include the number of queries, the number of clicks on result links, how many times keywords appear, and the number of references by other credible sources. In turn, the algorithm uses this data to automatically determine the order of pages in search results and influence how users perceive, analyze, and understand the content (Dixit et al., 2017). For example, Google Maps uses advanced personalization algorithms that combine real-time traffic, information collected from user devices, and historical trends to recommend routes to specific users, thereby influencing traffic patterns.
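The link-analysis core of PageRank can be sketched in a few lines. The toy graph and parameter values below are illustrative assumptions, not Google's production system, which layers hundreds of additional signals on top of this idea:

```python
# Minimal PageRank sketch (illustrative only): ranks pages in a small
# hypothetical web graph by link structure, using power iteration.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with uniform rank
    for _ in range(iterations):
        # Each page keeps a base "teleport" share of (1 - damping) / n.
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:  # pass rank to linked pages in equal shares
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical four-page web: pages that attract more links rank higher.
graph = {
    "home": ["about", "blog"],
    "about": ["home"],
    "blog": ["home", "about"],
    "orphan": ["home"],
}
ranks = pagerank(graph)
```

Here "home" ends up ranked above "orphan" because three pages link to it while nothing links to "orphan", which is the intuition behind treating inbound links as votes of credibility.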

Pioneered by Google, personalized search has been embraced by social media platforms to determine exactly what users want. These search systems use mathematical algorithms to filter results based on the number of queries, clicks, and links to a particular site: the higher the number of clicks or links to a site, the higher the site is ranked on the results page. Personalized search goes beyond general information about people, topics, or events, focusing instead on information unique to an individual. Search systems personalize their results by considering several variables, such as personal preferences and demographic characteristics.
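The click-and-link ranking described above, plus a personalization step, can be illustrated with a toy scoring function. The weights, field names, and boost factor here are hypothetical assumptions for illustration, not any platform's actual formula:

```python
# Toy sketch of engagement-based ranking with a personalization boost.
# Weights and the boost factor are illustrative assumptions only.
def rank_results(results, user_topics,
                 click_weight=1.0, link_weight=2.0, personal_boost=1.5):
    """results: list of dicts with 'url', 'topic', 'clicks', 'inbound_links'.
    user_topics: set of topics the user has previously engaged with."""
    def score(r):
        # Base popularity score from clicks and inbound links.
        base = click_weight * r["clicks"] + link_weight * r["inbound_links"]
        # Personalization: boost results matching the user's topics.
        return base * (personal_boost if r["topic"] in user_topics else 1.0)
    return sorted(results, key=score, reverse=True)

results = [
    {"url": "a.com", "topic": "sports", "clicks": 100, "inbound_links": 10},
    {"url": "b.com", "topic": "politics", "clicks": 90, "inbound_links": 12},
]
politics_user = rank_results(results, user_topics={"politics"})
neutral_user = rank_results(results, user_topics=set())
```

The same two results arrive in different orders for the two users: the neutral user sees the more popular "a.com" first, while the politics-leaning user sees "b.com" first. This is the mechanism by which two people typing the same query can receive different rankings.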

Moreover, these algorithms generate user-based content from past browsing histories and interactions with web pages (Abri et al., 2020). Although this phenomenon may appear new, traditional media also used personalization to expand markets and satisfy customers. For instance, sections of newspapers, radio stations, and television channels ran topics or ads dedicated to particular individuals, demographics, or consumer groups. Personalization in traditional media gave marketers an effective platform for reaching customers and ensuring greater satisfaction. However, advances in digital technologies such as big data and analytics have made personalization more dynamic and detailed. For instance, personalization algorithms can collect important information about a user purely from their geographical location (Rathod & Deshmukh, 2017). Current technological advances thus not only enable better collection of user data but also facilitate deeper personalization.

The ability of these technologies to collect user information has drawn significant concern, especially regarding personal privacy. In addition, social media platforms have faced many scandals and public condemnation for spreading misinformation and promoting bias for economic reasons. For instance, “The Social Dilemma” raises concerns about how these platforms collect and exploit user data to maximize profits by modifying the perception and behavior of billions of people worldwide.

Bias in Web Search

In recent years, search neutrality has been a subject of intense debate. Search neutrality is based on the principle that search results should be impartial, comprehensive, and ranked primarily by their relevance to the typed keyword. This means the search engine should respond to a user query by providing the most relevant content or links available in the provider's domain without skewing the order of results, devaluing links, or introducing bias in any way (Bostoen, 2018). With increasing calls for search neutrality, several studies have turned their attention to web search engines and their potential to introduce bias (Bonart et al., 2019). This attention stems from mounting concerns that popular search engines such as Google skew result rankings to favor some links and websites over others. For instance, many insist that Google alters search results to rank its own services higher than those of competing providers. Search engines have been implicated in reducing the visibility and revenue of specific links and sites by deliberately lowering their rankings, and Google has been accused of boosting the ranking of closely related links for economic gain (Maille et al., 2021). More broadly, search engines have been accused of manipulating organic results by skewing the ranking order of pages to favor themselves or other sites with which they are financially associated. This issue was first raised in 2009 by Adam Raff, co-founder of the price-comparison company Foundem, who accused Google of ranking his company's site low in comparison to Google's own services (Maille et al., 2021). To this end, search neutrality endeavors to ensure that media companies do not manipulate, alter, or limit a user's access to services on the internet by keeping search results organic; that is, search engines would respond based on relevance to query keywords rather than sponsored placement.

Beyond manipulating search results for economic gain, scholars have examined discriminatory bias based on gender and race. For instance, some search results have racially discriminated against people of color by returning content that mistakenly compares them to apes. In another example, Baeza-Yates (2018) demonstrated how web searches introduce geographical bias by ranking content from Western countries more highly than sites from other countries. In addition, the images returned for African countries (impoverished neighborhoods, famine, and violence) contrast sharply with those returned for Western countries. Search engines have also been accused of reinforcing gender stereotypes and racism. Most individuals attribute sexist and racist search results to system failures or glitches in general search patterns; however, in her book “Algorithms of Oppression: How Search Engines Reinforce Racism,” based on personal experiments with Google search and the available literature on search engines, Safiya U. Noble insists that this is a systemic problem reflecting not only the sexist and racist biases of search engine designers but also the biases of search engine users.

Moreover, recent research has examined the political impact of personalized search systems, especially during election periods. These studies indicate that biased search engine results can influence voting decisions (Epstein et al., 2017). The impact of search engines on politics has garnered significant interest, as more users obtain their political news from internet searches and trust these sources more than traditional media channels. In one line of study, researchers investigated the impact of political bias in search engine results on individual voting preferences. Epstein and Robertson discovered that altering the ranking of political information in search results could shift the voting preferences of undecided voters by 20 percent or more, a scenario they termed the “search engine manipulation effect” (Epstein & Robertson, 2015). This is in line with other studies showing that search engines are not impartial but have embedded features that promote some values and perceptions over others.

A complementary line of research has focused on the impact of search engine personalization and the different results that different individuals obtain for the same query. For instance, variables such as demographics and user geolocation can influence the order of results (Kulshrestha et al., 2019). A related study established that turbulent situations such as mass and school shootings influence users' information-seeking behavior (Koutra et al., 2015), with users employing search engines to find information that supports their views. Moreover, several scholars suggest that the public assumption that search engine results and data are reliable and impartial can cause substantial harm. For instance, mounting evidence indicates that accounts associated with the Russian government bought ads related to the 2016 election on Google. Although the goal of these ads remains unclear, they suggest that foreign entities could actively manipulate search engine results to influence election outcomes.

Bias in Social Media Search

Worldwide, social media platforms have become primary sources of news. More and more users rely on platforms such as Facebook and Twitter to seek information on current topics and trends. However, a growing concern is that users are exposed to unreliable content, conspiracy theories, pseudo-science, propaganda, and fabricated news reports. For instance, a major concern of governments and public health officials is the rapid spread of misinformation and disinformation surrounding the current COVID-19 pandemic and vaccination programs (Bridgman et al., 2021). Social media disinformation has fueled many conspiracy theories and misconceptions about the coronavirus pandemic, hampering containment measures and vaccination campaigns. Moreover, the fact that misinformation spreads so easily and quickly indicates that the designers and algorithms behind social media platforms are vulnerable to manipulation. Consequently, there has been much debate about the impact of social media platforms on the nature of content that users consume. While some studies have envisioned the importance of these platforms in forging democratic connections among individuals with varied political views (Xenos et al., 2014), others have warned that social media use introduces new biases and reinforces existing ones (Liu & Weber, 2014). As a result, many studies (e.g., Garimella et al., 2016; Coletto et al., 2017) have been dedicated to investigating controversial subjects and controversies on the internet.

Social media platforms use a variety of self-learning programs to improve their services and provide users with a unique experience. To achieve this, they embed algorithmic tools in their designs that collect and process users' personal data to curate and customize content on the internet. For example, targeted advertising, personalized social media feeds, recommendation systems, and search filter systems all enhance user experience (Kozyreva et al., 2021). Although many of these personalized services are harmless and facilitate a greater user experience (e.g., targeted advertising or product recommendations), others reinforce existing biases by discouraging open and transparent debate and encouraging political partisanship (Mazarr et al., 2019). For example, some studies have shown that political discussions on Twitter are highly polarized, with users exposed only to one-sided political views from within their social network (Kulshrestha et al., 2019). These results are consistent with other findings showing that social media users prefer to communicate with people who share their views and ideologies (Kulshrestha et al., 2019) and rarely engage in constructive discourse with users holding different ideologies. For example, regarding political discourse, Karthik et al. (2020) noted that “… the platform [Twitter] became an effective tool to amplify certain talking points which are intended to elicit a reaction from the individuals as against a well-thought-out response. This, in turn, led to Twitter being dominated by groups that act in a concerted fashion, many of which are politically and culturally rooted” (para. 2).

There is significant worry that personalized political content bearing untrue information and conspiracy theories from foreign entities influenced both the 2016 U.S. presidential election and the Brexit referendum (Persily, 2017; Mora-Cantallops et al., 2021). Moreover, social media algorithms can amplify conspiracy theories, misleading content, discriminatory content, and content that fuels political extremism and radicalization (Ben-David et al., 2016; Tao & Fisher, 2021; Almoqbel et al., 2019). In recent years, there have been increasing concerns about the combined effect of opinion dynamics and algorithmic filtering on social media platforms. Many believe this combination has amplified the spread of false and misleading content, especially regarding the COVID-19 pandemic and governments' containment and eradication measures. It reinforces misconceptions, conspiracy narratives, and dangerous beliefs about the pandemic (Enders et al., 2021) and potentially upends effective containment and collective responses.

For social media platforms, the data privacy and transparency of personalization algorithms are the principal reasons for concern (Kozyreva et al., 2021). These platforms rely on vast amounts of personal data to provide customized services and a greater user experience. Personal data lays the groundwork for a digital landscape in which service providers exploit behavioral data and patterns, collected either directly or by third parties, for monetary gain (Zuboff, 2019). The enhanced collection of behavioral data allows artificial intelligence algorithms to deduce more details than users intend to share, such as personality traits, sexual orientation, and political views (Grover & Mark, 2017; Hinds & Johnson, 2019). Exploiting behavioral and demographic data in targeted advertising may fuel discriminatory bias by preferentially targeting individuals from marginalized social groups (Datta et al., 2018; Kozyreva et al., 2021). In her book “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy,” Cathy O'Neil emphasizes that algorithms built on predictive analytics can negatively affect vulnerable groups (e.g., the poor), for instance when behavioral prediction algorithms are used in hiring. On a similar note, many have accused the 2016 Trump presidential campaign of spreading biased political messages on Facebook and Twitter to deter more than 3.5 million African Americans from voting (Sabbagh, 2020). Along the same lines, Ribeiro et al. (2019) suggest that the Russian Internet Research Agency used politically biased ads on social media platforms to sow social discord before the 2016 United States presidential election.

The Social Dilemma and Algorithm Manipulation

The 2020 Netflix documentary “The Social Dilemma” raises several legitimate concerns about the negative implications of social media technology for society. The documentary highlights how social media platforms harvest and monetize personal data via behavioral prediction algorithms, focusing on how social media giants exploit data from users' interactions with the platforms to influence the content they are shown. Using behavioral prediction technologies, giant technology companies can tailor content to keep users addicted and thereby generate more profit. According to Shoshana Zuboff, author of “The Age of Surveillance Capitalism,” social media platforms constitute a new kind of marketplace that never existed before, one that trades exclusively in human futures (Netflix, 2020). Achieving this requires collecting vast amounts of personal data.

Furthermore, Zuboff observes that the ability to guarantee successful, addictive ads has made these internet companies among the wealthiest in history. In addition, Jeff Seibert, a former Twitter executive, insists that every activity online is watched, monitored, and tracked, and every action is measured and exploited for economic gain (Netflix, 2020). The documentary also sheds light on how technology giants build prediction models to collect and manipulate user data.

While elements of user manipulation exist in traditional media, the nature of social media platforms and their ability to collect vast amounts of data amplify manipulation to even more harmful levels. Social media platforms dictate what users see and shape their opinions and attitudes. As Kak (2018) explained, “These algorithmic choices and business dynamics shape our exposure to opinion and fact and the range of sources from which we get them. The manipulation of their preferences may not interfere directly with an individual’s options, but, …perverts the way that person reaches decisions, forms preferences, or adopts goals. This distortion, too, is an invasion of autonomy” (para. 6). Kak's view is consistent with the views raised by the interviewees of the Netflix documentary: if left unchecked, “some algorithms run the risk of replicating and even amplifying human biases, particularly those affecting protected groups” (Netflix, 2020). Furthermore, the documentary points out how social media platforms rely on users' online behavior to determine the content they serve; they closely watch, track, monitor, and measure every action users take when they like, comment, and share.

For instance, troll farms, organizations that disseminate provocative content, exploit this by duplicating highly engaging viral content as their own (Netflix, 2020). Similarly, Cambridge Analytica exploited Facebook data to conduct widespread political manipulation. As Kak (2018) notes, “Cambridge Analytica used deceptive means (illegal in several countries) to gain access from Facebook to granular information about more than 50 million Americans and deployed it to tailor political messaging for Donald Trump’s (eventually successful) presidential campaign” (para. 8). There have been increasing calls to regulate political advertising on social media platforms, and the platforms themselves are implementing measures that address personalization and individual targeting to curb divisive political messages. For example, Twitter and Facebook have recently introduced restrictions on political advertising (Twitter, 2021; Facebook, 2021). However, a study by Ali et al. (2021) shows that Facebook amplifies political polarization through delivery algorithms that optimize for relevance and thus align with a user's political preferences and ideologies.

Pros and Cons of Algorithmic Personalization

The global business environment has been transformed by the advent of personalization algorithms. While traditional marketing took a broad approach, hoping ads would reach the target audience, advances in machine learning, big data analytics, and artificial intelligence have changed how companies market their products and services (Anshari et al., 2019). Internet users now expect to receive content and information tailored specifically for them. Companies have adopted personalization to expand their market size and ensure greater customer satisfaction, achieved in part by search engines embedded with algorithms that collect and analyze vast amounts of personal user data (Dixit et al., 2019). Using artificial intelligence, these algorithms collect behavioral data and curate content customized to a specific user. Search engines use this data to offer a more customized experience, and learning algorithms tailor messages, advertisements, and promotions to individual preferences.

For the user, personalization algorithms filter information and improve the experience by providing relevant content suggestions, individually targeted information, product and service recommendations, and customized responses. Proponents argue that information-filtering algorithms bring like-minded users together to share and reinforce their beliefs and ideologies (Haim et al., 2018), and that they can shield internet users from radical and fake content by enclosing them in what experts refer to as “filter bubbles” (Haim et al., 2018). Moreover, personalization improves the quality of results by matching users' queries with past browsing histories, reducing the time wasted on irrelevant content and sites. For example, the Google search engine stores user data and matches it with new queries so that users quickly find the information they seek.
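The history-matching idea described above can be sketched as a simple re-ranking step. This is a hypothetical illustration of the general technique, not Google's implementation; the term sets and affinity formula are assumptions:

```python
# Illustrative sketch of history-based personalization: re-rank candidate
# results by how strongly each result's terms overlap with terms drawn
# from the user's past browsing history. Hypothetical, not any real system.
from collections import Counter

def personalize(candidates, history_terms):
    """candidates: dict mapping url -> set of terms describing the page.
    history_terms: Counter of terms from pages the user visited before."""
    def affinity(url):
        # Sum the history frequency of every term the page shares.
        return sum(history_terms[t] for t in candidates[url])
    return sorted(candidates, key=affinity, reverse=True)

# A user whose history is dominated by health-related pages.
history = Counter(["vaccine", "health", "vaccine", "news"])
candidates = {
    "health-site.org": {"vaccine", "health"},
    "cooking-site.org": {"recipes", "food"},
}
ordered = personalize(candidates, history)
```

The health-related result rises to the top for this user, illustrating both the benefit (less time on irrelevant content) and the risk the next section discusses: content outside the user's history is systematically demoted.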

While personalization algorithms have many potential benefits, experts have raised several concerns and arguments against their implementation. The most prominent is the potential for bias, especially when the information is misleading or discriminatory. In addition, some scholars argue that personalization limits the range of relevant information a user is exposed to, because only content aligning with the user's preferences and search history appears on the results page. Another concern is the manipulation of search engine algorithms by giant media companies for political gain. For example, “The Social Dilemma” raises the alarm about how media companies exploit personal data collected by search engine algorithms for monetary gain by making users ever more addicted to and engaged with their platforms (Datta et al., 2014). Other studies have raised data privacy concerns, where third parties exploit personal data to target users and spread divisive political messages, as evident from the 2016 presidential election.

Conclusion

Personalization algorithms have become ubiquitous; they shape and influence every part of life and determine what kind of information or content users are exposed to. However, with the growing reliance on search engines and social media platforms, experts have raised concerns about the negative impact on individuals and society at large. For example, the 2020 Netflix documentary “The Social Dilemma” sheds light on how giants such as Google, Facebook, Twitter, and YouTube use personalization algorithms to collect and manipulate user data for economic gain. The digital ecosystem is reliant on vast amounts of data, and personalization algorithms have been optimized to collect, analyze, and exploit personal data to drive competition and ensure a greater customer experience. To achieve this, media companies monitor, collect, and measure every action a user performs on the internet. This has raised privacy concerns, with experts calling for increased regulation to restrict the amount of personal information that technology giants can collect. In addition, experts have warned that personalized services introduce new biases and reinforce existing ones. A significant concern is that users are exposed to unverifiable or divisive content that may fuel social discord, political extremism, and radicalization.

As more users rely on search systems to obtain information about current events, topics, and news, biases can negatively shape users' opinions and perceptions. Therefore, regulatory bodies and media companies must design effective mechanisms to minimize potential biases and identify areas where biases may mislead users.

References

Abri, S., Abri, R., & Çetin, S. (2020). Estimating personalization using Topical User Profile. In KDIR (pp. 145–152).

Anshari, M., Almunawar, M. N., Lim, S. A., & Al-Mudimigh, A. (2019). Customer relationship management and big data enabled: Personalization & customization of services. Applied Computing and Informatics, 15(2), 94–101.

Baeza-Yates, R. (2018). Bias on the web. Communications of the ACM, 61(6), 54–61.

Bridgman, A., Merkley, E., Loewen, P. J., Owen, T., Ruths, D., Teichmann, L., & Zhilin, O. (2020). The causes and consequences of COVID-19 misperceptions: Understanding the role of news and social media. Harvard Kennedy School Misinformation Review, 1(3).

Ben-David, A., & Fernández, A. M. (2016). Hate speech and covert discrimination on social media: Monitoring the Facebook pages of extreme-right political parties in Spain. International Journal of Communication, 10, 27.

Bostoen, F. (2018). Neutrality, fairness or freedom? Principles for platform regulation. Internet Policy Review, 7(1), 1–19.

Datta, A., Tschantz, M. C., & Datta, A. (2014). Automated experiments on ad privacy settings: A tale of opacity, choice, and discrimination. arXiv preprint arXiv:1408.6491.

Dixit, A., Rathore, V. S., & Sehgal, A. (2019). Improved Google Page Rank Algorithm. In Emerging trends in expert applications and security (pp. 535–540). Springer, Singapore.

Epstein, R., Robertson, R. E., Lazer, D., & Wilson, C. (2017). Suppressing the search engine manipulation effect (SEME). Proceedings of the ACM on Human-Computer Interaction, 1(CSCW), 1–22.

Eslami, M., Karahalios, K., Sandvig, C., Vaccaro, K., Rickman, A., Hamilton, K., & Kirlik, A. (2016, May). First I "like" it, then I hide it: Folk theories of social feeds. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (pp. 2371–2382).

Enders, A. M., Uscinski, J. E., Klofstad, C., & Stoler, J. (2020). The different forms of COVID-19 misinformation and their consequences. The Harvard Kennedy School Misinformation Review.

Grover, T., & Mark, G. (2017, September). Digital footprints: Predicting personality from temporal patterns of technology use. In Proceedings of the 2017 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2017 ACM International Symposium on Wearable Computers (pp. 41–44).

Goldman, E. (2008). Search engine bias and the demise of search engine utopianism. In Web Search (pp. 121–133). Springer, Berlin, Heidelberg.

Haim, M., Graefe, A., & Brosius, H. B. (2018). Burst of the filter bubble? Effects of personalization on the diversity of Google News. Digital Journalism, 6(3), 330–343.

Hinds, J., & Joinson, A. (2019). Human and computer personality prediction from digital footprints. Current Directions in Psychological Science, 28(2), 204–211.

Kak, A. U. (2018). Cambridge Analytica and the political economy of persuasion. Economic & Political Weekly, 53(20). www.epw.in/engage/article/cambridge-analytica-and-political-economy-persuasion.

Kay, M., Matuszek, C., & Munson, S. A. (2015, April). Unequal representation and gender stereotypes in image search results for occupations. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 3819–3828).

Kulshrestha, J., Eslami, M., Messias, J., Zafar, M. B., Ghosh, S., Gummadi, K. P., & Karahalios, K. (2019). Search bias quantification: investigating political bias in social media and web search. Information Retrieval Journal, 22(1), 188–227.

Koutra, D., Bennett, P. N., & Horvitz, E. (2015). Events and controversies: Influences of a shocking news event on information seeking. In Proceedings of the 24th International Conference on World Wide Web (pp. 614–624). https://doi.org/10.1145/2736277.2741099.

Kozyreva, A., Lorenz-Spreen, P., Hertwig, R., Lewandowsky, S., & Herzog, S. M. (2021). Public attitudes towards algorithmic personalization and use of personal data online: evidence from Germany, Great Britain, and the United States. Humanities and Social Sciences Communications, 8(1), 1–11.

Mazarr, M. J., Bauer, R. M., Casey, A., Heintz, S. A., & Matthews, L. J. (2019). The emerging risk of virtual societal warfare: Social manipulation in a changing information environment. RAND Corporation.

Mustafaraj, E., Lurie, E., & Devine, C. (2020, January). The case for voter-centered audits of search engines during political elections. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 559–569).

Metaxa, D., Park, J. S., Robertson, R. E., Karahalios, K., Wilson, C., Hancock, J., & Sandvig, C. (2021). Auditing Algorithms: Understanding Algorithmic Systems from the Outside In. Foundations and Trends® in Human–Computer Interaction, 14(4), 272–344.

Mora-Cantallops, M., Sánchez-Alonso, S., & Visvizi, A. (2021). The influence of external political events on social networks: The case of the Brexit Twitter Network. Journal of Ambient Intelligence and Humanized Computing, 12(4), 4363–4375.

McKelvey, F. (2014). Algorithmic media need democratic methods: Why publics matter. Canadian Journal of Communication, 39(4), 597–613.

Netflix. (2020). The Social Dilemma [Documentary film].

Noble, S. U. (2018). Algorithms of oppression. New York University Press.

Pan, B., Hembrooke, H., Joachims, T., Lorigo, L., Gay, G., & Granka, L. (2007). In Google we trust: Users' decisions on rank, position, and relevance. Journal of Computer-Mediated Communication, 12(3), 801–823.

Persily, N. (2017). The 2016 US election: Can democracy survive the internet? Journal of Democracy, 28(2), 63–76.

Rathod, P., & Desmukh, S. (2017, September). A personalized mobile search engine based on user preference. In 2017 IEEE International Conference on Power, Control, Signals and Instrumentation Engineering (ICPCSI) (pp. 1136–1141). IEEE.

Sabbagh, D. (2020). Trump 2016 campaign 'targeted 3.5m black Americans to deter them from voting'. The Guardian. www.theguardian.com/us-news/2020/sep/28/trump-2016-campaign-targeted-35m-black-americans-to-deter-them-from-voting.

Saura, J. R., Palacios-Marqués, D., & Iturricha-Fernández, A. (2021). Ethical design in social media: Assessing the main performance measurements of user online behavior modification. Journal of Business Research, 129, 271–281.

Tao, X., & Fisher, C. B. (2021). Exposure to social media racial discrimination and mental health among adolescents of color. Journal of Youth and Adolescence, 1–15.

Yusuf, N., Yunus, M. A. M., & Wahid, N. (2019). A comparative analysis of web search query: informational vs navigational queries. Int. J. Adv. Sci. Eng. Inf. Technol, 9(1), 136–141.

Xenos, M., Vromen, A., & Loader, B. D. (2014). The great equalizer? Patterns of social media use and youth political engagement in three advanced democracies. Information, Communication & Society, 17(2), 151–167.
