Is it the Algorithms or Us?
The 2018 elections have highlighted the dramatically divergent partisan realities of Americans. These elections are, however, but a point in a trajectory of increasing partisanship in the United States that dates back at least a generation. Some have suggested that this polarization has been driven by the rise of the Internet; that our algorithmically curated reality has created a “filter bubble” — sealing us off from the information and people that we don’t agree with.
While social media has been the dominant focus of filter bubble concerns to date, basic web search plays a more central role in the online information ecosystem. This is evident from recent surveys of web usage, which have shown that 86% of people use search engines on a daily basis, and that more people obtain their news from web search than from social media. More concerning here, however, is a line of experimental research showing that search engines can substantially shift undecided voters’ preferences under certain conditions. Together, these findings suggest an urgent need for more research on partisanship in web search, especially on Google, the world’s most popular search engine.
How can we peer into the black box of search? Algorithm audits provide one effective tool for doing so. This method involves systematically manipulating the inputs into online systems like web search — effectively treating the algorithm as the subject — enabling the researcher to sketch an outline of the system’s responses to a given array of stimuli. In early 2017, around the time of Donald Trump’s inauguration, we put this method into practice by conducting an algorithm audit of Google Search.
To explore the filter bubble hypothesis within Google Search, we recruited 187 participants to complete a survey about their political preferences and install a browser extension that we had designed. Once activated, the extension conducted a series of Google searches from participants’ computers. For each query it submitted to Google, it collected two search engine results pages: one personalized and one non-personalized (the latter was conducted in Chrome’s Incognito mode). In total, we conducted over 15,000 of these paired searches over the course of four weeks.
After developing and validating a measure to quantify the partisanship of web domains, we found no evidence for the filter bubble hypothesis in web search. That is, we found no significant or substantial differences in partisanship between the personalized and non-personalized results that our participants received, despite prior research documenting some degree of personalization in Google Search. Examining only the personalized results, we also found no evidence that the partisanship of Google search results significantly differs for self-reported Democrats and Republicans (i.e., Democrats did not receive more left-leaning results than Republicans, or vice versa).
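To make the comparison concrete, here is a minimal sketch of how a paired audit like this can be scored. It is not the paper’s actual implementation: the domain names and scores below are hypothetical, and the real measure was derived from audience data rather than hand-assigned values. The idea is simply that each domain carries a partisanship score, a results page is scored by averaging its domains, and the personalized-minus-incognito gap is examined per paired search.

```python
# Illustrative sketch only — hypothetical domains and scores,
# not the audit's actual data or measure.
# Convention: -1 = left-leaning audience, +1 = right-leaning audience.
DOMAIN_SCORES = {
    "example-left.com": -0.6,
    "example-center.com": 0.0,
    "example-right.com": 0.7,
}

def page_partisanship(result_domains):
    """Mean partisanship of the domains on one results page.

    Domains without a known score are treated as neutral (0.0).
    """
    scores = [DOMAIN_SCORES.get(d, 0.0) for d in result_domains]
    return sum(scores) / len(scores) if scores else 0.0

def paired_difference(personalized, incognito):
    """Personalized-minus-incognito gap for one paired search.

    Values near zero across many pairs would indicate that
    personalization is not shifting the partisanship of results.
    """
    return page_partisanship(personalized) - page_partisanship(incognito)
```

Under this sketch, a filter bubble would show up as a systematic nonzero gap across thousands of paired searches; the audit found no such gap.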
Although we did not find evidence supporting the filter bubble, we did find some interesting patterns of partisanship within the presentation of search results. For example, we found large differences in the partisanship of search results by query. That is, the partisanship observed in search engine rankings depends enormously on the queries that produce them (e.g., searches related to the query “conservative” were relatively right-leaning, while those related to the query “democrat” were relatively left-leaning).
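This query dependence can be illustrated with the same kind of page-level scores. The observations below are hypothetical, made-up values chosen only to show the aggregation, not results from the audit:

```python
from collections import defaultdict

# Hypothetical (query, page-level partisanship score) observations.
# Convention: negative = left-leaning, positive = right-leaning.
observations = [
    ("conservative", 0.5), ("conservative", 0.3),
    ("democrat", -0.4), ("democrat", -0.2),
]

def mean_partisanship_by_query(obs):
    """Average the page-level scores separately for each query,
    exposing how much partisanship varies with query choice."""
    by_query = defaultdict(list)
    for query, score in obs:
        by_query[query].append(score)
    return {q: sum(scores) / len(scores) for q, scores in by_query.items()}
```

Grouping by query like this is what reveals that the queries users choose, rather than personalization, drive most of the partisanship observed in rankings.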
Among the queries we searched, we also found that Google’s embedded Twitter results often featured links to webpages that leaned right relative to the rest of the search results, and these embedded tweets were typically highly ranked. Moreover, these embedded tweets almost always appeared for searches related to the query “Donald Trump.” Given that “Donald Trump” was among Google’s top-ten searches in 2016 — garnering twice as many searches as “Hillary Clinton” during the election — Google’s design decision to include tweets in their search results may have amplified the reach of Trump’s Twitter account at a critical time.
Taken together, the findings of our audit suggest three conclusions. First, personalization in web search does not appear to be contributing to filter bubbles. Second, Google’s design decisions can impact the partisanship of web search by amplifying voices from Twitter. Third, query selection is a crucial determinant of the partisanship observed in web search.
These findings redirect our attention to a pressing, critical, and perhaps uncomfortable question. To the extent that online platforms have an impact on public opinion, how much of that impact is due to their design and algorithms, and how much is simply due to the way that users exert their agency on them? That is, how much of the problem is due to the way that users select queries? Our findings provide some evidence, at least in the case of Google Search, that it might not be the algorithms. That maybe we’re looking for the answers we agree with, rather than the truth. That maybe it’s not the algorithms. Maybe it’s us.
Robertson, R. E., Jiang, S., Joseph, K., Friedland, L., Lazer, D., & Wilson, C. (2018). Auditing partisan audience bias within Google Search. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW), Article 148. DOI: 10.1145/3274417. (pdf)