Quantity vs Quality: How X/Twitter’s Algorithm Shapes Our Consumption of News

Stephanie Wang
Published in ACM CSCW
Oct 16, 2024

This blog post covers the paper “Lower Quantity, Higher Quality: Auditing News Content and User Perceptions on Twitter/X Algorithmic versus Chronological Timelines” by Stephanie Wang, Shengchun Huang, Alvin Zhou and Danaë Metaxa. The paper presents a three-week sociotechnical audit of X/Twitter’s timeline algorithm to investigate how the algorithm curates news on the platform while also tracking how user behaviors and perceptions change in response. The work will be presented at The 27th ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW 2024).


Now more than ever, people are relying on social media to inform them of what is happening in the world. Today, half of U.S. adults get their news from social media, and 52% of X/Twitter users regularly turn to this particular platform for their news updates. While digital dissemination of news can make information more accessible, turning to algorithmically-mediated platforms presents its own set of challenges, as algorithms may amplify exposure to radical or unreliable information, reinforce people’s selectivity bias for like-minded political content, and thereby undermine the informed public necessary for civic participation in democratic societies. Given this backdrop, this work investigates the effect of X/Twitter feed ranking algorithms on 1) the curation of political news on the platform, and 2) differences in user behavior and perception driven by different algorithmic environments.

Study design

We follow the sociotechnical audit (STA) method defined in prior work. By combining a traditional algorithm audit with a user audit, we seek to understand how changes to this algorithmic system change its outputs, alongside the impact algorithmic content subsequently has on user perceptions and behaviors.

We deployed a custom-built system consisting of a browser extension and web application that tracked participants’ X/Twitter usage and enacted in-situ interventions to the homepage over the course of three weeks. In the first week, we passively tracked all tweets loaded in users’ browsers; then in the second week we enacted an intervention on users’ X/Twitter homepages to restrict them to either the algorithmic (For You) or chronological (Following) timeline (randomized for each user). In the third week we flipped this condition for each user. At the end of each week we deployed a survey asking participants about their satisfaction with X/Twitter that week, their perceptions of the platform and its content, and their perceptions of others’ experiences on the platform.

Full timeline of the study, with three week-long phases: observation, first intervention, and second intervention, each followed by a survey. In the observation phase users viewed their X/Twitter timelines as usual. In the first intervention we randomly assigned users to the algorithmic or chronological condition; in the second intervention we flipped each user's condition. We deployed surveys at the outset of the experiment and at the end of each phase.
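The crossover assignment described above can be sketched in a few lines. This is a minimal illustration of the design, not the authors' actual study code; the function and condition names are hypothetical.

```python
import random

def assign_conditions(user_ids, seed=0):
    """Randomly assign each user a timeline condition for the first
    intervention week, then flip it for the second (crossover design)."""
    rng = random.Random(seed)
    assignments = {}
    for uid in user_ids:
        first = rng.choice(["algorithmic", "chronological"])
        second = "chronological" if first == "algorithmic" else "algorithmic"
        assignments[uid] = {"week2": first, "week3": second}
    return assignments

print(assign_conditions(["u1", "u2", "u3"]))
```

Because every participant experiences both conditions, each user serves as their own control, which strengthens within-subject comparisons.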

Results

Our audit ran in late 2023, collecting user-centered metrics (self-reported survey measures) and platform-centric metrics (views, clicks, likes, retweets) for 243 study participants, along with over 800,000 tweets seen by those participants.

Algorithm audit results. We found that X/Twitter’s algorithmic timeline had a moderating effect in shaping the news content that users encountered on the platform, compared to users’ chronological timelines. While participants were exposed to less news in the algorithmic condition, the news sources were significantly less extreme and less ideologically congruent to users and slightly more reliable compared to the news users were exposed to in the chronological timeline condition.

Algorithmic effects on news curation (mixed-effects models). Users assigned to the algorithmic timeline were exposed to tweets with less congruent and less extreme news links. The effect on reliability was not significant.
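Analyses like these are typically fit as mixed-effects models with a random intercept per user, which accounts for the repeated measures of the crossover design. A minimal sketch using statsmodels, with fabricated data and hypothetical variable names (not the authors' code or data):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per user-week, with the assigned timeline
# condition and the mean extremity of news links that user saw that week.
df = pd.DataFrame({
    "user_id":   [1, 1, 2, 2, 3, 3, 4, 4],
    "condition": ["algorithmic", "chronological"] * 4,
    "extremity": [0.21, 0.35, 0.18, 0.30, 0.25, 0.33, 0.20, 0.28],
})

# Random intercept per user; the fixed effect of condition estimates the
# average within-user difference between the two timelines.
model = smf.mixedlm("extremity ~ C(condition)", df, groups=df["user_id"])
result = model.fit()
print(result.params)
```

In this fabricated data every user sees more extreme news in the chronological condition, so the condition coefficient comes out positive, mirroring the direction of the paper's extremity result.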

Impact on user behavior. Behavioral evidence revealed that participants saw significantly more tweets and engaged (liked, commented on, and retweeted) more with tweets in the algorithmic condition, suggesting that the recommendation algorithm was better at holding user attention. However, while users engaged more with tweets containing news links in the algorithmic timeline, they also saw less news in this condition.

Algorithmic effects on user behavior (mixed-effects models). Users assigned to the algorithmic timeline used and engaged with X/Twitter more, but were exposed to fewer news links. The effect on their engagement with news tweets was not significant.

Impact on user perceptions. Although users’ perceptions of the differences in ideological bias of the news they were exposed to aligned with the changes in their timelines, attitudes towards broader concepts such as overall platform bias and trust in news remained the same. Notably, users were significantly less satisfied with the user experience in the intervention periods compared to the observation period, suggesting that they disliked the loss of control caused by being restricted to only one of the two timelines.

Contrary to public concerns about X/Twitter, the algorithmic timeline did not prioritize engagement at the expense of news quality; instead, engagement and quality were both higher compared to the chronological feed. And although users were sensitive to differences in the ideological bias of the news they were exposed to (and despite our intervention making substantial changes to their Twitter/X content), most of their self-reported perceptions and attitudes remained stable throughout the experiment. Said differently, there is a gap between users' information consumption and their perceptions of that information, suggesting that attitudes are more resilient to interventions in algorithmic content.

Take-aways

Building on previous research across disciplines such as political science, communication, and computer science (particularly CSCW), we broaden our understanding of the social media landscape concerning news by incorporating user experiences in an audit of algorithmic content on X/Twitter.

Our study demonstrates a method for collecting ecologically valid exposure data (i.e. what real users are exposed to during natural platform use) without internal access provided by the platform. This is a promising avenue for independent social media researchers. Our results contribute a richer understanding of human-algorithm interaction — that while the system may perform well technically, users may remain stubborn in their beliefs about the system. These results suggest a need to re-calibrate technical performance and user perceptions so they match more closely, beginning with centering and studying the users.

For a complete description of our methods and results, we invite you to read our full paper:

Stephanie Wang, Shengchun Huang, Alvin Zhou, and Danaë Metaxa. 2024. Lower Quantity, Higher Quality: Auditing News Content and User Perceptions on Twitter/X Algorithmic versus Chronological Timelines. Proc. ACM Hum.-Comput. Interact. 8, CSCW2, Article 507 (November 2024), 25 pages. https://doi.org/10.1145/3687046


Published in ACM CSCW

Research from the ACM conference on computer-supported cooperative work and social computing


Written by Stephanie Wang

PhD Student at Penn, interests in HCI and computational social science. Twitter: @stephanietwang
