Beyond the Byline: Exploring Bias in Human- and AI-Generated News

UF J-School
CJC Insights

--

In the world of news, artificial intelligence (AI) has been a silent partner far longer than most realize. Narrative Science began delivering automated sports stories to news outlets in 2010, the Associated Press started automating financial stories in 2014, and the Washington Post began using AI tools in 2016 to cover the Olympics and the election.

A research team at the University of Florida College of Journalism and Communications (UFCJC) led by Seungahn Nah, Dianne Snedaker Chair in Media Trust and research director for the Consortium on Trust in Media and Technology, and including doctoral students Mo Chen and Renee Mitson, wondered whether, as AI is increasingly used to create content, algorithmic biases creep into the telling of news.

Specifically, they sought to investigate how AI-generated news compared to human-written articles in terms of linguistic features, tone and bias toward gender and race/ethnicity when reporting on the highly charged topics of abortion and immigration.

To answer this question, the researchers collected 2,000 news transcripts from CNN and Fox News, focusing on articles related to abortion and immigration published between 2012 and 2022. They then trained GPT-2 language models on a sample of 500 articles from each topic-network category (each combination of outlet and topic) and generated an equal number of AI-written articles for comparison.
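The article doesn't include the team's code, but the workflow it describes, fine-tuning a language model on a corpus and then sampling new text from it, is a standard one. Below is a minimal sketch of that recipe using the Hugging Face transformers and datasets libraries, assuming one plain-text file of articles per topic-network category; the file name, base checkpoint, and hyperparameters are illustrative, not the authors' settings.

```python
# A minimal sketch of the general recipe described above, not the authors'
# actual pipeline: fine-tune GPT-2 on one topic-network corpus and sample
# new "AI-written" articles from it. The file name, checkpoint size,
# hyperparameters, and prompt are all illustrative assumptions.
from datasets import load_dataset
from transformers import (DataCollatorForLanguageModeling, GPT2LMHeadModel,
                          GPT2TokenizerFast, Trainer, TrainingArguments)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Assumed input: one training article per line, for a single
# topic-network category such as CNN's abortion coverage.
dataset = load_dataset("text", data_files={"train": "cnn_abortion.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_set = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-cnn-abortion",
                           num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=train_set,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Sample one machine-generated article from a short topical prompt.
inputs = tokenizer("Abortion", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=300,
                         do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Repeating this for each of the four topic-network corpora would yield the matched sets of human and AI articles the study compares.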

The results revealed some intriguing differences between human and machine-generated news. While AI articles tended to be more focused on the core topic, human-written pieces covered a wider range of themes and perspectives. Linguistic analysis showed that machine-generated news used certain words and phrases more frequently than human-written pieces did, potentially amplifying biases.
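The summary doesn't name the specific linguistic measures the team used, so the sketch below shows just one generic way to surface words a machine-generated corpus over-uses relative to a human one: a smoothed log-ratio of relative word frequencies. The corpus file names are placeholders.

```python
# Generic sketch: rank words by a smoothed log-frequency ratio between an
# AI-generated corpus and a human-written one. The paper's own linguistic
# analysis may differ; the file names here are placeholders.
import math
import re
from collections import Counter

def word_freqs(path):
    """Lowercased token counts and total token count for one corpus."""
    with open(path, encoding="utf-8") as f:
        words = re.findall(r"[a-z']+", f.read().lower())
    return Counter(words), len(words)

human, n_human = word_freqs("human_articles.txt")
ai, n_ai = word_freqs("ai_articles.txt")

# Positive scores mark words over-represented in the AI corpus; add-one
# smoothing keeps words absent from one corpus from blowing up the ratio.
scores = {
    w: math.log((ai[w] + 1) / (n_ai + 1))
       - math.log((human[w] + 1) / (n_human + 1))
    for w in set(ai) | set(human)
}

for word, score in sorted(scores.items(), key=lambda kv: kv[1],
                          reverse=True)[:20]:
    print(f"{word:20s} {score:+.3f}")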

However, the study also found that AI-generated news was not necessarily more biased than human reporting. In fact, machine-written articles contained less biased content related to gender and race/ethnicity compared to their human counterparts, particularly in the case of immigration coverage.

These findings raise questions about the role of AI in journalism and the future of news production. While machines may excel at generating focused, fact-based content, they may also struggle to capture the nuance and context that human reporters bring to their work. The study highlights the need for transparency and accountability in the use of AI tools in newsrooms, as well as ongoing research into the potential benefits and risks of automated journalism.

As the researchers conclude, “The results and their related implications enable us to pose a fundamental question of whether and how AI-generated news may reflect news bias represented in human news (‘algorithmic bias’) or reconstruct human news in distinct ways (‘algorithmic reframing’ or ‘algorithmic reconstruction.’)”

As this study shows, biases can creep into both human and machine-generated journalism. The quest for truth in news is a shared responsibility, one that requires ongoing collaboration between humans and AI in this brave new world.

The original paper, “Algorithmic Bias or Algorithmic Reconstruction? A Comparative Analysis Between AI News and Human News,” was published in the International Journal of Communication, Vol. 18 (2024).

Authors: Seungahn Nah, Jun Luo, Seungbae Kim, Mo Chen, Renee Mitson and Jungseock Joo.

This summary was written by Gigi Marino.

--

News and insights from the College of Journalism and Communications at the University of Florida (@UF).