Tracking gender representation across a decade of cable news using AI
As America celebrates the 100th anniversary of the 19th Amendment and the constitutional right for 68 million women to vote, there are many ways to measure how far they have come. According to the 2019 Status of Women in U.S. Media report, “despite some gains, men still dominate in every part of news, entertainment and digital media.” This imbalanced representation of gender can cause or perpetuate gender stereotypes. Consider, for example, how much screen time women get on cable TV news.
The graph below generated by the Stanford Cable TV News Analyzer compares the fraction of time male-presenting and female-presenting¹ individuals appear on screen.
We find female-presenting faces are on screen about 29% of the time. Male-presenting faces are on screen 61% of the time (a 2.1-to-1 male-to-female ratio). There is a steady movement toward gender parity over the decade (the 2.4-to-1 male-to-female screen time ratio in 2010 dropped to 1.9-to-1 in 2020), but the rate of change is slow. These results reinforce prior observations of the under-representation of women in news media.
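The ratios quoted above follow directly from the on-screen fractions; a minimal check, using the overall figures from this paragraph (61% male-presenting, 29% female-presenting):

```python
# Male-to-female screen-time ratio from the reported on-screen fractions.
male_fraction = 0.61
female_fraction = 0.29

ratio = male_fraction / female_fraction
print(f"{ratio:.1f}-to-1")  # 2.1-to-1
```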
A deeper look shows all three major cable news channels exhibit similar disparities in the screen time of male- and female-presenting individuals.
Gender balance of news presenters
Different trends emerge when we limit our screen-time analysis to news presenters only. As shown below, there is a shift toward gender parity from 2010 to 2015, with the male-to-female screen time ratio bottoming out at 1.4-to-1 in 2015. The trend then reverses, and the ratio climbs back to 1.8-to-1 by 2020.
The different channels exhibit different trends in the screen time of news presenters, hosts, and anchors. As shown below, CNN achieved near gender parity in news presenter screen time in the early months of 2012 and again in June 2015. However, CNN has recently become the most gender imbalanced of the three networks (in terms of news presenter screen time). By the end of August 2020, the male-to-female ratio of news presenter screen time on CNN is 2.0-to-1, compared to 1.8-to-1 on MSNBC and 1.2-to-1 on Fox News.
While Fox News now exhibits the least gender imbalance for news presenters of the three channels, it exhibits the greatest gender imbalance for individuals who are not news presenters (on-air guests, faces in B-roll footage, etc.). In 2020, the fraction of screen time given to female-presenting faces who are not news presenters has been significantly lower on Fox News than on CNN and MSNBC (a 2.7-to-1 male-to-female ratio on Fox News vs. 2.0-to-1 on both MSNBC and CNN).
Using machine learning as a tool for news analysis
The Stanford Cable TV News Analyzer uses AI techniques to automatically detect faces in cable news video and then estimate the presented gender of each individual, across a decade of nearly 24/7 news coverage from Fox News, CNN, and MSNBC since January 1, 2010. These faces appear in many contexts: they may belong to news presenters or on-air guests, or appear in B-roll footage or static infographics.

We take inspiration from current media initiatives seeking to improve gender representation. The BBC’s 50:50 Project aims to increase engagement and encourage gender balance as part of regular editorial conversations, tracking gender representation on a monthly basis through partnerships with 70 news organizations in over 20 countries. GenderAvenger also measures who talks as a means of ensuring women are included in the public dialog. Most news monitoring efforts, however, rely on manually coding video (human labeling of people, topics, etc.). This is a highly labor-intensive process that limits both the amount of media that can be observed and the variety of labels collected (e.g., counting the number of individuals who appear, but not their screen time). Automatically annotating much larger amounts of video stands to improve understanding of longitudinal trends in gender representation on the news.
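The Analyzer's actual data model is not shown here, but the screen-time aggregation it describes can be sketched as follows. This is a hypothetical illustration: the `FaceDetection` record, its field names, and the idea of crediting each sampled detection with one sampling interval of screen time are all assumptions, not the project's real implementation.

```python
from dataclasses import dataclass

# Hypothetical per-frame detection record (field names are assumptions).
@dataclass
class FaceDetection:
    timestamp_s: float   # timestamp of the sampled frame within the video
    gender_label: str    # "male" or "female" (presented-gender estimate)

def screen_time_by_gender(detections, frame_interval_s=3.0):
    """Approximate screen time per presented-gender label by crediting
    each detection with one sampling interval of on-screen time."""
    totals = {"male": 0.0, "female": 0.0}
    for det in detections:
        totals[det.gender_label] += frame_interval_s
    return totals

# Toy example: four sampled frames, 3 seconds apart.
dets = [
    FaceDetection(0.0, "male"),
    FaceDetection(3.0, "male"),
    FaceDetection(3.0, "female"),  # two faces on screen in the same frame
    FaceDetection(6.0, "male"),
    FaceDetection(9.0, "female"),
]
totals = screen_time_by_gender(dets)
print(totals)                             # {'male': 9.0, 'female': 6.0}
print(totals["male"] / totals["female"])  # 1.5 (male-to-female ratio)
```

Note that multiple faces in the same frame each accrue screen time, so the per-gender totals can sum to more than the video's duration.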
A critical challenge of using automated techniques is that the gender estimates made by computational models contain errors. For example, researchers have shown that widely used gender-estimation “classifiers performed worst for darker females” and for transgender individuals. Understanding the prevalence and nature of labeling errors (including forms of bias) is important to building trust in analysis results. Our methodology page discusses our efforts to assess the accuracy of our face detection and presented-gender estimation models. Overall, the gender classifier agreed with human labels 97.2% of the time; it was more accurate when classifying male-presenting faces (98.8%) than female-presenting faces (93.8%).
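The overall and per-class agreement figures above can be computed in the same way. The sketch below uses toy labels, not the study's validation data; it only illustrates the kind of comparison being described (fraction of model labels that match human labels, overall and restricted to one class):

```python
# Agreement between model labels and human labels, overall and per class.
# The label lists below are toy data for illustration only.
def agreement(human, model, cls=None):
    pairs = [(h, m) for h, m in zip(human, model) if cls is None or h == cls]
    return sum(h == m for h, m in pairs) / len(pairs)

human = ["M", "M", "F", "F", "M", "F", "M", "M"]
model = ["M", "M", "F", "M", "M", "F", "M", "M"]

print(agreement(human, model))       # overall agreement: 0.875
print(agreement(human, model, "M"))  # male-presenting faces: 1.0
print(agreement(human, model, "F"))  # female-presenting faces: ~0.667
```

Restricting to a single human-assigned class, as the `cls` argument does, is what makes the per-gender accuracies comparable to the 98.8% and 93.8% figures quoted above.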
We acknowledge that simplifying presented gender to a binary (male/female) quantity fails to represent many non-binary, transgender, and gender non-conforming individuals. Further, an individual’s presented gender may differ from their gender identity, causing algorithmic attempts to infer gender from facial appearance to fail for these individuals. Despite these limitations, using technology to estimate binary presented-gender labels still provides useful insights into the presentation of cable TV news, and illuminates important biases in the screen time given to female- and male-presenting groups.
Try it out yourself!
Examine how gender is represented in cable news and how your favorite cable news network compares to the competition. For example, how much screen time did female presidential candidates in the 2020 Democratic primary receive? How does female screen time compare on two shows? How does the male-to-female screen time ratio change when different topics are discussed? We invite you to our Getting Started page to learn more about how to measure the content of cable news on your own.
Stanford University Student Research Assistants: James Hong (student lead), Jacob Ritchie, Jeremy Barenholtz, Will Crichton, Daniel Fu, Ben Hannel, Michaela Murray, Xinwei Yao, Haotian Zhang
Maneesh Agrawala: Forest Baskett Professor of Computer Science and Director of the Brown Institute for Media Innovation at Stanford University
Kayvon Fatahalian: Assistant Professor of Computer Science, Stanford University
Geraldine Moriba: Journalist, Documentary Filmmaker, Broadcast News Executive, Stanford University Brown Institute for Media Innovation Fellow and Research Scientist, and former John S. Knight Journalism Fellow
Acknowledgments: The Stanford Cable TV News Analyzer is a collaboration between researchers at Stanford University’s Computer Science Department, the Brown Institute for Media Innovation, and the Internet Archive.
¹ Gender presentation (also referred to as gender expression) reflects an individual’s external expression of their gender (through cues such as facial features, makeup, hairstyle, and clothing), which may differ from both their gender identity and their birth sex. By attempting to associate individuals with “male” and “female” labels based on their facial appearance, the Stanford Cable TV News Analyzer groups individuals by their gender presentation.