Are Google Ads Really Gender Biased?

Will Rinehart
3 min read · Aug 29, 2016

In the debate over social media and algorithmic filters, a Carnegie Mellon University study is often cited to support the suspicion that online ads are gender biased. Using an innovative new tool called AdFisher, researchers found that women were served significantly fewer ads than men for jobs paying more than $200,000. Yet the study has some serious methodological issues, giving it an effective sample size of 2. While gender discrimination clearly does exist in society, ad bias doesn’t find firm support in this study.

The study details the ins and outs of the AdFisher tool, built by researchers at Carnegie Mellon University and the International Computer Science Institute. To explore how Google ads interact with Google’s Ad Settings, AdFisher creates an agent from “a fresh browser instance with no browsing history, cookies, or other personalization.” The program then randomly assigns each of these agents to a group and applies a treatment, depending on the topic. AdFisher then collects the ads shown to the browser, which can be analyzed for differences between groups.
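The core of that design can be sketched in a few lines of Python. This is a simplified illustration of the randomized-agent loop described above, not AdFisher’s actual code; `collect_ads` is a hypothetical stand-in for the browser automation the real tool performs.

```python
import random

def run_experiment(treatments, n_agents, collect_ads):
    """Randomly assign fresh agents to treatment groups and collect ads.

    `treatments` maps a group name to a function that applies that
    treatment to an agent. `collect_ads(agent, group)` is a hypothetical
    stand-in for visiting a site and recording the ads served.
    """
    results = {name: [] for name in treatments}
    for _ in range(n_agents):
        # Each agent starts as a fresh profile: no history, no cookies.
        agent = {"history": [], "cookies": {}}
        group = random.choice(list(treatments))  # random group assignment
        treatments[group](agent)                 # apply the treatment
        results[group].append(collect_ads(agent, group))
    return results
```

Random assignment is what lets the researchers attribute any difference in collected ads to the treatment rather than to the agents themselves.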

Gender discrimination was one of the first tests conducted. The agent profiles were set to either male or female with one group collecting ads and another simulating an interest in jobs. As reported in MIT Technology Review, “they found that fake Web users believed by Google to be male job seekers were much more likely than equivalent female job seekers to be shown a pair of ads for high-paying executive jobs when they later visited a news website.”

The problem with the study lies in the sites where the data was collected. Five experiments were conducted: four focused on the Times of India web site, and the fifth collected ads served on The Guardian. Effectively, the sample size was 2. And across the five experiments, in which 22,000 to 43,000 ads were collected, the researchers marked only one as a violation of equality. Just one.

Ensuring your sample matches the larger population of interest is among the most important parts of any study, and sadly this one doesn’t get it right. Google segments ad content by web site, allowing advertisers to reach people of specific demographics. Selecting two web sites means the agents were likely served content tailored to the audiences of those specific sites, neither of which caters primarily to the United States. So the population studied includes just the visitors to those two sites and thus isn’t representative.
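To make the worry concrete, here is a toy illustration, with invented counts that are not figures from the study, of why two sites give so little purchase on how ad delivery varies across the web:

```python
# Hypothetical, invented counts for illustration only. Each site's
# advertiser pool differs, so any measured male/female gap is a
# property of that site's ad inventory as much as of Google.
exec_ad_counts = {
    "times_of_india": {"male": 1500, "female": 300},
    "guardian":       {"male": 12,   "female": 10},
}

# One gap estimate per site: with only two sites sampled, we have just
# two observations of how that gap varies from site to site -- far too
# few to generalize to the web at large.
gaps = {site: c["male"] - c["female"] for site, c in exec_ad_counts.items()}
```

However many thousands of ads are collected within a site, the unit that matters for generalization is the site itself, and here there are only two of them.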

Both The Guardian and the Times of India rank highly among news sites, sitting within Alexa’s top 15 for the category. But among all web sites, The Guardian ranks 151st and the Times of India 131st. If nothing else, the analysis leaves out the single most visited site, and the central focus of the report: Google itself. Ideally, we would randomly select web sites in proportion to the total number of ads served by Google Ads, and we would probably want to focus on US sites, which would require a little more legwork.

Gender discrimination clearly exists in many facets of American society, including our economic structures. That isn’t up for debate. What is up for debate is just how this is reflected in online life. Given the serious problems in methodology, we shouldn’t be citing this report.


Will Rinehart

Senior Research Fellow | Center for Growth and Opportunity | @WillRinehart