Examining Google and Instances of Algorithmic Biases: The Ethical Pitfalls

Jordan Willis
Nov 16, 2023


In 2015, AdFisher, a tool built by researchers from Carnegie Mellon University and the International Computer Science Institute, investigated Google’s targeted ads on third-party websites (Simonite, 2015). The findings highlighted an emerging gender bias in Google’s ad-targeting algorithm and the discrimination it was reinforcing.

In the study, the algorithm behind Google’s targeted ads was more likely to show adverts for a high-paid executive job to male job seekers than to their female counterparts (Simonite, 2015). The study gained media attention and began to highlight the problems with algorithmic decision-making, and the concerns these raise for the ad ecosystem and society at large.

Since 2015, Google’s algorithmic biases have continued to affect our digital experiences. The following article examines the implications of ad targeting and raises awareness of how algorithmic biases are potentially harming advertising practices.

Understanding Algorithm Biases

One way to understand algorithmic bias is to look at how the data behind an algorithm is used in training. In machine learning, algorithms must be trained on specific data sets to identify what the correct output should be (Turner Lee, et al., 2019). From this training data, the algorithm can then make predictions about other people and what the correct outputs are for them, but biases can be generated from flawed, unrepresentative data or incomplete training (Turner Lee, et al., 2019). So, if the data an algorithm is trained on is inherently biased, the algorithm will emulate those biases.
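To make this concrete, here is a minimal, hypothetical sketch of how a skewed training set becomes a skewed rule. The data set and the "model" (a per-group majority vote) are deliberately toy-sized inventions for illustration; they are not how Google's systems work, but they show the mechanism Turner Lee et al. describe: the model faithfully learns whatever pattern the historical data contains, including its bias.

```python
from collections import Counter

# Hypothetical toy "training data": historical ad impressions in which,
# by past convention, executive-job ads went mostly to men. The labels
# encode that historical skew, not any real difference between groups.
training_data = [
    ("male", "executive_ad"), ("male", "executive_ad"),
    ("male", "executive_ad"), ("male", "generic_ad"),
    ("female", "generic_ad"), ("female", "generic_ad"),
    ("female", "generic_ad"), ("female", "executive_ad"),
]

def train_majority_model(data):
    """Learn, per group, the most frequent ad seen in the training data."""
    by_group = {}
    for group, ad in data:
        by_group.setdefault(group, Counter())[ad] += 1
    return {group: counts.most_common(1)[0][0]
            for group, counts in by_group.items()}

model = train_majority_model(training_data)
print(model["male"])    # "executive_ad" — the historical skew becomes the rule
print(model["female"])  # "generic_ad"
```

Nothing in the code is "prejudiced"; the discrimination lives entirely in the data, which is exactly why unrepresentative or incomplete training sets are so dangerous.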

Secondly, the controversy around algorithmic biases and their impact is worsened by a lack of transparency about the data. Companies like Google rely on algorithms to uphold their businesses, so keeping how they work secret is commercially important. As the ‘information society’ we live in becomes ever more integral to how businesses view us, our own private lives are being laid open, yet the companies that use our data uphold the secrecy of how their platforms operate (Broeders, 2016). Scholars such as Roselli et al. (2019) argue that even where the underlying data remains private, enough information should be made publicly available for the governance of that data to be examined.

Turning Pixels into Prejudice

As algorithms are increasingly used to automate ad targeting, there are concerns that online ads will continue to produce discriminatory outcomes. Lambrecht & Tucker’s (2019) study of Google ads promoting jobs in the Science, Technology, Engineering, and Math (STEM) fields shows that fewer women were exposed to these ads than men. Women have historically been underrepresented in STEM fields, and biased algorithms will only reinforce these harmful stereotypes.

Additionally, digital advertising expenditure in the UK has grown by 56% since the pandemic (Bold, 2023). So, as the digital world becomes more pivotal in how we view the real world, biased algorithms could change perceptions in the physical world as well. In Lambrecht & Tucker’s (2019) findings, the ads that were proven to be biased had nonetheless been approved and were in line with ‘employment discrimination’ laws, showing how policy may need to change to keep up with algorithmic biases. But this may not be a fully effective solution.

Adverse effects of AdSense

So, who or what decides where and when an ad will appear? In Google’s case, it is Google AdSense, the dynamic digital advertising tool that delivered ads to over 2 million partner websites in 2022 (Nouri, 2022). The dynamic nature of these ads allows a website to fill its ad space based on the viewer’s criteria and behavioural insights. While Brierley et al. (2018) note that individual targeting can be seen as nothing more than surveillance, the utopian ideal suggests individual viewers should see more meaningful adverts based on their personal preferences.
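The slot-filling idea above can be sketched in a few lines. This is a deliberately crude, hypothetical model of dynamic ad selection (every name here — `Ad`, `fill_ad_slot`, the interest sets — is invented for illustration, not part of any real AdSense API): candidate ads are scored by how well their targeting overlaps the viewer’s inferred interests, with bid as a tiebreaker.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of dynamic ad slot filling; all names are illustrative.
@dataclass
class Ad:
    name: str
    target_interests: set = field(default_factory=set)
    bid: float = 0.0

def fill_ad_slot(viewer_interests, candidate_ads):
    """Pick the ad whose targeting best matches the viewer's interests,
    breaking ties by bid — a crude stand-in for behavioural targeting."""
    def score(ad):
        overlap = len(viewer_interests & ad.target_interests)
        return (overlap, ad.bid)
    return max(candidate_ads, key=score)

ads = [
    Ad("running_shoes", {"fitness", "sport"}, bid=0.8),
    Ad("exec_job", {"careers", "finance"}, bid=1.2),
]
chosen = fill_ad_slot({"fitness", "careers", "finance"}, ads)
print(chosen.name)  # "exec_job": two interest matches beat one
```

Even this toy version shows where bias can enter: if the viewer’s inferred interests are themselves derived from biased behavioural data, the "most relevant" ad is simply the one the historical pattern predicts, surveillance dressed up as relevance.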

For businesses and sites that host digital advertising, individually targeted ads should increase customer trust and positively impact a site’s brand image, thanks to ever more meaningful content being targeted towards viewers. However, algorithmic biases could negatively affect that brand image too. Sites utilising harmful algorithms may be unintentionally promoting stereotypical gender norms or even alienating their audiences.

How do we Break the Mould?

In Sweeney’s (2013) study, an investigation into Google’s advertising technology exposed racial discrimination in the platform’s ad delivery. While highlighting the discriminatory nature of the ads, Sweeney (2013:35) also pointed out that this technology continues to expose racial bias, concluding that ad tech ‘can do more to thwart discriminatory effects and harmonize with societal norms’. However, where does the responsibility fall for ensuring Google’s algorithms deliver ads fairly, free of stereotyping?

Ten years on from Sweeney’s (2013) study, we are still seeing algorithmic bias within Google’s platforms. More recent studies by Shekhawat et al. (2019) and Prates et al. (2020) continue to highlight how widespread algorithmic biases enforce unfavourable preferences, from Google Ads to Google Translate. Advertisers using these platforms must be aware of the biases that may prevail in their targeting, alongside the impact these could have on their clients’ brand image or platform.

Unravelling Google’s Code

While researchers can continue to highlight the importance of transparency in how algorithms are constructed, more work is needed on addressing the algorithms themselves. Biases in Google’s algorithms have been reported in the literature over the past ten years, which raises the question of what impact they have had on society itself.

This article has examined Google’s algorithm, its biases, and what these mean for ad targeting and society at large. There is a need for increased transparency in how datasets are managed and how targeting operates. The academic research showcased here shows that these machine biases are still prevalent today; as we hopefully head towards a future of unbiased algorithms, the advertising industry must continue to critically evaluate the platforms we advertise on.

Bibliography

Bold, B., 2023. IAB: digital adspend smashes £26bn ceiling as market grows 56% since pandemic. Campaign Live, 26 April, pp. 1.

Available at: https://www.campaignlive.co.uk/article/iab-digital-adspend-smashes-26bn-ceiling-market-grows-56-pandemic/1820540 [Accessed November 2023].

Brierley, S., Hardy, J., Macrury, I. & Powell, H., 2018. The Advertising Handbook. 4th Edition. Abingdon: Taylor & Francis Group.

Broeders, D., 2016. The Secret in the Information Society. Philosophy & Technology, 29(1), pp. 293–305.

Lambrecht, A. & Tucker, C., 2019. Algorithmic Bias? An Empirical Study into Apparent Gender-Based Discrimination in the Display of STEM Career Ads. Management Science, 67(7), pp. 2966–2981.

Nouri, S., 2022. Google Ads By Numbers For 2022. Ads Runner, 5 March, pp. 1.

Available at: https://www.adsrunner.com/google-ads-by-numbers-for-2022/ [Accessed 5 November 2023].

Prates, M., Avelar, P. & Lamb, L. C., 2020. Assessing gender bias in machine translation: a case study with Google Translate. Neural Computing and Applications, Volume 32, pp. 1–33.

Roselli, D., Matthews, J. & Talagala, N., 2019. Managing Bias in AI. Companion Proceedings of The 2019 World Wide Web Conference, Volume 1, pp. 539–544.

Shekhawat, N., Chauhan, A. & Muthiah, S. B., 2019. Algorithmic Privacy and Gender Bias Issues in Google Ad Settings. Proceedings of the 10th ACM Conference on Web Science, pp. 281–285.

Simonite, T., 2015. Probing the Dark Side of Google’s Ad-Targeting System. MIT Technology Review, 6 July, pp. 1.

Available at: https://www.technologyreview.com/2015/07/06/110198/probing-the-dark-side-of-googles-ad-targeting-system/ [Accessed 10 November 2023].

Sweeney, L., 2013. Discrimination in online ad delivery. Communications of the ACM, 56(5), pp. 44–54.

Turner Lee, N., Resnick, P. & Barton, G., 2019. Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms. Brookings, 22 May, pp. 1.

Available at: https://www.brookings.edu/articles/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/ [Accessed 1 November 2023].
