Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising

Michelle Lam
Published in ACM CSCW · Oct 3, 2023

This blog post covers the paper “Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising” by Michelle S. Lam, Ayush Pandit, Colin H. Kalicki, Rachit Gupta, Poonam Sahoo, and Danaë Metaxa. The paper introduces the method of sociotechnical auditing, Intervenr (a system for conducting sociotechnical audits), and a two-week sociotechnical audit (N=244) investigating the efficacy of targeted online advertising. The work will be presented at the 26th ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW 2023).

Figure: (L) The sociotechnical audit method. Phase 1 (Observation: algorithm audit) varies user behavior to probe the algorithm; Phase 2 (Intervention: user audit) varies algorithm behavior to probe users. (C) The Intervenr system: auditors and paid participants interact through a web app and browser extension. (R) Case study of targeted advertising. Week 1: observe and analyze ad targeting; Week 2: perform the ad-swap intervention.
Sociotechnical Audits (STA). Method — Sociotechnical audits evaluate algorithmic systems through a sociotechnical lens, evaluating the technical system and its impacts on users as both influence each other. Intervenr System — We introduce a platform for deploying longitudinal, browser-based, user-centered sociotechnical audits with paid participants. Case Study — We conduct a two-week sociotechnical audit of targeted advertising (N=244).

Our current methods for evaluating and auditing algorithmic systems are powerful at assessing the technical components of a system. However, these audits stop short of a sociotechnical frame, which would also consider the substantial role of users as a dynamic part of the system. An algorithm audit of an ad targeting algorithm could surface skewed delivery to users of different races or genders, but it could not capture how users interpret and internalize the ads they receive en masse, how these ads shape their beliefs and behaviors, or how the targeting algorithm in turn would change in response to these shifts in user behavior. We need auditing methods that allow us to consider and measure systems at the sociotechnical level.

Before we dive into the details, here’s a quick summary of our paper:

  • Noting a gap in the lens of algorithm audits, we introduce the sociotechnical audit (STA) as a method to systematically audit not only the algorithmic components of a system, but also the human components: how users react and modify their behavior in response to algorithmic changes.
  • We instantiate the sociotechnical auditing method in a system called Intervenr for conducting sociotechnical audits by coordinating observation and in situ interventions on participants’ web browsers.
  • We demonstrate a two-week sociotechnical audit of targeted advertising (N=244) that investigates the core assumption that targeted advertising performs better on users. We find that targeted ads indeed perform better with users, but also that users begin to acclimate to different ads in only a week, casting doubt on the primacy of personalized ad targeting given the impact of repeated exposure.

Sociotechnical Auditing

Our work addresses this gap by proposing the concept of a sociotechnical audit (or STA). Formally, we define an STA as a two-part audit of a sociotechnical system that consists of both an algorithm audit and a user audit. We define an algorithm audit as an investigation that changes inputs to an algorithmic system (e.g., testing for a range of users or behaviors) and observes system outputs to infer properties of the system. Meanwhile, we define a user audit as an investigation that changes inputs to the user (e.g., different system outputs) and observes their effects to draw conclusions about users.

Figure: Sociotechnical auditing comprises an algorithm audit (varying inputs to the algorithm and observing its outputs) and a user audit (varying the algorithmic outputs shown to users and observing their responses).

The CSCW community has long championed a sociotechnical frame, but instantiating the kind of sociotechnical audit we describe is challenging. Just as algorithm audits build an understanding of technical components by probing them with varied inputs and observing outputs, sociotechnical audits must build an understanding of human components by exposing users to varied algorithmic behavior and observing the impact on user attitudes and behaviors.

Intervenr: A System for Sociotechnical Auditing

To address these challenges, we develop a system called Intervenr that allows researchers to conduct sociotechnical audits in the web browsers of consenting, compensated participants. Comprising a browser extension and web application, Intervenr is designed to perform sociotechnical audits in two phases.

Figure: Study flow. Onboard → Observational phase (baseline experience, N weeks; midpoint survey) → Intervention phase (intervention experience, N weeks; final survey) → Offboard.
Our sociotechnical audit study design. Participants first onboard to the study and then enter the Observational Phase. During this phase, the system captures their baseline experience and conducts a midpoint survey on that experience. Then, participants enter the Intervention Phase, in which the system enacts the intervention and conducts a final survey, after which participants are offboarded and compensated.

In the initial observational phase, Intervenr collects baseline observational data from a range of users to audit the technical component of the sociotechnical system. Then in the intervention phase, Intervenr enacts in situ interventions on participants’ everyday web browsing experience, emulating algorithmic modifications to audit the human component.
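Concretely, the two phases can be driven by a simple per-participant schedule that tells the browser extension which mode to run in on any given day. The sketch below is illustrative only (hypothetical names, not Intervenr's actual implementation), assuming one-week phases as in the case study:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Participant:
    id: str
    start: date  # first day of the observational phase

def current_phase(p: Participant, today: date, phase_days: int = 7) -> str:
    """Return which study phase a participant is in on a given day."""
    elapsed = (today - p.start).days
    if elapsed < 0:
        return "onboarding"
    if elapsed < phase_days:
        return "observation"   # extension only collects baseline ads
    if elapsed < 2 * phase_days:
        return "intervention"  # extension enacts the in situ modification
    return "offboarding"
```

Keying the extension's behavior off a single phase function like this keeps the observation and intervention logic cleanly separated while sharing one data-collection pipeline.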

Figure: Intervenr architecture. The web application (onboarding, extension-interfacing, participant, and auditor components) coordinates with the browser extension, which acts in ordinary browsing windows; both the extension and the web application read and write to the database, which feeds a data analysis pipeline.
The Intervenr System for sociotechnical audits. Web application — Implements interfaces to manage auditors and study participants. Coordinates with the browser extension to collect data and enact interventions. Browser extension — Acts during participants’ ordinary web browsing to collect media and user actions as well as perform in situ interventions on webpages. Database and data analysis pipeline — Stores collected media, survey responses, and post-processing results. Performs offline processing of the data collected in the audit.

Case Study: A Sociotechnical Audit of Targeted Advertising

To demonstrate the new insights afforded by sociotechnical audits, we deploy Intervenr in a case study of online advertising to answer a central question: Does ad targeting indeed work better for users? Given the opacity of ad platforms and ad targeting’s reliance on invasive data collection and inference practices, questions remain regarding how targeted ad content impacts users over time, and whether its costs are justified — questions that require a sociotechnical approach to answer.

Figure: Ad-swapping intervention. Week 1 (observation): Partner A and Partner B each see their own ads. Week 2 (intervention): all ads are swapped, so each partner sees only the ads originally targeted to the other.
Ad-swapping intervention. We first randomly assign participants a partner. In the observational phase, participants receive online ads as usual. In the intervention phase, we swap all ads between the partners so that Partner A will only see ads targeted to Partner B, and vice versa.

Study design. We pair a one-week observational study of all the ads users encounter in their web browser (an algorithm audit) with a one-week ablation-style intervention study that allows us to measure user responses to ads when we “break” targeting (a user audit). In the first week of our case study, we passively observe all ads delivered to participants. This traditional audit portion of our study allows us to measure canonical metrics like views and clicks, but also important dimensions at the locus of the user, like users’ interest and feeling of representation as they relate to ad targeting. In the second week, we randomly pair participants, swapping each participant’s ads with ads originally targeted to their partner. In addition to observing user behavior, we conduct participant surveys after each study phase that cover a subset of the ads collected; together, these produce both user-oriented metrics (ad interest and feeling of representation in ads) and advertiser-oriented metrics (ad views, clicks, and recognition).
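The random pairing and ad swap at the heart of this design can be sketched as follows. All names here are illustrative rather than Intervenr's actual code, assuming ads observed in week one are stored per participant:

```python
import random

def pair_participants(ids: list[str], seed: int = 0) -> dict[str, str]:
    """Randomly pair participants; each member of a pair is the other's swap partner."""
    ids = list(ids)
    random.Random(seed).shuffle(ids)
    partner: dict[str, str] = {}
    # An odd participant out (if any) is simply left unpaired here.
    for a, b in zip(ids[0::2], ids[1::2]):
        partner[a], partner[b] = b, a
    return partner

def intervention_ads(participant: str,
                     partner: dict[str, str],
                     observed_ads: dict[str, list[str]]) -> list[str]:
    """During the intervention phase, serve the ads observed for the swap partner."""
    return observed_ads[partner[participant]]
```

Because the pairing is symmetric, the swap is an ablation in both directions: each participant simultaneously loses their own targeting and receives someone else's.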

Findings. Over the two-week study, we collect over 500,000 advertising images targeted to our study participants. Overall, we find that participants’ own targeted ads outperform their swap partners’ ads on all measures throughout the study, supporting the premise of targeted advertising. However, we also observe that swap partners’ ads perform better with users at the close of the study (after only a week of exposure) than at the midpoint (before participants had seen their partners’ ads). This is evidence that participants acclimate to their swap partners’ ads, suggesting that much of the efficacy of ad targeting may be driven by repeated exposure rather than the intrinsic superiority of targeting.
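The acclimation signal reduces to a percent-change comparison of mean survey ratings between the midpoint and final surveys; a positive value for partner ads indicates acclimation. A minimal sketch of that comparison (function name and ratings are hypothetical, not the study's data):

```python
from statistics import mean

def percent_change(midpoint_ratings: list[float], final_ratings: list[float]) -> float:
    """Percent change in the mean rating from the midpoint to the final survey."""
    m = mean(midpoint_ratings)
    return 100.0 * (mean(final_ratings) - m) / m
```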

Figure: (Left) Ad interest and ad representativity by phase: participants’ own ads (observational phase) outperform swap partners’ ads (intervention phase). (Right) Percent change in both metrics for partner ads and self ads: swap partners’ ads perform better at the end of the study (after one week of exposure) than at the midpoint.

Takeaways

While an algorithm audit could reveal whether today’s existing targeting methods provide user benefit, a sociotechnical audit allows us to discover how that user benefit changes in response to alternative algorithmic methods. In particular, this approach reveals that user sentiment toward ads may be more malleable than we expect, and casts doubt on the necessity of hyper-personalized and privacy-invasive targeting methods.

By conducting audits that conceive of algorithmic systems as sociotechnical and investigate both their technical and human components, we can form a richer understanding of these systems in practice. Sociotechnical audits can aid us in proposing and validating alternative algorithm designs with an awareness of their impact on users and society.

Michelle S. Lam, Ayush Pandit, Colin H. Kalicki, Rachit Gupta, Poonam Sahoo, and Danaë Metaxa. 2023. Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising. Proc. ACM Hum.-Comput. Interact. 7, CSCW2, Article 360 (October 2023), 37 pages. https://doi.org/10.1145/3610209
