Now You See Me — But You Still Can’t Catch Me

Five Reasons “Transparency” Won’t Stop Algorithmic Discrimination


Recently, Facebook shut down a prominent ad transparency project by NYU researchers, providing a forceful reminder that there’s a difference between transparency and accountability when it comes to regulating technology companies. Digital platforms and algorithmic software vendors are all too happy to provide “transparency” as a public relations measure, so long as it doesn’t impact their bottom line. Lawmakers should apply this lesson more broadly as members of Congress and agencies in the Biden Administration increasingly turn their minds to the issue of algorithmic bias and the damage it does to historically marginalized groups. Discriminatory impacts extend throughout numerous areas of life, including health care, housing, financial services, hiring, workplace conditions, interactions with law enforcement, and electoral integrity.

How can lawmakers distinguish meaningful transparency obligations from what amounts to little more than “transparency theatre”? Expert scholars, lawyers, and advocates in the field have been pointing out for years that transparency has limited effectiveness in achieving algorithmic accountability, and have proposed potential solutions. One approach experts have advocated is to identify the specific limitations of transparency obligations in the context of algorithmic bias, then implement laws and policies that address those limitations directly, rather than using transparency as a poor proxy for accountability. This post highlights five reasons that focusing on transparency, without going further, will fail to achieve meaningful accountability on the part of the major platform companies and software vendors that benefit from facilitating algorithmic discrimination.

1. Running into the “Black Box” Problem

The inner workings of some types of algorithmic decision-making systems are incomprehensible to humans and unexplainable by design, such as in systems that rely on deep learning. In these cases, total transparency with respect to the inner technological workings of a particular algorithmic decision-making system may be outright impossible. This is the notorious “black box” problem: we know what goes in and what comes out, but not what happens in between or why any particular input gives any particular output. The system’s developers could not explain it even if they tried or wanted to; indeed, on one level, that is the entire point of using machine learning — to complete tasks and find patterns beyond human capability or understanding. Given that, regulators must take care not to let technology companies use the “black box” nature of their products to perpetuate a false mystique that relieves them of accountability. The inability to explain should be a burden borne by those introducing and benefiting from putting others in harm’s way, and not redistributed as a negative externality imposed onto historically marginalized groups who bear the brunt of discriminatory algorithms.

2. Promoting Visibility Bias

Even where the internal workings of an algorithmic decision-making system are “explainable”, undue emphasis on transparency as an accountability measure may lead to a form of visibility bias: systems that more easily lend themselves to transparency receive more scrutiny, while potentially more harmful systems that are harder to lay bare escape scrutiny, despite needing it the most.

One example of this is a phenomenon which content moderation expert Evelyn Douek quasi-facetiously calls “YouTube magic dust”: the video platform’s (and its owner’s, Google’s) mysterious ability to dodge the level of sustained criticism and detailed interrogation from the public, politicians, and regulators that continually dogs its peers such as Facebook and Twitter, despite contributing to the same set of harms. One likely reason? YouTube is “harder to track”, especially when problematic content can be buried “forty minutes into an hour-plus clip” as opposed to distributed as images or text detectable at a glance.

3. Reinforcing Technological Determinism

Focusing on transparency at the expense of real accountability may contribute to a narrative of inevitability: emphasizing transparency around something takes for granted that there must be something to be transparent about. Jumping to transparency obligations for a particular program thus risks bypassing higher-order questions that need to take precedence, such as: should the system or tool even exist? Law- and policy-makers must resist taking for granted that the answer is already “yes”, when that may reflect more irresponsible abdication than immutable reality.

For example, supporters and vendors of facial recognition software claim that it is possible to remove or mitigate bias in their technology. However, this ignores how the mere existence of facial recognition and other forms of police surveillance exacerbates racial injustice and systemic oppression. The solution then is not, as Stop LAPD Spying writes about the LAPD’s LASER program, “a more equitable application” of such technologies, because that would fail to “address institutionalized racism, which [these technologies are] but one manifestation of”. The solution would be banning facial recognition and predictive policing technologies altogether, whether or not they are already in use; as with other dangerous products, recalls should always be an option.

4. Offloading All the Work from Corporations onto People

Transparency obligations actually ask very little of technology companies at the end of the day: they simply have to provide information. This leaves actually doing something with that information — such as protecting civil rights or shutting down policies and features that actively exacerbate systemic discrimination — up to regulators, individual users, and the public. This offloading ignores the massive power imbalances and persistent information asymmetries between the companies and, well, everyone else. Ananny and Crawford summarize this notion as transparency encouraging a “neoliberal model of agency”, which “places a tremendous burden on individuals to seek out information about a system, to interpret that information, and determine its significance.” It also assumes that individual users have the time, resources, expertise, and bandwidth to accomplish those labor-intensive and unpaid tasks — in each and every area of life that algorithmic decision-making touches.

Moreover, companies may exploit transparency requirements to engage in “strategic opacity” — burying people and agencies in so much data and information that they become overwhelmed, defeating the whole purpose. Classic examples of this dynamic in action include digital platforms’ privacy policies, terms of service, and end user license agreements, which are notorious for the extent to which users do not read them and yet are often forced to “agree” anyway.

5. Distracting from or Replacing Substantive Action

Above all, transparency requirements risk near meaninglessness if they are not paired with measures ensuring that the government, vulnerable impacted groups, or both have the ability to change or reject a course of action in the face of disclosed information. Regulators must be empowered to shut down company ventures and obtain redress for harmful business practices, and people — especially historically marginalized people — must have the ability to exercise a right of refusal when it comes to being subjected to algorithmic sorting and decision-making.

For instance, many states currently do not require employers to inform workers or job candidates, or obtain their consent, if they are being monitored, surveilled, or subjected to algorithmic decision-making. Notice matters because it is a precondition to the ultimate objective: enabling informed consent and allowing workers and candidates to opt out if they wish. However, factors underlying socioeconomic inequality — such as poverty, lack of institutional power, and unfair labor practices — systematically constrain workers’ choices, forcing them to agree to work conditions under circumstances that do not constitute meaningful consent. If opting out is not possible even where transparency exists, then transparency risks becoming performative rather than an effective tool with which to hold power accountable.

None of the above is necessarily to oppose calls for transparency or efforts to impose transparency requirements. The point is that such initiatives must ensure transparency serves accountability rather than becoming an end in itself. Further, lawmakers and regulators must guard against Silicon Valley turning transparency into a red herring that siphons attention and energy away from substantive reforms that go to the heart of issues such as discriminatory advertising, disinformation campaigns, or online abuse. Otherwise, transparency may amount to little more than a kind of 21st-century confession for the tech industry, where the mere act of reporting one’s own wrongdoing serves as absolution in and of itself. However, vulnerable and historically marginalized communities suffering the harms of algorithmic discrimination deserve much more than to know that violative practices are occurring. Justice demands that they not be subjected to such discrimination and civil rights violations in the first place — no matter how transparent companies are about it.

Cynthia Khoo is an associate with the Center. You can follow her on Twitter.
