Algorithms’ Transparency Problem is Everyone’s Problem

Ben Winters
Data & Society: Points
5 min read · May 10, 2023


Without meaningful transparency, enforcement of any civil or consumer rights is nearly impossible.

Photo by Wilhelm Gunkel on Unsplash

Predictive algorithms are used to influence decisions about everything from how much bail you have to pay if arrested and booked, to what your interest rate will be if you use Klarna to buy headphones, to whether you’re likely to host a party in the Airbnb you’re booking. But you have no way of knowing exactly how the entities implementing those algorithms make these determinations.

Clearly, algorithms have a transparency problem. There’s often no way to know how a system is interpreting you — and little opportunity for recourse if you do figure it out. In that way, algorithmic systems categorize people while limiting their ability to be meaningfully seen or heard. Companies use exemptions to open government laws, trade secrets, and marketing to block disclosure or understanding of the factors used in each algorithm, the methods used to develop it, and the sources of data fed into a system. If done right, algorithmic transparency can deter thoughtless development and deployment, and empower harmed parties to enforce civil rights law. Without meaningful transparency, enforcement of any civil or consumer rights is nearly impossible.

Public, proactive, enforceable

Though it has been rightfully criticized as insufficient, transparency remains an essential step in righting the wrongs wrought by algorithmic systems. Projects like the AIAAIC Repository, which compiles hundreds of examples of algorithmic harms in the news, and Eticas’ Observatory of Algorithms with Social Impact, which organizes key details about known algorithms around the world, are making substantial achievements in transparency. And reports like Our Data Bodies and Data for Black Lives, as well as those by groups like the Algorithmic Justice League, help reveal the lived experiences of the victims of algorithmic harm. In limited circumstances, governments have endeavored to create registers of algorithms. But we have seen how such registers can be limited and incomplete: In Amsterdam, for example, the register includes just three algorithmic systems, and only their least contentious uses. To have real impact, transparency needs to be public, proactive, and enforceable. It needs to transcend the limits of trade secrecy and irresponsible procurement. This is all easier said than done.

Audits, impact assessments, and design evaluations have the potential to evaluate the risks of an algorithmic system, but they are not enough on their own. We have seen how, left to their own devices, some companies use audits as little more than a PR move. In 2021, for example, the AI-driven hiring company HireVue disclosed that it had undergone at least two audits by third-party organizations, but did not freely release those audits in full. (Eventually both were made public — but only with significant restrictions on publication, and with access requiring the disclosure of personal information.) In quick succession, the company announced it had undergone audits showing that its software “does not harbor bias” and that the “scientific foundation of HireVue assessments” was affirmed, then said it would stop including facial analysis in its standard offering. Still, key details about the algorithms that the company uses to make judgments in the hiring process are kept secret from applicants under evaluation.

Similarly, in 2012, the FTC ordered Facebook to undergo biennial independent third-party audits for twenty years. The results of these audits have never been made public, and in 2019, Facebook was found to have violated the consent order. The year before, in 2018, the company had agreed to perform a civil rights audit after members of Congress and over 100 civil rights organizations pressured it to do so. But the subsequent auditor’s report explained that Facebook did not provide access to sufficient information to meaningfully assess its civil rights impact. Even with limited access, though, the auditors grew “concerned that [any] gains could be obscured by the vexing and heartbreaking decisions Facebook has made that represent significant setbacks for civil rights.”

Impact assessments and design evaluations can be used to collect information about what data is used in a system and how the system is built, and to evaluate its effectiveness. While audits analyze the performance of a system against certain defined metrics, an impact assessment focuses more on how the system is used and how it interacts with other entities. Requirements to use them appear in proposed legislation like the federal American Data Privacy and Protection Act and the Algorithmic Accountability Act (pertaining to the private use of AI), in state bills in Washington and California, and in federal executive orders as well as regulations already in force in other countries, including Canada (in relation to the public use of AI). When compared to audits, relatively few assessments have been required or made public. In recognition of these tools’ unsettled status, the National Telecommunications and Information Administration recently released a request for information about “AI Assurance” mechanisms, with an eye toward issuing a report with recommendations on best practices.

Toward meaningful remediation of algorithmic harms

At a minimum, effective algorithmic transparency will require regular independent audits that operate from an understanding of the purpose and proposed use of the system in question, including what decisions it will make or support. It will require establishing a system’s intended benefits, its logic, its capabilities (including those outside the scope of its appropriate use), its data inputs, and how that data is treated. It will require conducting validation studies and audits of accuracy, bias, and civil rights implications. And it is not enough for companies to submit this information to regulators: It must be available to the public, with consequences for incomplete or incorrect disclosures.

For algorithms used in the public sector, the use of taxpayer money and the power imbalance between the state and the individual demand strict and proactive disclosures — as well as proactive proof that a system has a justified purpose, can do what it says it can do, and is not discriminatory. For those in the private sector, clear and verifiable rules against unfair trade practices must be articulated, protections must be given to individuals, similar proactive proof should be required, and unverifiable or discriminatory algorithms must be banned. Governments and the corporations that profit from the development and sale of these algorithms must bear the responsibility of giving people the tools they need to understand how algorithms affect them, and the information they need to protect themselves. While certain agencies, nonprofits, and individual researchers are making progress in this area, they should not bear the brunt of guesswork, or be held responsible for creating accountability mechanisms in the absence of substantial infrastructure.

As algorithmic harm becomes better publicized and understood, and as regulations begin to grant long-needed consumer rights, mandated meaningful algorithmic transparency is a necessary step — but it is only the first step. There must be constant pressure on legislators to enact laws with privacy and algorithmic protections, and on enforcers to protect consumers against rampant unfair and deceptive practices. The human decision-makers who are essential to the process of inflicting algorithmic harm must be known, and they must disclose the decisions they are making. Because it is ultimately people who are responsible for algorithms’ transparency problem, it will be people who are key to resolving it.


Ben Winters

Senior Counsel at the Electronic Privacy Information Center | Instructor at UDC Law School | newsletter at algoharm.org