Testifying for the DC Stop Discrimination by Algorithms Act

Algorithmic discrimination is building a more deeply stratified society — while concealing the ghost of systemic oppression in the machine.

Image: Zoom meeting with three video windows: Chairperson Robert White in the upper left, a testimony countdown timer in the upper right, and Senior Associate Cynthia Khoo centered underneath.

On September 22, 2022, Senior Associate Cynthia Khoo testified on behalf of the Center on Privacy & Technology at Georgetown Law in support of the Stop Discrimination by Algorithms Act (Bill B24–558, the SDAA) at a hearing before the Committee on Government Operations and Facilities of the Council of the District of Columbia (DC Council). The Center was one of several civil rights, consumer protection, and privacy and technology policy organizations to testify in support of the SDAA.

The SDAA was introduced in December 2021 as a necessary piece of legislation to ensure that civil rights law keeps pace with the threats that algorithmic discrimination poses to historically marginalized groups. The Center contributed to the development of this bill, in partnership with Georgetown Law’s Communications and Technology Law Clinic (CTLC) led by Professor Laura Moy, Color Of Change, and the Office of the Attorney General for the District of Columbia (OAG DC).

Read the full oral testimony below:

Good afternoon, Chairperson White and Members of the Committee,

My name is Cynthia Khoo and I’m a Senior Associate at the Center on Privacy and Technology at Georgetown Law. The Center is a law and research think tank focused on the privacy rights of historically marginalized groups. This includes the civil rights implications of commercial data practices, such as algorithmic decision-making.

I’m testifying in support of Bill B24–558, the Stop Discrimination by Algorithms Act, or the SDAA. We respectfully urge the committee, and eventually the full DC Council, to pass this bill. I’ll briefly make three key points now, and will elaborate on them in written testimony to follow.

My first point is simple: Banning algorithmic discrimination is the right thing to do. You may hear objections to this bill from industry, such as that the obligations are a lot of work or will cost them money. We acknowledge that’s the case, but we must also recognize that the right thing is often not the easy thing, nor the cheap thing. Justice demands that those who profit from deploying algorithmic decision-making tools into the world do the hard work to ensure that those profits are not reaped at the expense of vulnerable communities and their civil rights. This legislation lays the groundwork to ensure that basic responsibility is fulfilled.

My second point goes to the heart of this bill: algorithmic discrimination is different from traditional discrimination. To illustrate, take a high school or college graduate. They’re moving, looking for work, applying for further education. But the landlord declines their rental application because the tenant screening algorithm matched a racially biased arrest record to the wrong name. They miss out on a perfect job opportunity because the targeted-ads algorithm wrongly assumed people of their age and gender wouldn’t be qualified. The college rejects them because the video interview algorithm scored their personality using indicators that misinterpret their disability as “negative behavior.”

In all these cases, algorithmic decision-making doesn’t just discriminate based on who you are, but on who the algorithm predicts you to be, even if that so-called “prediction” is just a wildly inaccurate guess. Worse, these systems automate that discrimination and impact people’s futures without them even knowing. Companies running automated decision algorithms silently in the background are building a more deeply stratified society than ever. To add insult to injury, all of this hidden discrimination is dressed up as state-of-the-art scientific objectivity, concealing the ghost of systemic oppression in the machine. The SDAA bridges the gap between current human rights law and these new dangers that algorithmic discrimination presents.

Third and last: I encourage you to question the idea of “good actors” and “bad actors” when it comes to algorithmic discrimination. You may hear these terms, along with critiques of the bill on the grounds that it penalizes companies for unintentional discrimination. However, existing law already prohibits companies from discriminating unintentionally. Intent is irrelevant. This is well established. What matters is the discrimination itself, regardless of whether an organization directly discriminates against someone to their face or discriminates against them through automated software.

These software tools are part of broader sociotechnical systems, where the development and deployment of a technology is inseparable from the inequity in its surrounding context, and inseparable from the unjust values it often upholds. The ubiquitous sale and use of personal data in a society where systemic oppression persists makes discriminatory bias almost unavoidable in many algorithmic decision-making systems. This is why the solution must be to ban the practice where it affects survival needs and important life opportunities. You can do this by passing the SDAA.

I greatly appreciate the committee’s attention to this critical issue, and thank you for the opportunity to present this testimony.
