It’s not just the tech sector that isn’t representative — it’s the people making the laws, too

Yesterday the House Committee on Science, Space, and Technology held a hearing on the ethical and societal implications of artificial intelligence (AI). Expert panelists, including our friends Joy Buolamwini and Meredith Whittaker, consistently highlighted the importance of fostering a technology sector that is diverse and representative, so that technology like AI is designed in ways that do not disproportionately harm women and minorities.

But strikingly, the committee itself — which is tasked with addressing policy related to federal technology and science research — is overwhelmingly white and male. Of the 39 members of the committee, only 11 are women — a paltry 28%. Among the 17 Republican members, only two are women — and one of them is the non-voting Resident Commissioner of Puerto Rico. The stats are even worse for non-white members of the committee, who are hardly represented at all.

Diversity is needed not just in the design stages of AI tools; it is also needed in the design stages of AI regulation. A legislative body that lacks the voices of the people most impacted by emerging technologies will inevitably have blind spots about those technologies' negative effects.

Technology, particularly the automated systems that underlie AI, is built by humans who encode their own values into its design and implementation. Humans choose the data on which to train and evaluate their tools, and consequently who is included — or excluded — by those tools. Humans choose how to deploy technology, and consequently who faces its negative effects. If the humans making these choices don’t reflect the diversity of our world, the technology can end up benefiting only those designing it.

A lack of diversity in the technology sector, and a lack of attention to the disproportionate impact of technologies, results in face recognition systems that misidentify black people at disproportionately high rates, healthcare algorithms that recommend more resources for white patients than for black patients, and recruiting algorithms that devalue resumes that list all-women’s colleges. It results in algorithmic bias and discrimination, as tools trained on biased data automate and perpetuate historical injustices.

The statistics on diversity in tech are not encouraging. Women make up only 24.4% of the computer science workforce, with median salaries at 66% of what their male counterparts earn. At Google, only 2.5% of full-time workers are black, and 3.6% are Latinx; Microsoft reports having 4% black workers and 6% Latinx workers. And the numbers only get worse the higher up in management you go. But it’s not just the tech sector that isn’t representative — it’s the people who regulate it as well.

The current Congress is the most racially and ethnically diverse it has ever been, with 22% of members identifying as minorities. But it’s clear from yesterday’s hearing that those in charge of technology policy have not caught up. Of course, representation is not a panacea. Concrete protections are needed for people who are impacted by emerging technologies. But any policy discussion about the ethical and societal implications of AI must start with a diverse and representative group of voices.

Jameson Spivack

Associate, Center on Privacy & Technology at Georgetown Law. Focusing on the policy and ethics of AI and emerging technologies. Hoya + Terp.