Looking for Race in Tech Company Ethics

Identifying tensions where race and tech ethics intersect

Jacob Metcalf
Data & Society: Points
Sep 22, 2020 · 10 min read


By Data & Society Researchers Jacob Metcalf and Emanuel Moss

This blog post expands on Metcalf and Moss’s report Ethics Owners: A New Model of Organizational Responsibility in Data-Driven Technology Companies. At the bottom, you’ll find a resource list for those interested in diving more into the intersection of race, ethics, and technology.

Graphic by Yichi Liu

In Silicon Valley, a curious job is increasingly common within tech companies: the “ethics owner,” who is responsible for designing organizational practices to instill ethics across the company. In our report Ethics Owners: A New Model of Organizational Responsibility in Data-Driven Technology Companies, we discuss the many challenges ethics owners face in “doing ethics.”

We found that those who are closest to the harms produced by tech are often furthest from the source of power that decides solutions to those harms. Two key tensions emerge between racial justice and ethics work at technology companies: first, technology companies are not organized in a way that asks the right questions about the harmful impacts of their technologies, particularly on Black, Indigenous, and other communities of color; and second, even when an ethics owner does speak up about harmful impacts, they run up against a toxic work culture that isn’t receptive to or supportive of their concerns.

…those who are closest to the harms produced by tech are often furthest from the source of power that decides solutions to those harms.

It’s important to acknowledge that the ethics owners we interviewed are generally well-paid graduates of elite universities, people who exist within the already fraught gender dynamics of Silicon Valley, and who tend to be (with few exceptions) white. The racial homogeneity of Silicon Valley ethics owners is a problem because many ethics techniques in Silicon Valley involve technologists sitting together in a room “thinking really hard” about how the lives of others, lives that seldom resemble those in the room, might be affected by the products and services Silicon Valley builds. Our report suggests that ethics owners engage in outreach to advocacy groups in order to bring the experiences of those outside of tech companies into these decision-making rooms, and encourages sharing lessons learned across the industry. But these measures cannot replace true inclusion, particularly of Black, Indigenous, and people of color (BIPOC) ethics owners. Ultimately, companies must change who is already in such rooms, so that those sitting around tables “thinking hard” about the ethical implications of products and services include people who have experienced the world in diverse ways.

Demographic representation on its own does not solve the set of problems hindering the practice of ethics in tech companies.

Demographic representation on its own does not solve the set of problems hindering the practice of ethics in tech companies. As the #BlackLivesMatter movement reminds us, backed up by years of research and activism, diversity and inclusion initiatives have been insufficient to disrupt white male hegemony inside the tech industry. This is partly a matter of how organizations are structured to ask and answer questions about the harms their products and services might produce, but it is also about how power relations and racial hierarchies persist, even in the flat organizational structures Silicon Valley has embraced. Since conducting our research, we have reflected on the outcry for racial justice and the sweeping changes that have occurred across the globe in response. We find that there are two key tensions where race and tech ethics intersect through organizational practices — practices that shape the contexts in which ethics owners are expected to do their work:

Limitations in Organizational Structure

First, while outside critics persuasively argue that some technologies are inherently harmful and will never be equitable, tech companies themselves are not currently organized in ways that allow them to even ask, let alone answer, the question of whether the products and services they build are harmful. Racist harms (such as higher error rates for predictions made about BIPOC, offensive search and ad results associated with BIPOC topics, and the incorporation of BIPOC into databases that enable surveillance and incarceration) are facilitated and compounded by a lack of organizational structures for governing ethics. Poor organizational decisions about where, when, and how to do “ethics work” can allow drastic harms to escape scrutiny because there might not even be a place to discuss such harms within a company.

Given the way tech companies are currently configured, there is rarely a functioning internal mechanism by which the right questions could even be posed, let alone answered, to prevent such harms.

Consider Amazon’s response to the Gender Shades studies as an example of a glaring organizational ethics failure in the product management process, one that rendered it impossible to even know whether the company’s product was causing racist harms. In 2018, Joy Buolamwini and Timnit Gebru demonstrated that facial recognition APIs offered by major tech companies were markedly less accurate for women’s faces and for faces with darker skin tones, and least accurate for women with the darkest skin tones. In a second study, Buolamwini and Inioluwa Deborah Raji demonstrated that Amazon’s Rekognition service lagged significantly behind its competitors in accuracy for women with darker skin types. This posed a serious problem because Amazon was already marketing and licensing the Rekognition API to law enforcement agencies, where the high false-positive rate for Amazon’s tool could create an unjustly high risk of false arrests for Black people.

Amazon’s response to the Gender Shades authors revealed much about the limits of its product-management and ethics practices. The company focused on scientific methods and statistical criteria rather than on the consequences of racial bias in algorithmic systems. Matt Wood, general manager of AI at Amazon Web Services, argued that Buolamwini and Raji were mistaken because they had tested the “default confidence” threshold rather than the “maximum confidence” threshold that Amazon advises law enforcement agencies to use. However, subsequent reporting by Gizmodo demonstrated that the only publicly acknowledged law enforcement client of Rekognition was unaware of the importance of confidence threshold settings and had not received training from Amazon on using the tool correctly. A recent case from Detroit demonstrated the consequences of poorly calibrated practices and policies for setting and understanding confidence thresholds in algorithmic policing. The Raji and Buolamwini study, moreover, came a full six months after an ACLU-led audit had embarrassed Amazon by using Rekognition’s recommended settings to misidentify members of the Congressional Black Caucus as “wanted criminals.”

In response, Amazon offered the same defense about confidence thresholds. In other words, Amazon had no idea how its most prominent customers were using a potentially dangerous product in a fraught context, even though it knew that confidence thresholds were an issue. In statements to members of Congress, Amazon made clear that this ignorance was intentional, claiming it could not audit client uses of its service because of “privacy.” To be clear, no privacy law forbids safety training and monitoring in such a contractual relationship, nor does any norm of enterprise software contracting preclude regular safety auditing of an early-stage product such as Rekognition. Only in the past month, in response to the #BlackLivesMatter movement and this prior critical scholarship, did Amazon suspend law enforcement use of Rekognition.
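The threshold dispute can be made concrete with a minimal sketch. The scores and names below are invented for illustration, and the function is a hypothetical stand-in, not the Rekognition API: it simply shows why the choice between a default confidence threshold and a much stricter one changes which candidate “matches” a system surfaces.

```python
# Hypothetical illustration of why confidence-threshold settings matter.
# All data is invented; this is not Amazon's API or real match output.

def filter_matches(matches, threshold):
    """Keep only candidate matches at or above the confidence threshold."""
    return [(name, score) for name, score in matches if score >= threshold]

# Suppose a face search returns these candidate matches (invented scores):
candidates = [("Person A", 99.2), ("Person B", 85.1), ("Person C", 81.4)]

# At a permissive default threshold, all three candidates survive,
# including two weaker matches that could be false positives:
defaults = filter_matches(candidates, 80.0)

# At a strict 99% threshold, only the strongest match survives:
strict = filter_matches(candidates, 99.0)

print(defaults)  # three candidates pass
print(strict)    # only the strongest candidate passes
```

The point of the sketch is organizational, not technical: if a vendor knows this single parameter separates a defensible search from a dragnet of weak matches, then not training clients on it, and not auditing how they set it, is a governance failure rather than a user error.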

The much-needed discussions of culture, diversity, equity, and inclusion in tech must be complemented by consideration of ethics governance as an organizational capacity.

Of course, no company should build biometric surveillance tools designed to racially profile in the first place. No amount of ethics governance is going to make such technology equitable. What we hope to add to the ethical critiques of such technologies is an organizational focus: given the way tech companies are currently configured, there is rarely a functioning internal mechanism by which the right questions could even be posed, let alone answered, to prevent such harms. Even if a company is interested in asking whether or how their products might produce harms for society, the configuration of job roles, marketing, contracting, and client management processes of most companies precludes them from interrogating very real and urgent ethical concerns about data technologies because such concerns lie outside existing job responsibilities. The much-needed discussions of culture, diversity, equity, and inclusion in tech must be complemented by consideration of ethics governance as an organizational capacity.

Toxic Tech Company Cultures

Second, even when ethics owners attempt to address the harmful consequences of technology, they are still subject to the existing toxic cultures of technology companies. In contrast to the Rekognition case, this is not about engineering or designing the features of a technical system; rather, it is about the unsustainable costs those cultures impose disproportionately on BIPOC tech workers, and how such burdens, by driving away many of these workers, limit a company’s capacity to see the harmful consequences of its products.

In July 2020, Ifeoma Ozoma, a prominent and highly effective Black tech policy expert, publicly announced that she had left her position earlier this spring as public policy and social impact manager at Pinterest because she had faced racial harassment (along with co-worker Aerica Banks). Her announcement followed Pinterest’s public relations tweet supporting #BlackLivesMatter protests. The company had credited Ozoma for many of its widely publicized policy changes in recent years, particularly for removing anti-vaccine disinformation and down-ranking or removing plantation wedding boards — precisely the type of cross-functional ethics coordination we describe as “owning ethics.” However, those successes were met with racial harassment in the form of being publicly doxxed by a white co-worker and receiving negative marks on a performance review for not providing arguments in favor of promoting plantations. Ozoma also pointed to the abuse of a “flat” corporate structure as an excuse for underpaying and undervaluing Black employees and their perspectives. For BIPOC employees, these organizational structures can introduce additional avenues for having one’s expertise challenged without granting legible forms of authority to rebut such challenges.

Organizational structures…need to be interrogated for extra burdens on BIPOC employees, particularly in contexts that overlap with ethics.

As our report argues, navigating flattened hierarchies and distributed responsibilities is a defining feature of “ethics owner” roles; ethics owners routinely engage with nearly every division of a company, no matter the reporting structure. Organizational structures, and many other organizational practices inside tech companies, need to be interrogated for extra burdens on BIPOC employees, particularly in contexts that overlap with ethics. Addressing the well-documented racist culture of the tech industry must be a core component of building organizational capacity to “do ethics” because that culture continues to drive out expert voices that enable tech companies to see, understand, and mitigate potential harms. Thinking hard in a room about other people’s lives is not going to cut it as a method for doing ethics, especially if that room is configured to exclude expertise about those lives.

Technology plays a role in shaping and reconfiguring racial identity and the distribution of power along racial lines…

Technology plays a role in shaping and reconfiguring racial identity and the distribution of power along racial lines — even beyond workplace practices inside tech companies. This is particularly true of the role technology has long played in anti-Blackness through the development of tools and technology that surveil, discipline, and police BIPOC communities and individuals to perpetuate harmful racial hierarchies. Without an ability to perceive, understand, and respond to how its products still participate in these harms, Silicon Valley will not be able to reduce them. Many of the products Silicon Valley builds are fundamentally incompatible with a society that values Black lives. And any full approach to ‘tech ethics’ inside Silicon Valley must reckon with this legacy of anti-Blackness and actively combat it in the future.

Resource List

Below, we have compiled a dynamic, growing set of resources for those looking to dive further into ethics in the tech industry, highlighting resources that address the relationship between race, technology, and societal harms:

Beyond Diversity & Inclusion

Race and Technology Scholarship

Advocacy, Outreach and Coalitions

Case Studies

Informal Meetings

Jacob Metcalf is a researcher at Data & Society, where he is a member of the AI on the Ground Initiative, and works on an NSF-funded multisite project, Pervasive Data Ethics for Computational Research (PERVADE).

Emanuel Moss is a researcher for the AI on the Ground Initiative at Data & Society and a research assistant on the Pervasive Data Ethics for Computational Research (PERVADE) project.
