Too Big a Word

What does it mean to do “ethics” in the technology industry? We found four overlapping meanings.

Emanuel Moss
Data & Society: Points
10 min read · Apr 29, 2020


By Emanuel Moss and Jacob Metcalf, Data & Society Researchers


In 1976, Welsh theorist and critic Raymond Williams published a thin volume titled Keywords to discuss words whose “meanings seemed … inextricably bound up with the problems it was being used to discuss.”

A broad range of crises has convulsed the tech industry in recent years: from the Snowden revelations to racially biased algorithms, from Cambridge Analytica to the Google Walkout, and from ICE's contracts with data brokers to censored search engines. The keyword inextricably bound up with discussions of these problems has been ethics. It is a concept around which power is contested: who gets to decide what ethics is will determine much about what kinds of interventions technology can make in all of our lives, including who benefits, who is protected, and who is made vulnerable.

Today, amidst the COVID-19 crisis, the tech industry is beginning to reconsider its role in what is likely to be a dramatically shifted society and economy. These shifts highlight the stakes of private companies that rely on data to operate at the scale of entire global supply chains, particularly when those companies provide public goods, whether that be the information infrastructure of online media or the means by which public health emergencies are tracked. Such stakes only raise the profile of the already outstanding questions about what it actually means to “do” technology ethically: What kinds of roles are necessary within tech companies? What does it mean to turn corporate values and public values into tangible outcomes through rigorous processes? What kinds of regulatory approaches might be necessary beyond these efforts?

The ethics of technology looks very different outside the tech industry than it does on the inside…

The question of what ethics is has very different answers depending on where one sits. The ethics of technology looks very different outside the tech industry than it does on the inside, and not simply because of conflicting principles or values between techies and their critics. Within the tech industry, a new type of job role has developed: “ethics owners,” who are tasked with overseeing the portfolio of ethical concerns across their companies. Ethics owners sit squarely at the hub where different meanings of ethics, and contests over which meanings might prevail, come into contact.

Ethics owners often work between their company’s legal team and product teams to ensure that the design of products and services does not harm users or expose the company to the risk of a lawsuit or regulatory action. They serve on or coordinate the activities of ethical review boards that are tasked with ensuring that products and services comport with public statements of their company’s values or principles. They also work to ensure the safety of the things their company makes, developing procedures that can ensure products and services perform as expected by their designers. At the same time, they are often tasked with being responsive to the public, and to the employees they work alongside, about the societal impacts of the things their company builds — both locally and globally. This expansive set of concerns is hard for any one person to wrap their head around, particularly when there are passionate constituencies offering their own critiques of ethics owners’ work from multiple angles.


The adjective ‘ethical’ can describe an outcome, a process, or a set of values, each of which has different connotations from a technologist’s point of view. An outcome might consist of a cloud services company declining a contract with an abusive government or agency, or equalizing error rates across protected classes in an automated hiring system. A rigorously ethical process can look like a committee of stakeholders convening to develop a plan for addressing potential harms of a new product, or a review team analyzing an engineering requirements document. Values describe the states of affairs that humans (or any moral being) desire, such as beauty, justice, or wealth. Ethical values are those values that are most pertinent to assessing the moral correctness of the decisions we face; in tech contexts, values such as transparency, equity, fairness, and privacy are often pertinent. Ethical values describe the end state to which we are steering as we make carefully considered choices.

…It is common that an ethical process (i.e., appropriately rigorous) could lead to unjust or harmful outcomes even when both the process and outcome are judged according to the same ethical values…

Perhaps confusingly, it is common that an ethical process (i.e., appropriately rigorous) could lead to unjust or harmful outcomes even when both the process and outcome are judged according to the same ethical values (i.e., what a just world looks like). For example, a social media company may wish to achieve the ethical value of transparency in its content moderation policy, and thereby establish an ethical process by which moderation decisions are rigorously reviewed, but still result in outcomes that fall short when those moderation decisions are applied unevenly and without regard to context.

We suggest that in the best case, ethics inside of technology companies consists of using robust and well-managed ethical processes to align collaboratively determined ethical outcomes with both the organization’s values and commonly held ethical values. This is not an easy task, even when there are clear guidelines and best practices for producing straightforward products or services. As the industry has moved haltingly toward practices that align values and processes with outcomes, however, it has only increased the number of meanings ‘ethical’ might hold, as processes and procedures that fit inside of corporate structures refract through the multiple meanings of ‘ethics’ writ large.

Further complicating the situation are the ways in which these meanings of ethics can interact with the different layers inside and outside a tech company. Inside a company, translating values through process in order to get measurable outcomes is a significant technical and organizational lift, all of which, in part or in whole, might be called “ethics.” That effort is translated through an overlapping set of concerns about the legitimacy and profitability of the tech company itself — concerns which have historically been read through the lenses of legal and reputational risk, but are now increasingly understood internally through this new framing of “ethics.”

But outside tech companies, ethics looks very different: the legitimacy of tech companies, their business models, and their socio-political power are read through the lens of moral justice. As our colleague danah boyd recently observed:

“How does a company have values beyond profit for shareholders? Many of the folks on the outside aren’t even talking about trade-offs and values. They want justice. Ethics tends to encompass all of this … from the world of legal risk, all the way to justice. As a result, the people on the outside are not at all satisfied by the ideas we’re going to get from compliance… We’re going to have such contested challenges around this because we don’t know how to articulate values within this form of capitalism.”

From the outside, ethics centers the question of how to hold these organizations with unprecedented power over society accountable for the consequences of their decisions.


To make sense of this polysemy, we have identified four overlapping meanings of the word “ethics” among those who use the term most forcefully:

  • Moral justice
  • Corporate values
  • Legal risk
  • Compliance

Questions of moral justice, described by danah boyd above as beyond the realm of possible concern for corporate officers, appear in debates about the “right” thing for a tech company to do. These debates often involve public conversations that make claims upon both workers and corporate behavior, as well as society as a whole. Is a gig economy business model morally justifiable, even if it provides a rationale for disinvestment in much-needed public services? What about under pandemic conditions, when essential gig economy supply chain workers are not even afforded sick leave? Should data gathered under the guise of service improvement or security ever be repurposed for other revenue-generating activities? Although these are the types of questions that critics and the public in general are most concerned about, they are also questions that the internal resources of a tech company — even those tasked with “ethics” — are the least well-equipped to consider and respond to.

Consideration of corporate values can be the basis of well-reasoned tradeoffs between possible design decisions in a tech company. They shape the terrain on which values-laden ethical questions can be addressed, and are often grounded by statements of principles and/or mission and vision statements. For example, should Google accept a contract from the Department of Defense that contributes to the automation of warfare? The answer to that question as determined by Google inevitably requires checking the consequences against the non-economic factors the organization values. As a form of ethics, values guide complex tradeoffs between technical, policy, fiscal, and marketing goals. Corporate values also help develop a coherent organizational culture (and are contested as cultural conflicts within organizations). Product and engineering teams across Silicon Valley are experimenting and reorganizing to increase their ability to render organizational values as specific design decisions, which is a small victory for those who have pushed for that goal. But it is a different goal from that of moral justice, which seeks to hold tech companies accountable for their enormous leverage over how our lives are lived and our society is organized.

As a form of ethics, values guide complex tradeoffs between technical, policy, fiscal, and marketing goals.

Ethics is also intimately bound up with legal risk. Such risks arise from what the law has to say about what is and is not permissible, but for tech development, it is not sufficient to simply say that what is legal is ethical and what is illegal is unethical. From the perspective of data science, for example, a straightforward way of correcting algorithmic bias might be to treat a group that is disadvantaged by an algorithm specially, to make up for the statistical difference. But it also might be illegal for one racial group to receive purposely different treatment on the basis of their race, regardless of why and how the treatment differs. Inside tech companies, there are robust and long-standing practices for minimizing legal risk — spanning contracts, product liability, human resources, and other domains. But legal reasoning around ethical harms is somewhat novel inside these companies, and ethics owners tasked with a wide portfolio of ethical concerns must also interface with corporate legal teams about the legal risks that intersect with their work. Legal teams are particularly focused on those risks that a company must address in order to stay within the boundary of what is permissible or required by law, or that might expose the company to a legal claim of wrongdoing or negligence.

The day-to-day aspects of ethical process in a tech company — such as technical documentation, testing, and review — fall under the heading of compliance. Despite the fact that many people inside and outside of the tech industry view ethics as something more than, or beyond the scope of, compliance, the mechanisms of compliance are the chief form by which an organization distributes responsibility horizontally (everyone must participate) and establishes accountability vertically (someone must decide). This includes responsibility for and accountability to ethical values. But compliance is typically, pragmatically, concerned with ensuring all employees follow guidelines designed to minimize exposure to the legal risks associated with regulatory and administrative requirements external to the firm. And so ethics inside of tech companies also looks like a robust compliance process for product teams. A robust compliance process will recognize that each engineering choice is simultaneously several different kinds of ethical decision, and will provide a framework for bringing differently interested actors into such choices at appropriate times.

Unpacking “ethics” as a keyword is a crucial task for those who work toward positive outcomes inside the technology industry.

Compliance, legal risk, corporate values, and moral justice are all sometimes described as “ethics”; at times they work in parallel to pursue common goals, and at other times they come into tension with each other. In the best-case scenario, those goals are complementary. In the worst case, the procedural and internal-facing nature of ethical technology development, and the marketing and public engagement around these capabilities, can operate as “ethics-washing” and obscure the more fundamental societal changes that are needed. However, we argue that the proceduralism and internalism of these processes do not mean that they aren’t valuable. Indeed, the bold goals of those using the language of moral justice are likely impossible without organizations using proceduralism and internalism to render those goals as visible, achievable, and accountable.

Unpacking “ethics” as a keyword is a crucial task for those who work toward positive outcomes inside the technology industry, those who work outside the industry to hold it accountable, and those who use technology as an intimate part of their daily lives. By acknowledging the broad set of practices that are bound up with the doing of ethics, as well as the ethical implications of practices that do not often fall under that term, it becomes possible to see the range of ethics practices in the tech industry, and the different arrangement of actors that are brought to bear by these practices. It also becomes more possible to situate accolades, critiques, and calls to action more appropriately — to focus energy on substantive, pragmatic outcomes without losing our way amidst a terminological thicket.

Emanuel Moss and Jacob Metcalf are both researchers on the AI on the Ground team at Data & Society, and are co-authors of the forthcoming report, “Ethics Owners: A New Model of Organizational Responsibility in Data-Driven Technology Companies.”

