Judith Beheading Holofernes (Giuditta che decapita Oloferne), Artemisia Gentileschi, 1620–1621.

This week, both Apple and Google have come under fire for continuing to allow sales in their app stores of a Saudi government app which enables men to track and manage the women under their guardianship. The app effectively makes it easier for men to monitor and restrict the activities of their female relatives, including constraining their ability to travel freely. At the same time, the humanitarian sector is alarmed by the World Food Programme’s decision to engage the data analytics firm Palantir in a five-year deal.

A New York Times article last year accused academia of being asleep at the wheel when it comes to technology and ethics. The pushback was immediate, with scholars pointing out that Information Science and Science, Technology, and Society (STS) programs have long focused on the effects — positive and negative, intended or not — of technology on society. Going a step further, CU Boulder Information Science professor Casey Fiesler has crowdsourced a list of college courses which teach ethics as part of their science and engineering curricula. The problem, Fiesler and others have argued, is that the academy was never invited on the road trip: the disciplines which study the interplay of society and technology are seldom the ones that train engineers, developers, and tech entrepreneurs.

I’m sympathetic to this argument and have made it myself in the past. However, tech ethics conversations frequently make a host of assumptions about what ethics is and is not capable of doing. At best, this creates confusion; at worst, it allows ethics to be instrumentalized by the very organizations whose behavior is the object of our criticism. As a philosophical discipline, ethics is far from monolithic. It is a site of intense disagreement and debate, which makes the question “whose ethics?” one of profound importance. Compounding this is the tendency to reduce or misdirect conversations about ethical concerns into discussions of compliance and technocratic policy, which elides the true aim of normative ethics: is this action morally right or wrong?

In a recent Financial Times Alphachat, the Columbia University historian Adam Tooze pointed out that the Davos crowd lacks a social theory, owing to the technocratic assumption that policy has replaced politics. This assumption isn’t limited to Davos, however. Last October, I moderated a panel at the German-American Conference at Harvard on the topic of disinformation and Germany’s approach to trying to regulate the problem. At one point, I asked how we regulate something as global as the internet — which spans states and polities with very different politics, some hostile to democracy — while encoding it with a desirable set of values. The pushback was immediate: regulation should be neutral with respect to politics and values.

This is hardly unusual: at the many conference panels I’ve attended, participated in, or moderated on technology in the humanitarian sector or disinformation in society, the conversation inevitably raises the dangers at hand while neatly skirting questions of power or responsibility. When it comes to accountability, the default assumption is that frameworks or bloodless regulations which constrain no one will solve the problem. When conversations about tech ethics are treated this way, they all start to sound a bit like Eddie Izzard’s Church of England routine.

The problem with this worldview is that in the absence of politics, it is impossible to contest questions of power and values. The technocratic impulse reduces conversations to matters of compliance around ostensibly neutral topics, such as infrastructure and technology, without honestly examining whether the technology is socially desirable — or asking who benefits by constraining the discourse so narrowly and perpetuating the political economy of technology as it exists.

Consider that Silicon Valley’s founding ideology is a blend of technological utopianism, libertarian thought, and free-market economics. The merits of this can be critiqued and debated, but its adherents would be justified in arguing that they already have a set of ethics. Critics are bound to be disappointed when this leads to schools and firms teaching hard libertarian ethics (such as Nozick). Similarly, efforts to teach multiple ethical worldviews will ultimately fall short. If calls for tech ethics are implicit critiques of the way things are now, then offering only an array of options with some mild editorializing about what is right will most likely produce confirmation bias.

The World Food Programme’s decision to enter into a partnership with Palantir invoked the language of corporate efficiency and compliance. Indeed, the tool is to be used for a range of activities aimed at effecting cost savings, including fraud detection (what that means for the on-the-ground politics of aid is an open question). Palantir, of course, is not particularly discriminating in its choice of clients, and its relationships with governments raise particular risks for the sector — but the problem runs much deeper than the humanitarian sector’s relationship with any one firm.

In his History of Sexuality, Michel Foucault laid out the concept of biopower. He argued that the discipline of the individual human and the biopolitics of the population were “two poles of development.” The first focused on the body: its disciplining, its optimization, and its integration into systems of economic efficiency. The second focused on control over the population through interventions and regulatory controls. These, he argued, were linked together by “a whole intermediary cluster of relations.”

The promise and peril of our modern age is that technology seeks to make these clusters of relations legible, manipulable, and commodifiable in a way the administrative state Foucault wrote about never dreamt of. Perhaps conceived with the best of intentions, founded on the California counterculture’s belief that a digital utopia was achievable — though always questionable from the perspective of human autonomy — these technologies never accounted for politics and power. Absent that accounting, they have become tools of surveillance capitalists and tyrants who seek to assert control and dominance over populations.

In a sidebar conversation during a lengthy conference call, a colleague once remarked that different organizations will have different priorities vis-à-vis technology and that they won’t always be rights and ethics. I was taken aback, but he was not wrong. If tech ethics is applied in other sectors as it is so often in my own — through workshops targeting middle management, structured around design thinking, and driven by a compliance mentality — while organizational leadership invites in firms like Palantir, then we may just be rearranging deck chairs on the Titanic. If this is tech ethics, then tech ethics is a dead-end.

Perhaps we should let it die.

Real dilemmas of practical ethics are rare but do exist, perhaps particularly in humanitarian action: dilemmas of the “I’ll be damned if I do, and the patient will be damned if I don’t” variety, such as knowing that a young woman and her fetus will die without a medical procedure, but also knowing that she’ll defer to her husband’s refusal to consent on her behalf — as is culturally appropriate. Too often, the dilemmas with technology are not dilemmas at all. They may be framed as such, but there is most often a clear right and wrong answer. The only problem is that the right answer is often not the one that preserves the status quo. In a damning indictment of Sheryl Sandberg and Harvard Business School’s ethical compass, business journalist Duff McDonald writes in Vanity Fair:

“[Bowen] McCoy was on a trip to the Himalayas when his expedition encountered a sadhu, or holy man, near death from hypothermia and exposure. Their compassion extended only to clothing the man and leaving him in the sun, before continuing on to the summit. One of McCoy’s group saw a “breakdown between the individual ethic and the group ethic,” and was gripped by guilt that the climbers had not made absolutely sure that the sadhu made it down the mountain alive. McCoy’s response: “Here we are . . . at the apex of one of the most powerful experiences of our lives. . . . What right does an almost naked pilgrim who chooses the wrong trail have to disrupt our lives?”

McCoy later felt guilt over the incident, but his parable nevertheless illustrated the extent to which aspiring managers might justify putting personal accomplishment ahead of collateral damage — including the life of a dying man. The fact that H.B.S. enthusiastically incorporated said parable into its curriculum says far more about the fundamental mindset of the school than almost anything else that has come out of it. The “dilemma” was perfectly in line with the thinking at H.B.S. that an inability to clearly delineate the right choice in business isn’t the fault of the chooser but rather a fundamental characteristic of business, itself.”

Choosing to host an application which extends a man’s control over a woman in a near-chattel relationship backed by the violence of the state is not an ethical dilemma. Allowing anti-vaccination grifters to target women interested in pregnancy is not a dilemma. Neither is banning porn while allowing white nationalism to thrive, stonewalling a UN investigation into genocide, or investing in the facial recognition technology driving China’s ethnic cleansing of the Xinjiang region. These are choices reflecting a values system, justified by market access and shareholder returns.

What, then, is left, if these are the choices firms make? We must consider our values. Not capital-V Values of the McKinsey-fourteen-values variety, but our personal values, and what we expect from the firms we work for, buy from, and hire. However, markets can only go so far, and it’s unreasonable to expect any constituency of users to sway a firm with two billion monthly users, particularly in an era when protest is almost immediately commodified. Breaking these firms up will only guarantee more competition selling the same products. This perpetuates the status quo.

The answer may be more radical still: do we believe that these choices are compatible with human rights and human autonomy? If not, do firms have a right to make them? In The Network Society, Manuel Castells writes, “[u]ntil we rebuild, both from the bottom up and from the top down, our institutions of governance and democracy, we will not be able to stand up to the fundamental challenges that we are facing.” Perhaps, then, it is our responsibility to begin rebuilding — and rebuilding begins with tearing down the rot. Certainly, Silicon Valley has lost any credible claim to being able to self-regulate, as have those who buy into its Faustian bargain. Therefore, it is time to impose an uncompromising set of values on technology firms and the capital behind them.

This is not a call for popular, populist values: those too often reflect the id of a society. But if there is anything worth saving in liberal internationalism, it won’t be found in bloodless, transactional technocracy — without conviction or politics, bound to the ideology of unregulated markets and efficiency. That technocracy serves the interests of the status quo over everything else, and more often than not sides with essentialist, far-right, or authoritarian worldviews when confronted with calls for change. Rather, this is a call to embrace the problem as one of politics, and to take a stand.
