On a recent episode of Exponent, a podcast about technology and society, Ben Thompson and James Allworth argue that the dynamics of the internet are an amoral force, and, moreover, that “technology inherently is amoral.”
Their thinking goes like this: the internet can be seen as simply a technology that reduces friction. The internet enables stuff to move “much easier, much faster, and much further.” Compared to the analogue world, it is defined by a removal of marginal costs, transaction costs, and geographic barriers. They see this loss of friction as not inherently good or bad, but as a change in context. Data collection, for example, has shifted from being sparse and difficult to being plentiful and easy. This new context leads to “hugely positive” and “hugely negative” outcomes.
Thompson and Allworth speak insightfully on technology strategy, but their casual affirmation that technology is amoral perpetuates a common but problematic view of technology. Framing technology as a passive, neutral vessel ignores the plethora of decisions, value judgements, introspection, and debate among the people involved in technology’s design and development.
Technology is arguably the opposite of what they claim: perfectly moral. We actively construct scripts for technology during the design process. This script encodes intent, morals, and values, the rules we expect the technology to follow faithfully when it is deployed. Bruno Latour put it simply: “no human is as relentlessly moral as a machine.”
Latour used a benign example of a door-closer to make his point. Door-closers faithfully enforce the ideal that spaces should remain sealed off from the outside. They are much more reliable than the traditional porter who opens and closes doors for guests. The porter, who you might still see at upscale London hotels or Shanghai malls, may leave their post to use the restroom, or get distracted by something nearby. Door-closers show no discretion, and no nuance, when deciding whether to leave doors open or closed. As a more contemporary example, the lines of code that go into “smart locks” enforce a more complex set of rules around the access of spaces.
The internet, which Thompson and Allworth call morally neutral, enforces one model for how people and machines must digitally communicate.
Our experience of the internet we know and love (and criticize) was not an inevitability. Rather, it was the result of moral judgements that shaped its architecture. We should not forget that ARPANET, the internet’s predecessor, was an academic research project funded by the US Department of Defense. TCP/IP—the protocol of the internet—was formalized around the needs of its benefactors and the vision of its creators.
For instance, the internet was designed to be ungoverned and ungovernable, with few rules constraining behaviour. It was originally intended for non-commercial use. It relied on the good etiquette of its users. An MIT handbook humorously warned that “sending electronic mail over the ARPANET for commercial profit or political purposes is both anti-social and illegal. By sending such messages, you can offend many people.”
Moreover, the internet was built on the assumption that its users were trustworthy. It cloaked users in relative anonymity by identifying them only by IP address. It employed limited security. The internet was founded on social protocols instead of security protocols.
Yet, despite its lack of regulation, the internet regulates how developers and users engage with it. In Code is Law, an essay fittingly published on January 1, 2001, Lawrence Lessig summarized how early design decisions have shaped our experience of the internet today:
[The] regulator is code — the software and hardware that make cyberspace as it is. This code, or architecture, sets the terms on which life in cyberspace is experienced. It determines how easy it is to protect privacy, or how easy it is to censor speech. It determines whether access to information is general or whether information is zoned. It affects who sees what, or what is monitored.
There are alternative internets that never were, which would have resulted in alternative cyberspaces. From 1980 to 2012, the Minitel network brought online services, including banking, messaging, gaming, and dating, to millions of households across France. In the 1960s, Project Xanadu envisioned an alternate way of navigating online repositories of the world’s knowledge.
One can only speculate on the implications of alternative architectures, but one thing is certain: the issues explored in Exponent would be quite different. Then again, podcasts might not even have existed.
Thompson and Allworth are correct insofar as a single technology can be used for positive or negative purposes. They also accurately describe the qualities of the internet. However, Exponent may better achieve its intent of helping listeners make better strategic decisions by encouraging them to question the morals built into technology. Doing so can support making better design decisions about future technologies.
Instead of calling technology amoral, we should call technology a script that puts certain morals on autopilot. Ultimately, we have the agency to determine this script.
A link to the Exponent podcast. Check them out! I’m a big fan.