The Work of Literature in the Age of Technological Omnipresence
Introduction: Stranger Than [Science] Fiction — The Age of Technological Omnipresence
The Age of Technological Omnipresence: what does this mean? Thanks to the myriad examples of post-apocalyptic digital futures found in all corners of popular fiction, there’s an inherent air of horror to the phrase, a note of the sinister in the concept of a fully networked present: if technology is all-encompassing, if it has permeated every nook and cranny of our reality, then what room is left for us?
In his groundbreaking book The Stack: On Software and Sovereignty, speculative design theorist Benjamin Bratton considers the dimensions of our present technological moment and formulates the following thesis: in allowing digital systems to infiltrate all facets of our daily lives, we have, “both deliberately and unwittingly,” constructed an accidental technological megastructure that is, “in turn[,] building us in its own image” (“A New Architecture?”). This megastructure, which Bratton dubs “The Stack,” is “a machine that serves as a schema as much as a schema of machines…powerful and dangerous, both remedy and poison, a utopian and dystopian machine at once:” a system whose reach is infinite but whose safeguards are not (Ibid.). In the shadow of The Stack, geopolitical orders are made obsolete, global economies are the playthings of chance, ecologies are consumed and vomited back up ad infinitum, “satellite networks expand the literal circumference of the earth,” and the hopes and fears of the world’s peoples are reduced to quantifiable data, bits and bytes of information that can then be processed, analyzed, inverted, recorded, and ultimately, stored for later manipulation. As Bratton explains, “we, the humans, while included in this mix, are not necessarily [The Stack’s] essential agents, and our well-being is not its primary goal…[Its] interests are not human discourse and human bodies but, rather, the calculation of the world’s information and of the world itself as information.”
That last part should sound a bit familiar; after all, it’s more or less the mission statement of the world’s most powerful company, Google. This is no mere coincidence. Consider, for instance, the curious incident that occurred in October of 2010, when, after its engineers accidentally changed the border location between Nicaragua and Costa Rica, Google Maps was cited as precedent by Nicaraguan forces to seize land that had been in dispute for nearly two centuries, leading to what many pundits later dubbed “The First Google Maps War” (Merel 442). Consider also that, in a January 2015 press release, Facebook announced that if it were an autonomous country, its population would surpass that of China by some 60 million people, making it the most populous country in the world (Stenovec). According to a report published by The World Bank in early 2016, the average American spends 4.3 hours a day accessing the internet through a laptop or desktop, to say nothing of the additional 1.9 hours a day they spend on the internet via mobile phone (Bauer). As the National Security Agency documents leaked in June 2013 by whistleblower Edward Snowden prove, the digital footprints produced by an individual do not simply “disappear,” but are instead stored in massive underground data centers, the sum of which consume more energy than India, Germany, Canada, France, Brazil, or the United Kingdom (Bratton, “From Global Surface to Planetary Skin”). Bratton’s conclusion is inescapable: in our ceaseless quest to map the whole of human knowledge and experience in cyberspace, we have brought about an accidental megastructure whose very pillars of existence are the tools and companies which we now rely on to navigate our increasingly complex lives.
But The Stack’s reach is not limited to our online presence. Indeed, our lives have become so intimately entangled with the infrastructures of digital interfaces, cloud computing, data centers, telecommunications systems, social media platforms, and state-sponsored software programs that the idea of being “offline” has itself become obsolete. This is true even of those who conspicuously limit their time on or reject the internet, those we might consider modern-day Luddites. As digital artist Hito Steyerl observes, “[the internet] has gone all out. Or more precisely: it is all over!…[it] has crossed the screen, multiplied displays, transcended networks and cables to be at once inert and inevitable” (“Too Much World: Is The Internet Dead?”). It may be more accurate to consider ourselves not as analog beings being “pulled” into cyberspace, but rather as already-digital subjects for whom being “online” is not a temporary status but a default condition. “We are digital,” notes Stefan Heidenreich, “but it does not matter, because that is just what everyone else is.” Our moment, then, is a Post-Internet moment, a time in which making a distinction between being online and offline is redundant or nonsensical: we have existed online since the internet itself went online, and ever since, we have simply been trying to stay afloat in its digital deluge.
It is not unreasonable, then, to suspect that, in the shadow of The Stack, the way we think has been altered. In one of The Atlantic’s most widely shared articles of the decade, “Is Google Making Us Stupid?”, Nicholas Carr contemplates how the internet has changed his ability to concentrate, particularly when reading:
Over the past few years I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind isn’t going — so far as I can tell — but it’s changing. I’m not thinking the way I used to think. I can feel it most strongly when I’m reading. Immersing myself in a book or a lengthy article used to be easy. My mind would get caught up in the narrative or the turns of the argument, and I’d spend hours strolling through long stretches of prose. That’s rarely the case anymore. Now my concentration often starts to drift after two or three pages. I get fidgety, lose the thread, begin looking for something else to do. I feel as if I’m always dragging my wayward brain back to the text. The deep reading that used to come naturally has become a struggle.
As he discovers, Carr is not the only one. Fellow “literary types” in Carr’s circle of journalists report similar experiences, with one acquaintance quipping, “I can’t read War and Peace anymore…I’ve lost the ability to do that. Even a blog post of more than three or four paragraphs is too much for me to absorb. I skim it.” And the early quantitative research backs up these qualitative anecdotes: according to a five-year study of online research habits conducted at University College London in 2008, the average online researcher “[reads] no more than one or two pages of an article or a book before they…‘bounce’ out to another [source]” (Carr). “It is clear,” states the report, “that users are not reading online in the traditional sense; indeed there are signs that new forms of ‘reading’ are emerging as users ‘power browse’ horizontally through titles, contents pages and abstracts going for quick wins. It almost seems that they go online to avoid reading in the traditional sense” (Rowlands et al.). As we alter our habits to accommodate our media, Marshall McLuhan’s prophetic theorization rings eerily true: the real cultural change produced by a new medium is never the result of the medium’s content “because the content is always [that of] the old medium” (“At the Feet of the Master”). Instead, the real change comes from the new medium itself, which “creates a new situation for human association and human perception” (Hammond 4).
Carr’s article sparked a massive debate upon its online publication, with cultural critics and concerned citizens alike lamenting the reality of our technological moment. However, not everyone was so quick to light modernity’s funeral pyre. In a direct response to Carr’s essay titled “Why Abundance is Good: A Response to Nick Carr,” media theorist Clay Shirky argues that the societal advances made possible by the democratic, anti-hierarchical nature of the internet far outweigh the “threat…that people will stop genuflecting to the idea of reading War & Peace.” In Shirky’s mind, Carr’s article is not about the doomed trajectory of deep thinking, or even reading, but is instead about culture: who gets to produce it, who gets to decide what work is valuable, and what happens when we collectively decide that the old vanguards of haute culture, like War & Peace, are no longer “Very Important in some vague way.” “The essential fact of Luddite complaint,” writes Shirky, “is that it only begins after a change has already taken place…so Luddites [like Carr] are mainly harmless whiners…The real problem is elsewhere; Luddism is bad for society because it misdirects people’s energy and wastes their time.” And what is the purpose towards which we should be directing our energy instead? “Nostalgia for the accidental [information] scarcity we’ve just emerged from is just a sideshow; the main event is trying to shape the greatest expansion of expressive capability the world has ever known.”
Of course, the history of media is long and filled with scathing jeremiads from early adopters and Luddites alike, and we can expect the debate over the internet’s utility as our medium of choice to be no different. However, Shirky, despite being quickly labeled by his peers as a philistine, raises a compelling argument: whether the internet is “good” or “bad” for us is beside the point; it is already changing our preconceptions of culture, appreciations of art, and abilities of concentration so rapidly that — if we refuse to reorient ourselves quickly enough — we run the risk of falling for the “sideshow” and losing something of intangible human value. In a direct response to Shirky titled “A Know-Nothing’s Defense of Serious Reading & Culture,” Sven Birkerts, whose work focuses on the decline of literary culture since the inception of the internet, considers the existential risks that arise when we view reading as solely an act of information transmission and allow ourselves to be absorbed by the “hive-mind” tendencies of cyberspace: “I prize a sense of inhabiting my self-constituted boundaries as a distinct ‘I’…I fear that the steady centrifugal pull of the internet blurs me…[and] makes it harder for me to achieve the subjective distinctness I am after [and which I get from reading literature]”. In the shadow of The Stack, then, the discrete boundaries by which we map the beginnings and endings of our Self have begun to disappear, and this is why serious literature — despite Shirky’s allegations of unwarranted elitism and our own individual predilections towards certain mediums of expression — is worthy of the highest attention we can muster: if we do not begin to produce thought-provoking, audience-engaging literature that fits the attention-deficit-riddled contours of The Age of Technological Omnipresence while still remaining artful, we will lose a certain part of what makes us human.
Luckily, because of the unique potential generated by The Stack itself, our situation is not as dire as it may appear, and no one seems to understand this better than the technologically attuned British novelist and artist Tom McCarthy, who has been called “a Kafka for the Google age” by The Telegraph. In an article published in The Guardian, McCarthy considers the notion of the modern-day “corporate anthropologist” and the aforementioned technological intertwinement that is the reality of our place in The Stack, and posits that, “[f]ar from being unwritable, the all-containing Great Report (or Great Novel or Poem) is being written around us all the time — not by an “anthronovelist” but by a neutral and indifferent binary system whose sole aim is to perpetuate itself, an auto-alphaing and auto-omegaing script” (“The Death of Writing — If James Joyce Were Alive Today He’d Be Working for Google”). As we will see in the following chapter, McCarthy theorizes the historical nature of literature to be particularly amenable to the digital format. Our challenge, then, the challenge posed to all who see something of value in literature, is figuring out how to read it, how to parse the Great Report and reconfigure it so that, despite being the product of The Age of Technological Omnipresence, a product of an infinitely self-replicating system, it retains some semblance of individuality, and thus humanity.
In a 2013 address to the Council on Foreign Relations, outgoing Secretary of State Hillary Clinton made the following remark:
The pillars [of the old geopolitical order] were a handful of big institutions and alliances dominated by major powers. And that structure delivered unprecedented peace and prosperity. But time takes its toll even on the greatest edifice. And we do need a new architecture for this new world, more Frank Gehry than formal Greek. (Shwayder)
This thesis takes Clinton’s architectural metaphor as its premise and extends it to the realm of literature: the first chapter proposes that 20th century literary theory makes the literary form particularly amenable to new technological innovations; the second chapter considers whether computer code itself is a form of literature, and, if so, whether it is capable of delivering an aesthetic experience to its audience; the third chapter, finally, endeavors to engage a small sample of exemplary digital literature projects with a critical language that is appropriate for the new multi-dimensional genre.
In the unprecedented time in which we live, when being online is our default state and the world is quite literally being rearranged by the silicon we carry in our pockets, we need a new architecture of literature itself, one that respects the traditions that have come before it while simultaneously extending those traditions towards exciting new possibilities, a literature that is more “Star Trek holodeck” than War & Peace.