On Metamodernism

Seth Abramson
Apr 16, 2018 · 15 min read

David Foster Wallace, who in 1990 equated artistic rebellion with being laughed at.

As a kid I was a loner. Not shy, quiet, or reserved — a loner. I wanted to be alone. I’d decided that to be seen in public was to be reduced in some way.

Many years later, I read Edward Dorn’s post-hippie verse epic Gunslinger. In it, Dorn equates naming something with killing it. I suppose that’s how I felt about being around other people in the 1980s — like it was actually killing me. In public I was a Jew, a Polack, short, and a host of other things that at the time were pejoratives. These were elements of my identity that I never really thought much about when I was alone, but was constantly reminded of in company. In the 1980s in rural Massachusetts, a Jewish boy was considered by his peers to be non-white, and anyone of Polish descent (which I was) an imbecile. Being short was likewise presumptively bad, for if there’s one thing boys in the 1980s knew, it was that “real” (masculine) men weren’t short.

So I kept to myself outside of school. I had a few in-school friends, and a few extracurricular activities that my mother — fearing I was turning into myself, like a nautilus — pushed me into. What I really wanted was to be in my room with the door locked. Alone, unknown, unnamed — and therefore anything or anyone. I associated self-confinement with freedom. I equated a locked room with space. I read books and played videogames that offered portals to other worlds far larger and more dynamic than my own. The world felt small to me because the world made me feel small. I think many children feel this way — though perhaps not so many react to it the way I did, or for such a long time.

Towards the end of my high school years, I went on the internet for the first time. I quickly learned that the internet was like an immersive game: you could be anything and anyone you wanted — and you could do anything you desired or invented — because it was all virtual. Almost immediately I did things like pretend to be female or pretend to be older or younger than I was. It wasn’t because I was confused about my sexuality or upset by my age; it was that I’d finally found a world in which I could name myself, and change that name whenever I began to feel in any way reduced by being in the world IRL.

In 1994 I went off to college — and without having planned it, ended up at one of the most wired campuses in the country at the time. Dartmouth College in the 1990s was a wild place to be a freshman: you were required to buy a computer, everyone emailed instead of using the telephone (even for basic conversations), and everyone was hooked up to an internal college-wide server that allowed you to see what peers had on their computers. All you had to do was click a button and everyone on campus could see whichever of your files you wanted them to see. I was obsessed with seeing what others had on their computers — mind you, not because I really expected or wanted to find anything untoward, but because it was a way of “connecting” with other people without having to meet them in person. It was a form of socializing, or so I think it must have seemed to me at the time. And suddenly the internet was not just a place as big as you needed it to be, where you could be whoever you wanted, but also a place in which you could interact with others without any of the complications of being “seen.” Which was fortunate for me, as by my freshman year of college I’d started to gain a little weight, lose my hair, wear glasses, and become aware of how little I understood about fashion or — far worse — actual, face-to-face socializing. I’d “skipped” those formative years.

I don’t think my story is unique. I think many people — some shy, some quiet, some reserved, and some, like me, willful loners — discovered the internet in the 1990s or early 2000s, and saw in it everything they thought they wanted. And I suppose it’s equally unsurprising that I, much like everyone else who thought this way in the early days of the internet, turned out to be wrong and ultimately came to resent the internet (and myself) for how wrong I’d been.

For me, things started going wrong in 1998, the year I started law school. By then I had long since ceased pretending to be anyone else online or having much interest in looking through anyone else’s stuff. I went by my real name online and spent a great deal of time in online poetry workshops. What I discovered, soon enough, was that certain people had carried the Wild West ethos of the internet’s early years into the civilized online spaces of the late 1990s. Certain online poetry workshoppers went by fake names, and these men and women were significantly more likely than others to get into arguments, say vile things to other workshoppers, eschew basic niceties, and even, on occasion, intentionally wreak system-wide havoc. I saw that the internet was a great place for a certain type of kid to “find himself,” but that this dynamic wasn’t quite so endearing when it came courtesy of a sixty-year-old ex-Army paraplegic alcoholic from Kansas (as one of the workshop’s most disruptive personalities happened to be).

A couple of bad romantic relationships with poets I’d “met” in the workshop followed, as did a blog about poetry that started out fine but began getting me in trouble the moment my opinions about contemporary poetry diverged from those of my peers. In the workshop, there was little room to do anything but critique poetry; once I began writing a blog, people could see my thoughts in long form, and sometimes they found those thoughts disagreeable and said so.

I began blogging in early 2004. By 2005, I’d learned that having an opinion IRL was substantially less dangerous than having an opinion on the internet. On the other hand, I also learned that, at least in 2005, it was possible to start your own political website and within 90 days be cited by Rolling Stone for your original investigative journalism. (What the internet taketh away, the internet giveth, and vice versa.) My political blog — I had both a personal blog and a political blog — was the first blog ever listed on Google News, at least until readers of Michelle Malkin’s blog, and Malkin herself, complained. Within a few months of its creation, the blog was getting upwards of 25,000 visitors a day. All those visitors meant more online interactions than I’d ever had before, and since those interactions involved politics, many of them were deeply unpleasant and even unsettling. I was finally self-defining; my words were being read by tens of thousands; I was pursuing my interests, and in a way that others found interesting; and even so, I was deeply unhappy.

I think most metamodernists become metamodernists when they realize that the internet is a place where you can become anyone you want to become — but also a place in which people reduce you to your data in a way that’s soul-crushing. In the era of blogs, it was a question of how many blog-lists you were on, and how much traffic your stat-counter said you had, and whether people were linking to you or not. Even in the tiny ecosystem of poetry-blogging, there was a hierarchy. In the world of political blogging, it was even more clearly so. It became harder and harder to distinguish what one said from the bare fact of how many people heard one say it, and from which people were willing to acknowledge publicly that one had said it. It all stank.

And this was before Twitter and Facebook.

I created my first Twitter account a few years after the platform started up, then deleted it in 2014, when it had 2,800 or so followers. I’d realized that I was only using the platform to remain connected to the poetry community — which, as a community, is about as sophisticated as a high school cafeteria. In other words, I was constantly thinking about how my feed fit into a larger network that I didn’t really respect or want anything much to do with in the first instance. And though it took a while, I ultimately got wise enough to my own unhappiness to kill the account.

The Twitter feed I have now, with nearly half a million followers, is my second Twitter feed. I created it in 2015 only after I made a contract with myself that this time I wouldn’t care what others thought of what I wrote, if anyone at all followed me, or whether my obsessions were of any interest to anyone else. That it’s this feed, not its predecessor, that has become popular is something I never could have expected. (Perhaps metamodernism had something to do with it; you may or may not think so once you’ve read the rest of this essay.)

All of which is to say that postmodernism gave us deconstruction, and that that was a good thing until it was a bad thing. Deconstruction is merely a thought experiment in which one acknowledges (a) how many different ways one can see a given problem, and then (b) the many different ways that those many different ways are in direct conflict with one another. Put tens of millions of Americans who grew up in a postmodern culture — that would be all of us — into a space in which you can frictionlessly insert your own way of seeing into the slipstream of humanity, and you get exponentially more conflict than you would have gotten otherwise. Everyone realizes how stupid everything and everyone else is, even as small numbers of us work diligently to find either (a) an ultra-tiny thought-community in which we can regularly feel comfortable, or (b) a niche community in which only one topic is ever discussed. And since postmodernism also teaches us, helpfully, that everything is always-already “zero-sum,” every disagreement feels like a personal attack on our very lives.

Metamodernism is an evolution of postmodernism, and it comes from people who acknowledge how terrible and fractured everything and everyone is — a knowledge that’s the sum and substance of our postmodern inheritance — but who also still see the internet as a place of boundless self-creation, unfettered problem-solving, limitless invention, and more opportunities for collaboration than humans have ever had. To be a metamodernist is to adopt what’s called a “romantic response to crisis,” and to do so by trying to see and use the whole of the internet’s field of information. A metamodernist posits that by making use of our idiosyncrasies rather than hiding them or pitting them against those of others, we can arrive at better solutions than we ever would’ve alone. It’s an optimistic philosophy, but it’s a hard-won optimism that’s often called, by metamodernists, “informed naivete.”

Informed naivete is knowing your optimism is naive — but plowing on anyway.

A metamodernist poet might combine her own words and the words of others in an original composition, using forms taken from music — like the “remix” and the “mash-up” — not to underscore how fractured everything is, but to emphasize how whole everything can be made to seem. A metamodernist professional writer might construct a CV that includes not just the usual information but also information that (while 100% accurate) is contradictory, irrelevant, boastful, slight, or opaque — a way of showing that things are still okay when we allow ourselves to be the totality of what the hard data says we are, and also whatever beyond that totality the mere act of acknowledging it frees us to be. A metamodern social media user might deliberately misuse a social media platform on the theory that being authentically oneself in a way that is ostentatious and perhaps even a bit annoying is more likely to be an effective form of communication and activism than making oneself smaller by playing by the rules. A metamodernist scholar might reconfigure a “failing” discipline like English by reducing it to its operative skill-sets — say, in English, critical thinking, creative thinking, oral and written communication, and the ability to collaborate dialogically — then using those skill-sets to “reconstruct” the discipline as much more expansive than it ever was before. A metamodern journalist might look at all the polling data in a given primary and ask, “How many different wholly accurate metanarratives is this ‘field’ of data offering us?” Whereas in postmodernism only one metanarrative can be considered accurate at a time, metamodernism so dramatically expands the field of hard data to be considered that it often divines multiple accurate metanarratives where postmodernism would either (a) see only one, or (b) allow only one to exist. (The “dialectical” thinking of postmodernism tends to pit ideas against one another in a combat to the death from which only one idea can emerge.)

Where did metamodernism come from? Nonwhite activists-cum-scholars writing in obscure American literary publications in the 1970s, 1980s, and 1990s. Most of these activists were Muslim, African-American, or Latino neo-Marxists who’d come to realize that postmodernism doesn’t actually help you achieve anything: it deconstructs systems without showing how to reconstruct them. These skeptical postmodernists realized that without a new cultural philosophy, progressive activists — and nonwhite activists in particular — would be forever shooting themselves in the foot by always deconstructing and never reconstructing the complex human systems they sought to change.

I came to metamodernism in 2013, at a time when I’d had my fill of postmodernism and, in particular, postmodern poetry. I’d spent 2012 doing more reviews of contemporary poetry books than anyone in America, I think — writing for The Huffington Post, I reviewed about 100 books of poetry — and I was looking for a way of writing that would do more than merely recreate reality as it was. I wanted my writing, both my creative writing and my professional writing, to innovate. And like the earliest metamodernists, I wanted those innovations to be influential in moving progressive communities forward. In poetry, that meant publishing a trilogy of metamodern poetry books that substantially expanded what could be considered poetry and what material could be seen as a credible building-block for poetry; I believed then, and still believe, that expanding poetry in these two ways can bring American poetry back to a position of political consequence and meaningful influence in American life.

Soon thereafter, I began using metamodernism as a tool in my professional writing. Doing so allowed me to predict much about Donald Trump’s political career on the very day he announced his presidential run — and to predict, too, though I was sad to say it, that Clinton might not be able to defeat Trump in 2016. The reason for this wasn’t that Clinton wasn’t qualified, or that she wasn’t a noble, talented, intelligent, and accomplished person; it was that she wasn’t as metamodern in her cultural positioning as were Sanders and Trump. That wasn’t at all her fault — but it also meant that this wasn’t the right time for the particular type of campaign she wanted to run. I supported Sanders in the Democratic primary partly because I believed he had a better chance than Clinton of defeating Trump, and partly because I saw that polling “internals” revealed Clinton to be an even weaker candidate than many feared. The articles of mine that people today misquote (having read only their titles) as saying that Bernie was going to defeat Clinton in superdelegates — he wasn’t — are actually about polling internals and the real risk that Clinton could eventually lose to Trump. Which, to the evident detriment of the United States and the world, she ultimately did. It’s a bit bizarre to be taken to task, in 2018, for articles written in 2016 whose content was accurate and predictive, but that’s a big part of being a metamodernist — taking flak for it.

(As a side note, I ultimately published an article endorsing Clinton during the Democratic National Convention, and I voted for her on November 8, 2016.)

One of the first metamodernists, David Foster Wallace, famously said in 1990 that the next real literary rebels in America would be artists with little interest in trying to shock or upset their peers but who were, rather, willing to become so credulous of everything in the world that their peers would laugh at them. Wallace, a vocal opponent of postmodernism, thereby launched a thousand metamodernists on trajectories that would, in fact, lead to many laughing at them in public. And I should know, as I’m now one of the people laughed at. On a regular basis people will come across my lengthy Twitter threads, or my obsessively curated CV, or my over-earnest social media presence and assume — as any postmodernist would — that only one of two possibilities could explain my seeming tone-deafness: that I’m an irredeemable asshole or an embarrassing tool. (Or both, I guess. Postmodernism encourages us to pretend, sometimes, that two different things are actually the same thing.)

Wallace specifically, and metamodernism broadly, held that metamodern writing, whether creative or professional, wouldn’t shock people so much as either (a) annoy the hell out of them, or (b) be intensely engaging to them. The presumption, moreover, was that the form of metamodern writing would also be unrecognizable as either “conventional” or “experimental.” Instead, it would look nothing like it was supposed to — a metamodern poem wouldn’t look like either a conventional poem or a postmodern experimental poem, and a metamodern Twitter feed wouldn’t look like either a conventional feed or what postmodernists would consider an experimental one. Rather, it’d just look like someone trying too hard, or else being too earnest, or else not at all understanding how other people saw them. Or it’d look like the opposite of that: a cynical, studied manipulation of public appearances that was odiously obsessed with how it was received. But the reality would be something else.

My Twitter feed is what’s called “metajournalism” — a metamodern form of post-internet digital journalism that acknowledges that we all have too much information to process daily. Instead of despairing, however, metajournalism tries to find a way to see and use all available information to make better journalism. How does metajournalism do this? By seeing and using the entire field of information on a given subject — say, the Trump-Russia investigation — whether the information comes from the United States or elsewhere, from print media outlets or digital outlets, from a verified Twitter account or a New York Times article from a decade ago. The idea is that, as long as all of the information seen by the metajournalist is accurate, it can be used to reconstruct false narratives into accurate ones. We’ve already found this to be true time and time again in the Trump-Russia investigation: conventional journalists, pressed for time and faced with a tsunami of digital news sources, are unable to see the entire field of information on a given topic before they write about it. The result is that they miss information or connections that many of their readers — having seen a different part of the field — don’t. Unfortunately, when this happens many readers presume bad faith on the part of the journalist. “How could you have gotten that fact wrong!” they cry. “Either you’re biased, or you’re a liar!” By comparison, the metajournalist presumes good faith on the conventional journalist’s part, while acknowledging that conventional journalism makes it inevitable that any given reporter will miss a substantial sector of the field of information they’re working within.

Decades on from being that loner in a locked room, I’ve learned that I can’t control what anyone else thinks, perceives, or feels. Metamodernism is going to seem like a parlor trick to anyone who wants it to be that, and particularly to anyone wedded (whether they are aware of it or not) to the cynicism, irony, and dialectical thinking of postmodernism. And there will always be those who disregard critical theory altogether because it’s something they don’t enjoy or don’t understand. So be it. Many others will say that it doesn’t matter a whit whether your creative and professional writing is undergirded by post-internet cultural theory if your Twitter feed, or what they perceive to be your online persona, annoys or offends them. I think that’s fine; the purpose of this essay, much like the purpose of metamodernism itself, is merely radical transparency. Indeed, the point is to capture both the sincere and the cynical components of transparency, as transparency means revealing everything in a given “field” — not just what we’re comfortable sharing. (The internet isn’t so kind.)

If, and I know this is unlikely, you ever come across one of my three books of metamodern writing — Metamericana, DATA, and Golden Age — you’ll find that on the back of the first I include insults thrown at me alongside glowing blurbs; on the back of the second I include a bevy of embarrassing personal details about myself; and on the back of the third I include an excerpt from a “remix” project that failed so terribly it led other poets to falsely accuse me of condoning mass murder. (And it’s hard to fail harder as an artist than that, I think.) I mention this as a way of saying that it doesn’t so much matter to me anymore whether anyone reads my work, or enjoys it, or admires it. All that matters is that I be allowed to name myself, name my work, and name my intentions — whether or not anyone chooses to like me, to admire my work, or to respect my intentions. All I want is to exist within a field (call it an internet) and to do so honestly, fully, dynamically, and transparently.

The dominant cultural paradigm of the internet era isn’t postmodernism (which is often associated with the on/off and performer/viewer binaries of television) but metamodernism. I equally believe it could take a very long time for this to be universally seen, and that in the meantime life is going to be very hard for metamodernists — especially those who don’t realize yet that that’s what they are. If this essay has done anything to increase the common stock of understanding of metamodernism, especially if it’s done anything to help even a single young metamodernist see themselves for what they are, I’m happy. Or, I mean to say, both happy and unhappy — as it’s the digital age, of course, so all is, in a sense, both awful and fractured and hopeful and new. But I’m happy, at least, to cast, if I can, even a sliver of the narrow ray of hope that’s meant so much to me for so long.

Seth Abramson

Attorney; professor at UNH; freelance journalist (Washington Post, Dallas Morning News, and others).