Future Failure

Adam Elkus
9 min read · Apr 15, 2015


A quote from a recent piece on biohacking that got me upset:

And as Meyer pointed out, it’s not so hard to imagine the general population feeling hesitant about a transhumanist present: “It’s interesting: whenever you have science fiction that depicts a future where you have people able to enhance the functionality of their bodies with technology, it’s always dystopian. It’s never a good future. It’s always a bad future,” Meyer said. “I wonder why that is.”

So, if I understand Meyer correctly, the following statements hold:

  1. The clustering of dystopian science fiction works that deal with body enhancement is a signal of bad outcomes when biohacking occurs.
  2. It is an accurate gauge of public sentiment about future biohacking.

Perhaps that may be true. But there are prominent reasons why it may not be, chief among them that you can explain genre-trope clustering through far more pedestrian explanations and motivations. And the error that Meyer makes is endemic to people trying to use science fiction and popular culture to make predictions or scenarios about the future: they ignore the role of herd behavior in futures work and downplay the actual knowledge about the technology, science, and social elements needed to do good futurism.

First, dystopian futures in general are popular. With a few prominent exceptions, the kind of optimistic and progressive sci-fi of the 1950s and 60s is not written anymore. So let’s insert something else into Meyer’s sentence: if he had said “whenever you have science fiction that depicts a future where you have people able to communicate with each other using computers, it’s always dystopian,” the implication would be transparently absurd. No, the ‘Net has not been a panacea. No, software hasn’t gotten rid of all societal ills. But the mere presence of computing in the cyberpunk of the 1980s was not some harbinger of doom; plenty of people like using computers and are willing to make people like Bill Gates and Steve Jobs rich because of it. Alternatively, one could also look at another feature of 1980s science fiction: the ever-present Asian influence (Blade Runner’s half-Japanese city, for example). As it turned out, Japan did have a cultural influence, if not the one that sci-fi predicted: anime. Does a trope’s mere appearance in dystopian sci-fi really say anything about the form or desirability of Japanese cultural influence?

Meyer goes for the cheap shot because he knows that his audience will read anything new and transformative as bad if it is framed as part of a dystopian future. But it’s worth remembering that biohacking and human enhancement have actually been portrayed in a diverse range of ways in dystopian science fiction. Yes, Strange Days might be a “bad future,” but it’s also one in which body enhancements (specifically, a novel method of recording) help rein in police brutality. William Gibson’s novels feature a range of enhanced protagonists, some pitiable and others enviable.

This misreading of dystopian sci-fi is almost as great as the misreading of the Terminator series, which is about not just the clichéd “rise of the machines” but also the way a machine (Ah-nuld in T2) can be programmed by humans to do good, both explicitly and through experience:

The most visible subplot of T2 was the T-101, portrayed by Arnold Schwarzenegger, learning to understand and defend humanity. He was reprogrammed away from his murderbot ways to defend John Connor, who then commands him not to kill people (an order the Terminator obeys even at the cost of his own ability to defend Connor later in the film). In the very last scene, the killer robot gently wipes away John Connor’s tear and says he finally understands why humans cry. If anything, Terminator 2 is an endorsement of lethal autonomy, as it shows that self-learning robots can be taught to understand and even defend non-militant humans.

Just because __ genre trope appears in dystopian fiction does not inherently mean it is a bad thing. The authors may render it in a more complex way than most may imagine. Of course, this realization comes from actually engaging with the source material — something that many who cite sci-fi and fantasy for policy are manifestly unwilling to do.

So who knows how the public will react to biohacking and human enhancement? They’ve reacted negatively to some forms of human enhancement (Google Glass) and not to others (for example, anti-depressants and attention medications). That might be because the public does not have a uniform opposition to enhancement. Some technologies and tools may be more objectionable than others; in addition to posing privacy concerns, Google Glass looked like what we might stereotypically imagine as a Gibson-esque console cowboy accessory. In contrast, police bodycams not only serve what people believe to be a socially beneficial purpose (reducing lethal police-citizen interactions) but also are small and unobtrusive.

In talking about Google Glass and its often acidic public reception, one also wonders about the causality that Meyer and others imply in the relationship between fiction and sentiment. Does the public react negatively to sci-fi-like gadgets and gizmos because science fiction is an accurate representation of its anxieties? Or does science fiction condition those anxieties by teaching people to be afraid of certain things but not others?

Second, let us talk a bit about the problems with taking a clustering of science fiction tropes as a valid anticipatory signal about the future. The biggest problem is simply that clichés, tropes, and commonalities are endemic to genre fiction. They have a lot to do with the politics of the genre or its fans, a factor overlooked when they are used as exploratory tools.

For example, let’s take Twilight and The Hunger Games. You cannot walk into the young adult section of a bookstore without being buried by wave after wave of derivative works that transparently rip off the Twilight or Hunger Games formula. Why? Well, getting fiction published to begin with is extremely difficult. The manuscripts that shoot to the top are the ones that can make money. And it’s easy to make money by… recycling a proven formula. For crying out loud, we had a best-selling romance novel series turned major motion picture (50 Shades of Grey) that started out as Twilight fanfiction. Now, that may say a lot about the audience for science fiction and fantasy novels, but little else.

Another element that Meyer ignores is the powerful and distorting role of genre clichés and legacy futures. There is, bluntly, nothing new under the sun. Writers sit down at their desks and make fiction strongly influenced by dominant sci-fi tropes. Genre clichés are a thing, and not every part of a sci-fi or fantasy work is a conscious effort to imagine some desired future or conjure up societal currents. Why? It’s hard to write something, and easy to be unconsciously shaped by the norms and tropes of the genre. It just feels… natural to have an angsty boy who is forced to pilot a giant robot in an epic struggle against aliens or other humans, right? Or maybe he has some kind of small, rodent-like creature that shoots lightning and speaks in a pidgin language. The fact that you just wrote another manga series about a shy, awkward boy and a magical, ditzy girlfriend is just inspiration speaking to you, right? Oh, and I totally believe your post-apocalyptic crackpot-survivalist biker movie about a lone, hyper-masculine warrior fighting a savage, tribal band of hooligans is just completely original! Especially the part about how his only friend is his trusty post-apocalyptic dog. Such a heartbreaking work of staggering genius!

Legacy futures are even more pernicious:

What immediately struck me is that we all have this kind of cognitive “legacy code” in our thinking about the future, not just science fiction writers, and it comes from more than just pop-culture media. We get legacy futures in business from old strategies and plans, legacy futures in politics from old budgets and forecasts, and legacy futures in environmentalism from earlier bits of analysis. Legacy futures are rarely still useful, but have so thoroughly colonized our minds that even new scenarios and futures models may end up making explicit or implicit references to them.

In some respects, the jet pack is the canonical legacy future, especially given how the formulation (originally from Calvin & Hobbes, I believe) of “where’s my jet pack?” has become a widely used phrase representing disappointment with the future instantiated in the present.

People who follow my Twitter stream may recognize another example of a legacy future: Second Life. While the jet pack never really became part of anything other than Disneyfied visions of Tomorrowland, over the past five years or so Second Life came to represent for professional forecasters and futurists the vision of the Metaverse. Even though Second Life has yet to live up to any of the expectations thrust upon it by people outside of the online game industry, it has doggedly maintained its presence as a legacy future.

Legacy futures are textbook examples of many things, from cultural saturation to the ever-present scourge of the availability heuristic. But there is also the issue that the more canonical a given view of the future is, the harder it is to deviate from it. I’ve expanded on this at great length in War on the Rocks and Slate, but the short of it is that groupthink is a thing too. There are waves of hot trends: one minute Johnny Forecaster is saying the future of war lies with some techno buzzword like “dominant battlespace knowledge,” and the next he’s saying that it’s all about counterinsurgency and low-intensity conflict against people hiding in caves. The correspondence between all of this and reality is beside the point. The point is that it’s cool.

I mean, no one can explain what “dominant battlespace knowledge” or any of the 1990s “future of war” buzzwords really mean, but they sound sufficiently badass and intimidating, right? Add “complex adaptive” or “cyber” to anything and it sounds sleek, futuristic, and mysterious. When these trend-chasing tendencies combine with the cliquish nature of the pundit community, the derp is multiplied. The policy wonk obsession with Game of Thrones (like Battlestar Galactica before it) is similar to the anime fanfiction that my friends and I used to write in high school: works designed for the writer’s indulgence or that of the in-group, with a tenuous relationship to the source material. And I’ve seen a lot of Mary Sues that are written better than many “What ___ Explains About ___” pieces.

Predicting the future is hard. First, you have to know something about the science and technology involved. Additionally, you need to understand some basic human science elements like sociology, economics, psychology, and so on. It would help a lot if you had a grasp of history as well. And finally, you need a gift for creative scenario thinking and a tolerance for unorthodox gambits and “what if” experiments. If you use sci-fi or fantasy as a springboard, you also need an appreciation for the context of the work and a way to analyze it that doesn’t just amount to naive content analysis or the literary equivalent of overfitting. The people who have all of that and deliver for their clients justifiably Make the Big Bucks.

To use a sci-fi analogy: in one of the more ridiculous GI Joe seasons, COBRA tried to combine the DNA of history’s greatest military leaders and warriors to create the ultimate commander. However, the Joes prevented them from getting some key figures (for example, the patient Sun Tzu) who balanced out the others (the arrogant and impulsive Napoleon). So COBRA ended up with an endlessly mockable goon named Serpentor, fond of making grandiose statements always ending with “this, I command!” Setting aside the, um, novel interpretation of biology in that season, it’s a great metaphor for unbalanced futurism. Do it right and futurism can be prescient and illuminating. Get some of those elements out of balance and… well… you might go around saying “this, I command!” uncontrollably.

But what if you have a Hot Take and you don’t care about any of that stuff? Never fear. You can skip that pesky little bit about actually knowing what you’re futurizing about and just cite your favorite sci-fi flick:

Resorting to sci-fi to justify hating ___ is intended to be allusive: by invoking a widely known pop culture trope about scary ___, the author avoids grappling with the reality of how technology is developing and instead chooses a cheap appeal to fear through flashy imagery.

Though the original piece this is quoted from talks about a very specific set of technologies, you could really substitute almost anything into it, from cybersecurity to robots.

Science fiction and fantasy can be a tool to help with thinking about the future and pressing policy problems. But like Spider-Man’s powers, it comes with great responsibility. It is easy to get away with sophistic appeals to people’s favorite pop culture items, tropes, and references. It is a lot harder to show people why they should take you seriously when you argue that a cultural artifact can serve as a useful way of gaming out the future, or simply of understanding what is going on today.

I still think it’s worth it, despite the difficulty. But I have to admit my faith in its utility is challenged constantly.


Adam Elkus

PhD student in Computational Social Science. Fellow at New America Foundation (all content my own). Strategy, simulation, agents. Aspiring cyborg scientist.