Interview Fall 2015: Human Augmentation and Transhumanism

Earlier this year, we published the first Interview on Kids and the IoT. Below is a re-publishing of another in an occasional series of interviews with colleagues and peers originally published on the Changeist website, but now migrated here for a wider audience. It’s also to let some voices interact in a public forum in the sort of way they might on a podcast, but on screen. These conversations take place asynchronously, over time and across shared space. While they can be longer-form than a typical Q&A, to us they’re worth the effort. That also means restricting them to twice a year.

The topic of transhumanism has been a hot one lately, for reasons that probably stretch from recent surges in bio- and physical computing to questions of economic and political equity that come out of the other end of a global recession. We’re very fortunate that two people with provocative viewpoints agreed to take part: writer, researcher and critic Paul Graham Raven and researcher, writer and anthropologist Lydia Nicholas. And again, Emma Charleston has provided us with a lead-off illustration. Thanks to all.

As always, the opinions are those of the respective respondents.

This Interview was originally published in November 2015.

Scott Smith: Transhumanism, human modification and augmentation, and cyborgology are among some fairly contentious topics discussed amongst futurists, technologists, and now even politicians. The public narrative is a mix of optimism, boosterism, skepticism and even fear. Given the strong association of transhumanism in particular with a handful of individuals like Ray Kurzweil, one can see how this polarization has emerged. For better or worse, in the public view, these are topics that are closely associated by the layperson with futures research, so it seems like they’re worth picking apart a bit.

Paul, you’ve written on this topic recently for the Long + Short, reviewed others’ texts on the topic, and even done a stand-up comedy routine on the topic earlier this year in London (full disclosure: I clapped and whistled from the cheap seats). You take a pretty sceptical view of both the field and its proponents. Why so critical?

Paul Graham Raven: To be plain, I suspect it of being a Trojan Horse, but that needs qualifying somewhat. The vast majority of folk who might identify as “transhumanist” have engaged with that movement’s ideas predominantly through pop-cultural reproductions thereof. It’s pretty easy getting people on board with a promise of consequence-free designer drug use a la the late Iain M Banks’s “Culture” novels, or the idea that it might be pretty sweet to have your cellphone somehow embedded inside yourself, but the enthusiasm resembles that found in any other fandom, as does a cheery refusal to examine the more obvious gotchas. Why spoil a good story, right? For these folk, transhumanism is much like steampunk is to steampunks: it’s a great way to meet new people and dress up all awesome, but there’s little interest in advancing the agenda of cummerbunds and clockwork applique to the highest strata of the socioeconomic superstructure.

But the architects of transhumanism as a movement are a rather different matter — they are smart, well connected, often well funded, and they spend a lot of time crafting their rhetoric; their closeness to actually-existing research institutions and think-tanks must make them very aware of both the limited practical utility of most actual research advances, and the ease with which such limited advances can nonetheless be packaged as seductive vapourware products, or as sure-fire policy prescriptions; their writings, online and off, are unashamedly celebratory of the Ayn Rand model of right-libertarian politics and free-market fundamentalism; and the entire conceptual edifice of “life extension research”, which is so central to the transhumanist political project, is quite clearly a category of special pleading designed to accommodate only those who already have access to the most advanced medical affordances available on the planet.

These factors and many more lead me to conclude that “capital-T” Transhumanism comprises an uncertain ratio of snake-oil mountebanks and self-deluded evangelists of technoscience, and that the presence of even a few of the former renders them extremely dangerous, given their proximity to the Silicon Valley cash’n’power nexus (and the favour of many tech biz bigwigs). At present, transhumanism is essentially one of the more successful and pop-culturally persistent masks worn by our old friend the Californian Ideology; but the academic Dale Carrico puts it perfectly, I think, when he says that “transhumanism wants to be Scientology when it grows up”.

Smith: This seems to pertain to what one might qualitatively call “full transhumanism,” with both hard- and wetware augmentation, cognitive upgrades, life extension etc. Where is the line of acceptability, then? How much upgrading is just enough?

PGR: To be honest, I think a lot of full transhumanism’s promises, while technically feasible, are so far off as to be effectively irrelevant; they’re wild extrapolations of very incremental steps, and those incremental steps are spread across many different disciplines and sectors (many of which, ironically, are all but illegible to card-carrying transhumanists for ideological reasons). But that’s the beauty of having it as the driving force of a cultural movement: your mouth can write technological cheques you’ll never be called upon to cash, even as you solicit actual cheques from those who are most enthusiastic about “immanentising” your preferred eschaton-in-trade. The acceptability of actually-existing (or at least actually-might-possibly-exist-soon) upgrades is a question that, despite appearances, Movement Transhumanism has an active interest in avoiding; a focus on the technical plausibility or sociopolitical impact of their utopian memeplex would be extremely damaging to it.

“I think a lot of full transhumanism’s promises, while technically feasible, are so far off as to be effectively irrelevant; they’re wild extrapolations of very incremental steps.”

But to return to the very real question of what is appropriate — because while the Full Transhumanism narrative is just that, a narrative, it is nonetheless extrapolating from (mostly) real medical and technical paradigms — this is where the real battle lies, and it’s much more important than the battle to discredit the immortalists and brain-freezers (who will likely discredit themselves with time, if time doesn’t do it for them). The problem lurks within the word “upgrade” (and in “augment”, “improve”, “extend”, etc), because it suggests an established baseline of functionality which is not reflected by the actual bell curve of human abilities; put simply, there is no “normal”. This is what worries the bioethics people; the problem isn’t the lack of an established baseline, but the consequences of trying to establish one.

From a leftist position, the worry is that you end up concretising a class system based on physical and mental ability by creating a market for superior function, compounding the existing problem wherein the better off already have far better “health outcomes” (to use the currently popular but deeply obfuscatory term) than the poor. (This would be something of a problem for the right, too, because by quantifying and baselining human abilities, the comparative disadvantages of being differently able would become legible to neoliberal policy mechanisms in a way that would make it very hard to ignore.)

“it’s not that extending our abilities through technological means is inherently bad, it’s that ability extension is already very unevenly distributed.”

There’s a pretty good proxy for this problem in the form of the use of nootropics and “smart drugs” in the UK HE sector, and it shows us that what happens is an arms race: as more people take performance enhancers and extend their abilities, the more pressure is put upon those in the now-stretched lower half of the bell curve to cave in and take the pills, too. But this problem only becomes apparent when you take a collective/social look at the issues in the context of the externalities. From the individualist/markets perspective, by contrast, the rationale is clear and simple: if you can afford the pills and want to succeed, you take the pills. Everyone else can make their own decisions.

As such, I’m not sure we’d ever be able to agree on an appropriate or “just enough” level of augmentation, because augmentation is by definition an advantage, but an advantage to which everyone has access is no advantage at all. But this is perhaps less a matter of ability and augmentation per se than it is an issue of a world whose primary social logic is competition; it’s not that extending our abilities through technological means is inherently bad, it’s that ability extension is already very unevenly distributed.

The question changes somewhat if we think instead about very task- or environment-specific augmentations, though — the sort of things that, while conferring a massive advantage in some specific sphere of action, end up disadvantaging you in others. By way of example, it might be feasible to modify a baseline human for an aquatic (or at least amphibian) lifestyle. This, or so I presume, would incite far less jealousy than, for instance, a simple “twenty years more life” augmentation would, because there’s a clear sense of pay-offs taking place: OK, so this guy can swim with the dolphins and land some sweet research gigs that you couldn’t, but at the same time he can’t go more than a mile from the coast, and people stare at his gills when he dines out in restaurants.

This, if I’ve understood it properly, is the distinction between transhuman and posthuman: the transhuman is the human transcended and the human transcending, a paradoxical figure which assumes the human is something to be risen above (there are very definite shades of the nastier readings of Nietzsche going on here); the posthuman, meanwhile, is the collapse of the utility of the term “human,” a recognition that there’s no point in trying to “transcend human ability” because the category “human” is so broad as to be undefinable in terms of ability. So perhaps we end up with a more Kim Stanley Robinson sort of set-up, where we undergo a sort of speciation-by-augmentations in accordance with what we do and where we do it, rather than some sort of Amazon-for-augments where we define our identities by the extensions we choose for ourselves, just as we now do with clothes and other consumer gimcrackery. (Given the current trajectory of the species, I have to admit the latter scenario looks more likely.)

So I guess the current vernacular answer to “how much upgrading is just enough?” would be something like “enough that it makes a difference to your performance, but not so much that anyone will notice anything out of the ordinary”. A more permanent answer — or the start of one, at least — might be something like “enough that you can do most of the things that most people can do”… but a more thorough answer will involve a thorough addressing of the distinction between needing and wanting, and that’s a distinction that capital doesn’t care much for.

Smith: OK, let’s look at this from an applied perspective, such as within allied fields of health and medicine. Lydia, you look at issues around health and the capabilities of science, what’s the view from there?

Lydia Nicholas: Since Paul’s already tackled the issue of negotiating norms, assumed normals, blurred baselines, and the rest so well, I’ll spend a while poking at what’s going on beyond the physical shape of the body — avoiding the carbon fiber, semi-sentient limb that helps you fuck, boulder and write code harder, faster, better, the phone under your skin, the eye that sees around corners and all.

The full transhumanist narrative left physical form behind a long while ago — as if the body’s been more than incidental to the West since Protestants turned up, smashed up all the pretty and the smells, and shoved our faces into the direct, pure, unfiltered, untranslated word of God straight from mountain streams. In the beginning was the word.

When you’re learning to code, and writing code, and elbow-deep in computer grease, knowledge quite straightforwardly equals power. I understand it; I write it; I make it happen; I improve it; I alter it; I control it. Simples. The connection becomes even stronger and clearer (mineral water becoming a theme here) if you’re part of a group whose filters align with those put on most of the information we use and act on in the world. For instance, got a white male body? Great — the lab mice are all male, the clinical trial test cohorts are mostly white men, because that’ll stand for a baseline body. Women and members of ethnic minorities report more side effects from more medical treatments, but that’s not visible if they’re not listened to, so we’re all good. Our studies are more easily compared and replicated this way, other(ed) bodies are more expensive.

Quantified-self practices, genetic tests, microbiome tests, tracking of food, movement, sleep, sex, heart rate, blood pressure, body fat percentage; it all produces numbers, it looks a lot like a crackable code; a body knowable and thus a body controllable. I did some work interviewing people who engage in tracking activities, and a repeated motif was the search for the baseline, seeking indicators of decline, of controllable factors; “How does my diet affect my Github commits, my run time, my sleep, my mood?” The body becomes a programme. Programmes don’t age or die — or they shouldn’t, so long as they are understood and patched and fed the right inputs.

“How much upgrading of my body is possible or necessary? Where does my body begin and end? Where does broken and fixed and upgraded begin and end?”

At the same time I distribute not only my memory but my understanding across systems: I don’t just use maps, I have an app and algorithm read maps for me, remember and interpret the preferences I have previously expressed, make decisions for me. I don’t just have a photo album, I have a filtered list of highlighted “moments” sorted and automatically tagged (admittedly mostly, incorrectly, as dogs). I don’t even remember setting these preferences. The self is distributed across smart objects and systems in more and more intimate, entangled ways — to be sure, to be human has always been to be distributed across objects and systems; what’s new is how much intelligence and agency those things now have.

I suppose what I’m saying is that a person’s body includes their social and cultural, fiscal, material capital; the apps and people and tools they can access. The fleshy part is often a weak link. How much upgrading of my body is possible or necessary? Where does my body begin and end? Where does broken and fixed and upgraded begin and end?

Sergey Brin’s wetware has a 50% chance of developing Parkinson’s relatively early in life because his genetics carry an unfortunate quirk. We only know because of recent knowledge about genetics and the statistical effects of a few specific variants. His body has grown from a flesh thing that touches duvet covers and drinks bone broth into a line of numbers, statistical possibilities. How much upgrading is possible or necessary? What numbers are acceptable?

Before you display a single symptom (with a 50% chance you never will), based on a series of statistical analyses of an often misunderstood science, you can start a multimillion dollar genetics business, build up a database of tens of thousands of people’s most intimate personal data and deploy machine learning on it. (Of course his ex-wife physically did it, but something something social capital extension of self.) Code doesn’t die; the words, programmes, doctrines that I understand don’t break. It’s not ever an upgrade, it’s a realisation of what your body is supposed to be.

I think this is why lost and broken phones can feel like they physically hurt; our sense of entitlement regarding apps’ capabilities bulges out faster than an enthusiastic Tetsuo because we’re remapping our bodies and minds on to these things. Forced to use a very slightly simpler map outside of big cities, I genuinely feel disorientated and enraged. I know I shouldn’t. But I do. How much upgrade is possible, necessary? It’s not a sodding upgrade, it’s my mind, my self, my capabilities. It’s supposed to be this way, why isn’t it working? Argh.

Smith: Who gets left out, then? What does transhumanism or augmentation look like for the rest of us? Does this really turn into a race of the super-extended and everyone else, or is it just another fantasy vs. moral panic scenario?

Nicholas: It’s interesting to think what we’re augmenting ourselves for. What system of values are we trying to build on? It’s useless thinking of how to keep up or win the race when the track follows a path laid out by someone else for their benefit.

What values are we trying to augment our way to maxing out? Do we want to be harder, faster, stronger, better? Why can’t we want to be more empathetic, more creative and receptive? Maybe we could augment ourselves to be more loving, more settled, more connected and content?

Unfortunately I can’t see many routes towards a world where we’re desperately fighting to be more empathetic and open to beauty, rather than smarter and faster and better at gauging risk as relates to capital. Mostly we want to survive in a world designed by people with a lot more power and resource.

There’s the debate as to whether kids today really have this explosion of ADHD because of something new in their environment, or because our UK/US schools are horrendously broken, and the reaction of twitching and attempting to escape, either literally or through games and naughtiness, is a pretty sensible one. More and more, people seem to have little option but to play the game: medicate their kids, hope they can adapt to the school system enough to get the skills and qualifications they need to survive. They augment as survival strategy.

So imagine we need to run just to keep up: we need to augment. It never feels like augmenting, like superhumanness, because we’re doing what we need to do to survive. Sure, I can stay awake four nights in a row now, but it’s just what my boss expects! I can think at twice the speed, but that’s not enough to get past middle management.

“What values are we trying to augment our way to maxing out? Do we want to be harder, faster, stronger, better? Why can’t we want to be more empathetic, more creative and receptive? Maybe we could augment ourselves to be more loving, more settled, more connected and content?”

Then you wonder what trades we might be willing to make to take short-cuts. If I need that augmentation to get that job, and I can’t pay up front, will I take one supported by adverts? We already make so many of these trades, why not have an advert-supported eyeball which lets us see the data trails we need to, but which rewrites graffiti into advertising slogans, until the walls crawl with cheery reviews and prompts to come inside for 10% off.

I heard a woman from the Singularity University speak recently about injecting liquid computers into our brains — computers made of DNA and other proteins folded into logic gates and fluid chips. She spoke about some of us being able to access these upgrades. Some. The hint of an uncrossable divide. But I don’t pay for maps, for email, for instant communication on (as of this minute) nine chat apps on my phone. Why would I pay to get a brain upgrade? I just don’t trust myself not to make terrible tradeoffs in the process.