Visions of Transhumanisms & Posthumanisms: Which Path Is Humanity On?

Angjelin Hila
Science and Philosophy
18 min read · Nov 17, 2020

Where is humanity headed? Will humanity lose its centrality in a posthuman world? Or will the very idea of being human be forfeited for a hybrid world of cohabiting intelligences?

Installation art by David Altmejd

Transmutation, Alchemy & the Homunculus

Transformation has preoccupied human cultures since at least the advent of language and belief systems some 200,000 years ago.

The Greek, Roman, Jewish, African, Near Eastern, and East Asian mythologies, among others, abound with beings betwixt human and beast. Fauns, satyrs, centaurs, minotaurs, sirens, sphinxes, Anubis, Horus, Thoth, the Golem, Ganesha, nāgas, and nure-onna are some of the better known in the rich inventory of mythological creatures spawned by world religions and cultures.

Moreover, the desire to transcend our physical limitations is inscribed in many religious doctrines. One of the chief promises of the ancient Chinese Taoist religion and philosophy was immortality. The ancient Greeks imagined their Gods as immortal and more powerful versions of ourselves. Angelic hierarchies in the Abrahamic religions formed part of the Great Chain of Being decreed by God.

The human imagination wells with prospects of transcending our physical limitations, boundary-cases like the above mentioned mythological creatures, and transmutations of basic substances into more desirable ones like gold or the elixir of immortality, as was the aim of the Hermetic and alchemical traditions in the West. Some strands of Western alchemy aimed to create homunculi or artificial humans, which parallels in many ways our contemporary fixation with artificial intelligence.

Toward the Demystification of the World

In the Western world, after the erosion of antiquity, the new overarching worldview of Christianity placed humanity at the centre of creation. It promulgated an identification of human essence with God, who forged us in his image. And essences, qua essences, are perennial.

As this centrality began to be undermined, first with heliocentrism (the Sun at the centre), and later with Darwin’s theory of evolution, our metaphysical binding to a perennial nature infused with logos gradually unraveled. We became unmoored from that ideological fossilization of our defining characteristics: dominion over other species, our ability to reason and forge tools.

The evolutionary science that flowered following Darwin, not without its hiccups and continued scientific controversies (e.g. social Darwinism, the concept of speciation, the blurriness between spandrels and adaptations), placed both reason and our happenstance escape from the shackles of mere nature, which still weigh on other species, back into the realm of the corporeal. The mystery of being was not resolved, but the contingencies of our place in the universe acquired a new context.

We are located on a small planet at just about the sweet spot (known as the habitable zone) from the mid-sized star that holds us in its orbit. The sheer profusion of life around us, the intricate ecosystem in which we find ourselves, has developed through none other than the same causal sequences we observe in inanimate nature, under the happenstance aegis of said star, which showers our lucky planet with just the right amount of energy. Except that with life the processes are so complex and involved that speciation absorbs the selective pressures of the environment through some form of reproduction, yielding the vast phylogenetic tree of life we've so painstakingly anthologized.

Of course, we cannot yet explain how life began in the first place (though we have some guesses, such as meteors carrying the primordial soup to propitious conditions on Earth), nor can we properly define the boundary between life and non-life. Categories that work so well in ordinary instances almost always begin to break down near the boundary, say between what constitutes an organic and an inorganic compound. Nor do we truly understand causality (nor, for that matter, how to define the physical), though I don't say this to invoke some God-of-the-gaps argument, or to find a wedge for the immaterial. I say it in utmost concession to the limits of rational analysis, which compels me, for whatever reason, to probe any idea or theory to the edge of understanding.

For one, the distinction between organic and inorganic compounds is a vestige of the doctrine of vitalism, which held that living organisms possess a "vital force" that accounts for their being alive and is otherwise absent in inanimate matter. Today, this distinction persists on purely conventional, if somewhat pragmatic, grounds. Organic compounds are defined simply as compounds containing carbon-hydrogen bonds, though we exclude certain compounds that arguably meet that condition, like carbanions and cyanide salts. Which is to say that, while the preponderance of organic compounds coincides with life, this preponderance has a purely functional or structural explanation.

Similarly, the distinction between life and non-life, historically understood as an insuperable chasm between vastly different classes of entities, has also blurred. We can offer necessary and sufficient conditions for what constitutes life, yet these conditions are bound to be partly ad hoc. Near the boundary, things get blurry. This hearkens back to the question: how did chemical compounds spontaneously assemble and snowball into life? The purported process by which organic compounds self-assembled and self-replicated into living organisms is called abiogenesis. A satisfactory explanation eludes present science, though candidate hypotheses like an RNA-world precursor to life remain popular and plausible.

Understanding this cleavage in precise terms would sew back the seams of the inanimate and the animate realms and make good our contemporary presumptions about the unity of science and the completeness of physical explanations. Our understanding of these boundaries, therefore, presents prospects for unleashing new and unprecedented powers of human engineering. We already see this in our contemporary moment and the discussion fomenting in the past two decades around transhumanism and posthumanism.

Installation art by David Altmejd

Transhuman or Posthuman?

The convergence of several independent strands of engineering advances could precipitate a transhuman future, a posthuman one, or a configuration accommodating both, depending on how strictly we define these terms.

But first, what is meant by transhuman and posthuman?

Transhumanism refers to the belief that humanity can evolve beyond its current mental and physical limitations by means of science and technology.

Posthumanism, on the other hand, comprises a varied set of interrelated theses, ranging from extending ethical concern outside the human realm to the view that competing intelligences, such as AI, will either subjugate humans or lead them to extinction. I will entertain the latter idea, namely the possibility that humanity, if the category retains its meaning in light of transhumanism, will eventually cede control of nature to another intelligent actor, the most obvious candidate being artificial general intelligence.

This state of affairs is certainly not impossible to imagine. In our contemporary moment, humanity exercises a significantly greater degree of control over resources, energy, and production than any preceding historical epoch. Our activity increasingly decides the fate of other species, surrounding ecologies, and the distribution of the earth’s resources. At the same time, our ability to exploit the environment for our benefit has converged with some degree of self-consciousness about our own activity: on the one hand, preservation of the environment directly correlates to our long-term survival, and on the other, our destructive effects on the environment (resulting in the erosion of ecosystems and habitats, the endangerment and extinction of many species, as well as environmental pollution and depletion of natural resources) might signal the violation of some intrinsic ethical boundary that we ought to, perhaps, steadfastly observe. Which is to say that, absent the effects on our own future survival, it might also be intrinsically desirable to preserve the variation and multifariousness of life around us.

Yet, conceivably, the tables could turn in this configuration and give way to a human creation that outmatches our capabilities for understanding the world, concerted collective action, and exertion of technological power. To understand this better, we can take as an analogy the relationship between humans and dogs or pets in general. Humans dominate dogs in the sense that dogs cannot one day decide to change the rules of the house or start ordering their masters around. The relationship may be mutually beneficial at some level, but there’s a clear boundary between master and servant/subordinate. Analogously, our relationship to artificial general intelligence could very well be similar to the relationship dogs have with us. Imagine the class divisions in today’s society exacerbated by several orders of magnitude, where humans essentially function to serve the needs of their AI masters, if they need us at all.

I’ve entertained some wildly diverging possibilities. On the one hand, we have our contemporaneous issues with potentially unbridled human engineering power, and on the other, the conceivability that this engineering power at its apogee will yield a state of affairs that will eclipse our centrality in the grand scheme.

But what are the concrete trends today that could precipitate either a transhuman or a posthuman future?

Synthetic Biology, Nanotechnology & Artificial Intelligence

Advances in genetic engineering in tandem with the burgeoning multidisciplinary field of synthetic biology present prospects for not just altering the blueprint of the ecosystem around us, but manipulating organic compounds to generate new biotic components and biomes.

Synthetic biology refers to “a multidisciplinary area of research that seeks to create new biological parts, devices, and systems, or to redesign systems that are already found in nature.” (Wikipedia).

In addition, multidisciplinary attempts to harness electrical and computer engineering to extend and enhance the human soma provide a parallel and potentially coevolutionary frontier in the spectrum of engineering: the synthesis of mechanical and electrical with biological engineering.

While the current field of biomechanical engineering seeks to apply the principles of mechanical engineering to biological systems, it is not unlikely that some composite engineering discipline will emerge that does not easily distinguish mechanical components from biological ones, given that the present scientific consensus makes no principled distinction between the two. The only principled distinction concerns their origins: biological structures are sculpted by natural selection (and extra-selective factors like genetic drift), whereas mechanical structures are forged by cultural selection and evolution.

At bottom, both of these domains of design consist of a vast multitude of systemic deployments of automata that fundamentally reduce to non-organic components, i.e. combinations of simpler physical elements.

In contrast to biological systems, electrical-mechanical systems are still comparatively coarse and simplistic in the number of processes and the resolution of scale in which those processes are manipulated.

Yet it is safe to assume that advancements in nanotechnology (the manipulation of matter at the atomic, molecular, and supramolecular scale for industrial purposes) will eventually match the modulatory subtlety of organic processes.

Because biological systems are survival machines, they instantiate certain large-scale variables in ways that mechanical systems at present cannot (or at least cannot as well). Among these, two that capture this difference are antifragility, a term coined by the economist Nassim Taleb, and replexity value, coined by the biologist George Church. Antifragility and replexity, while conceived in different contexts and deployed in separate domains of validity, are deeply interrelated.

Antifragility denotes the property of systems that increase in their ability to thrive as a result of stressors, shocks, volatility, noise, faults, attacks, and failures. Taleb employed the concept in economics, but paradigm cases of antifragility also include features of living systems like the immune system, or processes like hormesis, whereby an organism exhibits a biphasic dose response: low exposures to a substance stimulate, while high exposures inhibit.
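The biphasic shape of hormesis can be caricatured with a toy dose-response curve. The function and parameter names below are my own illustrative assumptions, not an empirical model: response rises with small doses and decays past a peak as the dose grows.

```python
import math

def hormetic_response(dose, stimulation=1.0, inhibition=0.5):
    """Toy biphasic (hormetic) dose-response curve.

    Low doses stimulate (the linear term dominates); high doses
    inhibit (the exponential decay dominates). The peak response
    occurs at dose = 1 / inhibition.
    """
    return stimulation * dose * math.exp(-inhibition * dose)

# A moderate dose elicits a stronger response than either
# no dose or a very large dose.
print(hormetic_response(2.0))   # peak of the curve for inhibition=0.5
print(hormetic_response(10.0))  # well past the peak: response collapses
```

The design point is only that a single smooth function can be non-monotonic in its stressor, which is the qualitative signature hormesis shares with antifragility.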

Replexity, short for replicated complexity, denotes the property of a complex system to replicate or self-generate. Replexity can be a property of a single organism, of populations, and/or of whole ecologies. The property's scope of validity is not confined to a particular unit of analysis; rather, it takes within its referential/extensional space any structures that are able to reproduce and/or self-replicate. Complexity, in this case, can be defined in terms of the degree of entropy in a system, which in turn can be formalized as a physical or informational property of cybernetic systems (an inclusive "or": each reading carries a different mathematical meaning).
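One simple way to make the informational reading concrete is Shannon entropy, which measures the bits per symbol needed to describe a discrete pattern (a nucleotide sequence, say). This is a minimal sketch under that assumption, not Church's own formalization; the function name is mine.

```python
import math
from collections import Counter

def shannon_entropy(sequence):
    """Shannon entropy in bits per symbol of a discrete sequence."""
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A perfectly repetitive pattern carries no information per symbol;
# a uniform four-letter alphabet carries the maximum two bits.
print(shannon_entropy("AAAAAAAA"))  # 0.0
print(shannon_entropy("ACGTACGT"))  # 2.0
```

On this reading, a system with high replexity value is one that reproduces a high-entropy pattern with fidelity, rather than merely being disordered.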

Both concepts imply a degree of identity preservation and rely on the presence of feedback mechanisms. There is, however, some ambiguity in Church's deployment of the concept. Church appears to mean by it any set of processes that replicate their identity conditions, i.e. a physical pattern, with sufficient fidelity, but does not specify the unit of replication. DNA of course copies itself, as do whole autocatalytic sets, but these are arguably dependent on an autopoietic unit: an organic unit, minimally a cell, whose internal dynamics reproduce its own boundary conditions.

This ambiguity is relevant to the point that feedback is endogenous to systems that exhibit both high replexity value and antifragility. Only autopoietic units or collectives composed of autopoietic units incorporate feedback and instantiate all the necessary and sufficient conditions for antifragility. Which is to say that they can preserve themselves against variable and potentially adverse environmental conditions.

A system cannot exhibit high replexity value without instantiating antifragility. This means that antifragility, a property of systems that incorporate feedback or maintain a relatively invariable internal pattern against external patterns, is necessary (though not sufficient, depending on how strictly we define the term) for replexity. We have clarified that replexity includes both metabolic processes and reproduction, the transference of genetic material to offspring.

These properties, if we confine the discussion for the moment to them, are indicative of the type of structure that instantiates them to begin with. By this I mean that living structures are products of a historicity of interactions that have, through a combination of chance and adaptation, forged certain toolkits conducive to self-preservation. The constraints of self-preservation are such that the toolkit must be sufficiently general. The generalizability of the toolkit, however, is nonetheless confined, and therefore specialized, to certain average environmental conditions. While general and specialized are relative terms, it is easy to see that living organisms are generalized when compared to technological designs.

Mere unicellular organisms like bacteria, for example, have adapted to pursue nutrients and avoid toxins in their environment. Contrast that with your smartphone, which has sensors like an accelerometer or gyroscope but no ability to react adaptively to its environment. Your smartphone's accelerometer can convert vibration or movement into a precise numerical quantity, something bacteria cannot do. But bacteria can extract energy from their environment on their own in order to survive and reproduce (many, like us, are heterotrophs, consuming organic matter for energy), while your smartphone requires a human to recharge its battery every time it dies. Not to mention that your smartphone has no internal solution to senescence, defined broadly to mean any entropic activity that could jeopardize its integrity: namely, the ability to reproduce, whether asexually by division or sexually, the two broad strategies found in nature.

In other words, the modulatory capabilities of survival machines necessitate a wide cast of parallel interface systems with their environment to conserve identity conditions. On the other hand, mechanical systems boast advantages that the wide cast of selective thresholds of biological systems does not permit: namely, a degree of specialization of operations with an efficiency and accuracy that vastly exceeds anything a biological system alone can muster.

While we have mapped the entire human genome, we have not yet mapped the mechanome, namely the mechanical environment of living cells or organisms beyond genes and chromosomes. Without saying anything about the gulf between sequencing a whole genome and actually understanding what the genome does, codifying the mechanome would amount to understanding the entire spectrum of mechanical interactions in an organism’s life-cycle by enlisting interdisciplinary insights from biomechanics and mechanobiology. The challenge in mapping the mechanome is that the mechanical evolution of an organism is highly contingent and adaptive, and, much like the genome is unique to every individual organism, the mechanome evolves in some respects idiosyncratically for every organism.

To understand this better, consider the idea of a minimal cell. Such a cell would be engineered from the bottom up with basic organic components. So far this has yet to be achieved, but something close to it has: the insertion of a wholly synthetic genome inside an emptied host cell with membrane and cytoplasmic components. The cell, known as Mycoplasma laboratorium, was successfully controlled by its synthetic genome and was able to replicate. This top-down approach, however, has limitations precisely because of mechanomic uncertainty: the molecular composition of the host cell is not fully understood. The bottom-up engineering of an entirely synthetic cell de novo would set synthetic biology on the path of convergence with mechanical engineering.

The impending convergence of the dual manipulation of the generality of living organisms and the specificity of mechanical systems could mean the obliteration of the functional trade-offs I outlined earlier.

Which brings us to this point: a complete understanding of the mechanomic complexity of biological systems, in combination with the computational power of nanotechnology and quantum computing, could yield human variants that optimally leverage the transformative capabilities of these engineering domains. Though such a future is far off, it looks rather probable, all else being equal (namely, that we avoid civilizational collapse one way or another).

But there are several variables and possibilities worth considering. One is the potential that humans and AI will diverge in their evolution, and the second is that they will coevolve. (I employ the term evolve broadly to include intentional design.) It seems unlikely that AI will not incorporate the discoveries and advantages of synthetic biology into its design, and at the same time that humans will not augment themselves to adopt some of the physical and cognitive benefits of AI and nanotechnology.

Where are we at present?

The discovery in 1953 of the structure of the chief mechanism of heritability, the DNA molecule, together with the attendant processes of protein synthesis, transcription, and translation, unlocked new engineering potential in the organic world. Until then, genetic modification was confined to selective breeding, such as favouring cows that produce more milk and wheat that yields more grain.

With an understanding of the fine-grained (pun intended) mechanisms involved in the heritability of traits came the prospect of interfering in the genetic codes of various organisms to produce genetically modified organisms (GMOs) for human benefit.

Until recently, the collection of technologies involved in genetic engineering relied greatly on trial and error, in addition to being slow and high-cost. One of these methods involves isolating genes with restriction enzymes and pasting them elsewhere in the genome with ligase enzymes, which bind DNA strands together to create recombinant DNA. This and other methods, however, relied on a low-resolution targeting technique known as homologous recombination, which involves exchanging homologous (similar) nucleotide sequences in the hope that the desired location is targeted.

More recently, the discovery of the highly accurate and efficient CRISPR-Cas9 system has also enabled the targeting of non-homologous DNA sequences. CRISPR-Cas9 is a bacterial antiviral defense system that can be repurposed for highly accurate editing: the Cas9 enzyme is deployed with a synthetic guide RNA that directs it to cleave the phosphodiester bonds of the DNA backbone at the target site.

The development of CRISPR-Cas9 and strides in synthetic biology outlined earlier are but one prong in a possible transhuman/posthuman future. The frontier of artificial intelligence forms the other prong.

A great deal of AI progress in the last five years has been in the adoption of machine learning capabilities within industry. Some of these include speech recognition, autonomous cars and systems, electronic discovery in law, drug creation and analysis/classification in health, predictive processing in finance, natural language processing in cybersecurity, and face recognition and threat detection in government and the military, to name just a few. While machine learning has gotten more sophisticated, hard AI, also known as artificial general intelligence, eludes the field. This is in part because we do not fully understand brain structures and functioning, nor how the brain gives rise to mental phenomena like consciousness. However, the prevailing consensus is that present obstacles to naturalizing mentality and consciousness are due not to intrinsic limitations in human intelligence but to a lack of understanding of the neural mechanisms that generate mental phenomena, and that these should be discoverable through a triangulation of methods both empirical and introspective.

The prospect of realizing AGI has prompted speculation about superintelligences and the AI control problem (famously by Nick Bostrom), the challenge of creating a superintelligence that will not harm humans. The idea that we will develop such superintelligences without first augmenting ourselves appears to me to be unlikely. If we achieve preliminary AGI, I would suspect that it would not be superintelligent yet. There are several ways to explore this. If we host the AGI in a body/soma akin to our own, then this race of beings could enter into a power struggle with us. If we host the AGI in a distributed way, wherein its capacity to manipulate physical systems is dependent on such a distributed physical system, then its potential to overpower us may be weaker. It could also be stronger if human capabilities to disarm the system do not outstrip the system’s ability to bring entire human collectives under its control. This latter alternative seems less likely.

At present the wielding of AI, primarily in the form of machine learning, is neither entirely beneficial nor detrimental to human collectives. This is because some of the reasons for the deployment of AI, like automation, mostly benefit industry while creating an atomized and dependent general populace. Nonetheless, its deployment is decidedly within human control: present AI remains an extension of human plans and goals. If AI advances to the point where it attains a degree of independence from human beings, then before that bifurcation takes place, or because of it, human beings will likely augment themselves with AI extensions. There's a possibility I alluded to earlier: once we learn to manipulate the germline with sufficient sophistication and conceive entirely new organisms from the bottom up via synthetic biology, we will, with the help of nanobiotechnology, endow our genomes and new organisms with superintelligence as well.

Prohibitions

In Regenesis: How Synthetic Biology Will Reinvent Nature and Ourselves, George Church argues that prohibitions on technologies have been historically counterproductive and ineffective. He notes, for example, that despite 79 nations coming together in 1972 to ban biological weapons in the Biological Weapons Convention, 25 years later more than twice as many countries had developed, or were developing, biological weapons as when the treaty was signed. Other prohibitions, like those on alcohol and drugs, reveal that bans not only fail to stop usage but are counterproductive, because they incentivize black markets and unintended health crises.

Despite the possible lack of staying power of legal prohibitions, legal regulation forms the chief method through which technological activity can be redirected or culled to prevent societal harm. One of the defining issues of our time is the potential for human germline editing. On the one hand, editing the human genome can eliminate diseases and abnormalities; on the other, it can drastically exacerbate inequalities by precipitating a race for the preferential selection of highly desirable traits. If historical examples are any indication, the more comfortable we get with these technological capabilities, and the more the consumer market makes genetic manipulation affordable for the layperson, the less stringent our prohibitions will get. Some have argued that variation in genetic preferences will always reassert itself one way or another, because once we homogenize the species with tall, smart, and good-looking people, the incentive for differentiation will increase.

I wish to introduce to the reader the idea of the deontic modalities, normative philosophical concepts that exhibit a 1-1 mapping to the alethic modalities. The alethic modalities concern descriptive modal statements, and are the following: necessity, possibility, and impossibility. Analogously, the deontic modalities concern what ought to be the case, and are the following (in adjectival form): obligatory, permissible, and prohibited. The landscape of human activity filters through their prism. Some things are obligatory, others permissible, and yet others prohibited. A key difference between the deontic modalities and the alethic ones is that the latter do not admit of exceptions, while the former, existing in the highly malleable realm of social reality, exhibit degrees of fluidity, because their separation relies on a spectrum of enforcement measures. Measures of enforcement span unwritten rules, informal negative sanctioning, and explicit rules enforceable by law.
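The 1-1 mapping between the two triads can be sketched in a toy Kripke-style semantics, where alethic operators quantify over accessible worlds and deontic operators over "ideal" worlds. The function names below are my own labels for illustration; the structural point is that both triads share the same box/diamond duality.

```python
# Each modality is evaluated against the truth values a proposition
# takes across a set of worlds: accessible worlds for the alethic
# reading, normatively ideal worlds for the deontic reading.

def necessary(truth_values):    # alethic box: true in every world
    return all(truth_values)

def possible(truth_values):     # alethic diamond: true in some world
    return any(truth_values)

def impossible(truth_values):   # true in no world
    return not any(truth_values)

def obligatory(truth_values):   # deontic box: true in every ideal world
    return all(truth_values)

def permissible(truth_values):  # deontic diamond: true in some ideal world
    return any(truth_values)

def prohibited(truth_values):   # true in no ideal world
    return not any(truth_values)

# The shared duality: box(p) holds iff diamond(not p) fails,
# under either the alethic or the deontic reading.
worlds = [True, True, False]
negated = [not w for w in worlds]
assert necessary(worlds) == (not possible(negated))
assert obligatory(worlds) == (not permissible(negated))
```

The difference the text draws, that deontic boundaries are fluid while alethic ones are not, would show up here as which worlds count as "ideal" shifting with enforcement practices, while the space of accessible worlds stays fixed.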

The overall shape of the present deontic landscape is heavily indebted to our deep evolutionary inheritance. Evolution by natural selection, observed from a wide lens, reveals itself to be a highly conservative process. Part of the reason for this is logistical: the space of what works is vanishingly small compared to the space of what doesn't against the conditions of nature. There's a potentially plausible sociobiological story here: namely, that our collective fears and prohibitions reflect, in part, this evolutionary conservatism. Of course, we must correct for the variation exhibited across cultures in the distribution of levels of permissibility. The claim is that there's sufficient overlap in cultural prohibitions to lend warrant, among other forms of evidence, to a biological explanation. Prohibitions against germline editing and the creation of the homunculus are rooted not entirely in superstition but in fears of the mis-wielding of human power. In Chaucer's Canterbury Tales, in The Miller's Prologue and Tale, the Miller famously admonishes that "Men sholde nat knowe of Goddes pryvetee."

And yet the more or less unabated scientific progress since the Enlightenment has reversed that attitude toward nature. The trend, at least since then, has been to leave no stone unturned in our attempt to index and probe nature's secrets. With this probing has come, in strong correlation, the wielding of technology. If we develop the capabilities, we will create the homunculus (a euphemism for AGI), and we will bring the blueprint of biology and of greater nature, including star systems, within our control. The deontic modalities will evolve alongside these capabilities. The guiding constraint, as I see it at least on paper, will be to cause as little harm as possible along the way. But this constraint too will be overstepped, just as we see in our contemporary moment, in the hope that eventually we will retroactively reconfigure the system to be more felicitous. At present, we are at some new transition stage that is undoubtedly causing, to say nothing of the greater ecosystem, a great deal of human suffering.

The passage to a trans-/posthuman future will likely be messy and complicated. What remains to be seen is whether our engineering capabilities will reach levels where we give up the body as it is biologically configured. Given that our present form has been hewn through millions of years of evolution, it's unlikely we will do so anytime soon, despite Ray Kurzweil's law of accelerating returns. But will synthetic biology and nanobiotechnology yield bodies that are orders of magnitude more powerful, both in their ability to avoid disease and repair themselves and in their capacity for information processing? Undoubtedly yes.

Installation art by David Altmejd



PhD Student. BA, MI, University of Toronto, focus on data analytics. Passionate about computer science, physics, philosophy, and visual arts. angjelinhila.com