A Dying Man Told Us the Future of Artificial Intelligence

Caspar Mahoney
Published in ILLUMINATION · 11 min read · Mar 28, 2023
Photo by Maxime Lecomte: https://www.pexels.com/photo/cyborg-with-blue-lights-13471114/

In the cold March days of 1974, an aggressive colon cancer had nearly completed its work upon Ernest Becker, siphoning off the last heroic intent from his mortal coil. Yet whilst Becker died that year, he had already escaped death’s clutches by creating one of the greatest works of the 20th century.

This book was called “The Denial of Death”, and it offers us profound insight into the future of man and his creations, especially, as we shall read, Artificial Intelligence.

Becker was a thickly mustached man of Jewish descent. As a professor, he taught anthropology at Upstate Medical College in Syracuse, New York, then at Berkeley (California), and later at Simon Fraser University in British Columbia. Yet, perhaps unlike many academics, he had fought in the US Army and helped liberate a Nazi concentration camp in the dying days of WWII.

To understand what was extraordinary about the intellect of the man who survived WWII, you can hear the words of Becker himself, from this interview recorded at his hospital bed in 1974.

Before I continue, there is one specific piece I would like you to hear, which is central to the theme of the rest of this article:

Audio clip of Becker on power and character.

As you can hear in that recording, Becker’s concept was that humans have a fundamental schism at play around power: that of our godlike powers and our creatureliness (vulnerable, powerless), and that we use a form of ‘plugging in’ to manage the fear this creates. To plug in is to gain power through something other than oneself, be that thing a person or a social/cultural mission.

Although it is a simplification, we can think of the schism as mind and body.

The mind is our human side which is godlike, and rebels against the creatureliness of our body.

Why does this schism play out so vitally? Because you have a brain that can imagine incredible new possibilities, can conceptualize far into the future, can connect abstract thoughts and patterns. Yet you also shit, have sex, and turn to dust and dirt — just like any other animal.

The burden of knowing we are mere animals in dying bodies, yet can conceive of things, lives, worlds and distant futures, is the root of our psychology and of our dysfunction.

The first five years of life are understood to be exceptionally difficult. Infants are in restless conflict, seeking a reconciliation between the two sides. If the reconciliation fails, it becomes the foundation of neuroses in that person. Schizophrenia, which Becker characterized as a confusion of personal identity, is a classic example.

Whilst reconciliation can mean that neuroses are less evident, we should note that Becker believed all humans have neuroses. All are ‘mentally unwell’; society itself is therefore ‘mentally unwell’.

For this article, allow that some types of neuroses have come to be labelled, and are more relatable, than others; this is a cultural phenomenon, not a judgement on the author’s part.

To put it another way, a schizophrenic is mentally ill to outside observers, but a schizophrenic would probably also believe those around them are sick.

Turning back to reconciliation: this typically takes place through what Becker and the psychology community call ‘transference’. Transference is to ‘plug in’ to, or hang off, another’s godlike position.

In many families, this transference target is the father, mother or a replacement figure (e.g. an adoptive parent or close relative). This leads into Freudian analysis of parental-based neuroses, but it also explains why solid parental foundations are understood to be so critical to psychological stability and ‘resilience’, insofar as the outer societal culture can observe them.

Becker shows that we all carry the burden of the body-mind duality, which ultimately manifests in a particular shape: fear of death.

Fear of death

Photo by JF Martin on Unsplash

As we age, the transference to parents, or parent figures, is ultimately crushed by the reality that they too are mortal, not godlike. This comes about as they visibly age or die, just like any other animal.

In many people, the passing of a parent triggers a profound watershed moment. Nervous breakdowns, panic attacks and complete identity upheavals are all common around this event.

We ourselves are reminded, too, by the mirror, by our growing list of injuries and illnesses, and by age-related events (e.g. menopause).

The knowledge that this finitude exists in our lives and theirs is what then drives us to another form of godlike outlet: the pursuit of heroism through our ‘purpose’.

What purpose compels an Edison, or a Michelangelo, is the same as what compels you or me. It is to leave some kind of positive legacy by which we may be remembered, through which our identity becomes immortal.

Becker gives a wonderful example of how this was evident in the great psychologist himself, Freud.

Sigmund Freud — Wikimedia Commons

Freud’s disciple was Carl Jung, some 19 years his junior. Freud intended that his legacy would continue through Jung. In essence, Jung was a vehicle, an acolyte who would carry the flame of Freudian psychology beyond the grave.

One day Freud and Jung were debating, and Jung forcefully disagreed with him on a vital part of Freud’s theory.

Freud became animated.

He then fainted.

Why would Freud faint?

He did so because the very vessel of his life’s work, the thing that would carry his identity beyond his death, was staring him in the face and disagreeing with his views. This was equivalent to death: death of everything Freud represented, death of his purpose, and the passing of his identity into historic obscurity.

The relationship of Becker to the future of AI

Photo by Drew Beamer on Unsplash

If you use the Becker paradigm of our pursuits and passions being connected to the heroic attainment of immortality, then two things become clear:

  1. Humans will pursue technological development to be renowned for how they evolved it; i.e. people want to work on AI or similar technologies because they want to be remembered (indirect immortality) for pushing forward momentous, world-changing things, à la Steve Jobs, Edison and so on.
  2. Humans will pursue technological development because they wish it to directly enable their immortality, not just through being ‘remembered’ for their contribution, but through technical means prolonging their conscious/mental and/or physical life indefinitely.

Point 1 goes almost without saying, and applies to any technological advance or mission individuals consider worthy. A worthy mission is one whose legacy of cultural value lives beyond your finite body.

It is a stereotype that giant leaps in technological capability come about through tortuous hours of thought and ideation, by people passionately pursuing their life’s best work. Becker’s view was that they do so in the hope of leaving a memory of their heroism that transcends their physical lifetime.

You might feel that there is a great difference between an Edison and an average man in the above sense. Whilst this is objectively true, from their own, subjective viewpoints, both people are on their own journey to attaining immortality, just from different approaches and assumptions about how that is obtained.

For some, the ‘hanging off’ or ‘plugging in’ to an immortal thing is achieved by contribution to a national culture (nationalism/patriotism); for others, by their religious or spiritual identity (note how many religions centre on reincarnation, to which we will return later); and for others still, by attachment to a tribe or cult whose doctrine and objectives strive to bring immortality to the group’s achievements; think cult-of-personality movements, factions, pressure groups and so on.

However, we should reflect that as we age, the knowledge that we have not obtained immortality through our memory, or risk not obtaining it, weighs heavily on us, and brings many people to watershed moments in the form of regret as they approach death.

Famously, of course, the views of people on their deathbeds are consistent. The most commonly recorded regret, per palliative care nurse Bronnie Ware, is:

I wish I’d had the courage to live a life true to myself, not the life others expected of me

Point 2 (directly enabling immortality) is the more interesting, because it takes us to a place where AI is not a single machine separated from ourselves: if we want to prolong our own conscious life, in one physical vessel or another, then we neither need nor want a separate artificial machine.

AI science fiction and research from the past 60-plus years have got us hooked on the concept of a single, terrifying AI entity commanding individual units like arms.

This is HAL from 2001: A Space Odyssey, The Terminator, or The Matrix.

Indeed, despite the brilliance of Life 3.0 by Max Tegmark, or Superintelligence by Nick Bostrom, there is an assumption that seems central to the debate about AI ethics and its future impact on humanity: the assumption of separateness.

Whilst I believe those debates are valid, they are focused on an outcome that is still, despite ChatGPT and similar advances, fundamentally impossible by current or foreseeable means of AI. Artificial General Intelligence (AGI), meaning human-like intelligence, is, per our existing understanding of consciousness, not attainable.

The pursuit of AI advance is indeed fast, but AGI is a different beast: it is not just pattern matching, learning or computation. Whilst those vectors may progress, the AGI destination can still elude us.

There are critical vectors on which we are not advancing; and for those on which we are, what has maths shown us about exponential growth in every previous domain? That it plateaus and trails off. Even the most strongly predictable growth trends have shown signs of this, for instance the famous Moore’s law of microprocessor advance, which is slowing and may become obsolete.
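To make the plateau point concrete: growth that looks exponential early on is often better modelled by a logistic (S-shaped) curve, which saturates at some carrying capacity. Here is a minimal Python sketch; the rate `r` and capacity `K` are hypothetical parameters chosen purely for illustration, not measurements of any real trend:

```python
import math

def exponential(t, r=0.5):
    """Pure exponential growth: keeps doubling forever."""
    return math.exp(r * t)

def logistic(t, r=0.5, K=1000.0):
    """Logistic growth: looks exponential early, then plateaus at capacity K."""
    return K / (1.0 + (K - 1.0) * math.exp(-r * t))

# Early on, the two curves are nearly indistinguishable...
print(exponential(2), logistic(2))   # both ~2.7
# ...but over a longer horizon they diverge completely:
print(exponential(40))               # astronomically large (~4.9e8)
print(logistic(40))                  # close to 1000, the plateau
```

The point is that from inside the early phase of the curve, you cannot tell which regime you are in; only the eventual flattening reveals it, which is exactly what the Moore’s law example suggests.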

But let me step aside from that debate, because there are much more clear-cut, medium-term possibilities from AI that do not require AGI and yet present enormous societal, ethical and human-existence threats.

Moreover, I argue people are more likely to pursue those variants of development because they show greater near-term potential to transform the lives of the pursuers.

Consider: if I have $1,000,000 or 10,000 hours to invest, do I invest it in something which is purely about cognitive power in a machine, or do I invest it in the potential to prolong life substantially or even indefinitely?

Doubtless, investment will happen in both, and is already doing so; but averaged out over time, you can predict more investment in the latter, because it is more likely to address the profound desire inherent in us: to resolve the conflict with the body’s finite, ‘creaturely’ nature.

Indeed, you can also safely assume that institutions, governments, medical education and scientific research funds will do the same, meaning that, on average, humanity will invest much more in life-extending and life-augmenting technologies than in separate, AGI-like capabilities.

If you wish to find a little evidence of some of the kinds of investments happening, there are many, but here is a small tip of the iceberg.

In short, this then takes us to the cyborg model of technology advance.

Why the Cyborg is the real issue

Photo by Possessed Photography on Unsplash

There are some central tenets/assumptions behind my position that the cyborg ought to be the focal point of debate, based on Becker’s concept of the death fear:

  1. Using current AI technology, with existing means of ML and supporting techniques, we are only a limited number of steps away from augmenting the mind directly with perceptibly real-time data transfer to the brain. (Today, data reaches the brain via laggy, externalized devices such as a mobile phone or VR headset.)
  2. We are already enabling limbs and other bodily extensions which are robotic and digital but controlled by a human mind. These will continue to advance significantly in the coming years.
  3. Advances in biotech also mean we are a limited number of steps from being able to replace dying organs with robotic or artificially generated yet biological, healthy replacements.
  4. This indicates a limited number of years before the human brain, and human life, can be extended far beyond our current lifetime, and may in fact become effectively near-limitless.
  5. The above developments then allow our godlike mind to persist whilst the creaturely body deteriorates and is replaced, entirely or piecemeal.
  6. None of this requires AGI.

Let me recap what this all means: we can see our way to a point in time where, without AGI, we will be able to augment or replace the living body such that the mind is able to continue far beyond our existing lifetimes.

This is fundamental, as it ‘resolves’ the primary duality in human existence, and the base of our neuroses. Becker’s Gordian knot is cut.

In turn, this eventuality causes a new set of ethical questions very different to those posed by AGI.

Near-term cyborg ethics

DALL-E — Cyborg considering earth
  1. Without death, how will the human psyche be altered? What purposes will we pursue, if not to achieve immortality through purpose (Becker’s explanation for the current human psyche)?
  2. If extended lifetimes are commonplace, what acts as the brake on over-population? Is it managed through interventionist government policy, or through natural social, demographic and cultural movements?
  3. If the identity is not tied to body, then what persists identity? Today this debate circulates around gender and race identification prominently, but if the body/physical form are not relevant, then how is the identity persisted and known? For instance, consider nationality, citizenship and law which depend on an identifiable consistent form (the body/face/fingerprints/retina/DNA).
  4. Is a criminal act performed by the mind or the body? Which form ought to be subject to punitive measures? If the brain/mind can be moved, then how is that tracked such that the responsible mind is penalized by law?
  5. If we can live 50–100 years more than we do currently, or perhaps longer/indefinitely, how does that impact property law and taxation? If people can live longer, then the opportunity to accumulate and consolidate wealth is proportionately greater. What is the impact on inheritance or economics such as the liquidity of money?
  6. In any field of skill (be it competitive sport or professional careers) is the playing field ensured to be even, or allowed to be uneven in terms of augmentation? Today in major sports events testing is done to ensure no performance enhancing drugs have been taken. Will a similar test framework exist to ensure augmentation is not being improperly used for advantage? Or will an augmentation-only path be opened, to allow competition between augmented individuals where the performance bar is higher?
  7. If our body or mind is benefiting from augmentation, is this publicly known or private? If I am a politician running for office and have had augmentation (perhaps to extend my life by 20 or more years), is there an obligation to disclose it into the public domain? And if it is discovered, having been kept secret, is that an offence, akin to concealed drug use?

There are many more questions; this is really just a teaser to fuel further debate.

Summary

Typically, revolutionary advances in technology are considered deeply from an economic perspective, and people forecast technological advance based on patterns of innovation from the past (e.g. Moore’s law).

However, these interpretations often seem to be missing something fundamental: human psychology.

Human psychology is the primary driver behind the advances. Yes, economics is vital, but the driver behind economics is — you guessed it — human psychology.

So what we’ve seen above is a psychology-based interpretation of where the advances will go.

I’d love to hear your take on what psychological understandings might also cause future change.


Caspar Mahoney

Product leader, Technology strategist, Management Coach and Writer