End of Life as We Know It

In a world of synthetic minds with no bodies, bionic bodies with multiple minds, and genetically modified humanoids, what does a “we” stand for? Will the AI overlords care about us, or will we be just pesky humans to them?

To celebrate his honorary doctorate from Malmö University, Hampus Jakobsson gave a lecture titled “End of Life As We Know It.” He spoke about cultural shifts that might occur and about optimising for the right output, and asked a whole lot of relevant questions. Here’s my take on it:

(I use italics for my own thoughts to avoid confusion wherever necessary.)

In the beginning…

When writing became ubiquitous, the intellectual elite was scared. Surely our memories, and consequently our minds, would deteriorate, since we wouldn’t need to commit anything to memory anymore.

Maybe that has come to pass, but would you give up your ability to write? Would you give up books? Hell no, right?! No matter what detrimental impact they might be having on my mind, I am much too fond of all their sublime effects.

As “history may not repeat itself, but it does rhyme,” Hampus started by looking back: at the old inventions and how natural they seem now; at the new inventions and how quickly we have assimilated the superpowers they grant us: flying all over this blue planet, talking to anyone around the world, seeing places outside our solar system in real time. They seem so ordinary nowadays that we often forget how recently they cropped up.

He mentioned that 150 years ago, everyone found it preposterous that women should have the right to vote. Women themselves, by and large, did not believe they should vote! Today, it sounds insane that they shouldn’t.

In other words, we have been scared of new inventions before. Ideas that seem commonplace now once struck us as completely nuts. And we are extremely competent at this game of adaptation.

What lies ahead

It’s 2039. Anne is taking a course in applied mathematics at the VR University of Bangalore. She’s having a hard time conquering differential equations, so she asks Josh to give her a hand. What she doesn’t know — or maybe she does? — is that Josh is a bot, taking a course in empathy at that same uni. He’s there to crack one of the hardest nuts: human emotions. What’s different this time is that, as opposed to being her servant in 2020, now he’s her peer.

It’s 2047. Ageing is no longer a concern. You get a shot every 10 years and constantly look like a 25-year-old, your mind and body working at their best.

It’s 2054. Your boss is a synthetic mind on a silicon substrate, the philosophical faculty at the local university is run by a gamut of synthetic minds in a single biological body, the post office has been entirely taken over by biological minds inhabiting bionically enhanced bodies, and your kids — albeit naturally born — are genetically modified, both to alleviate your and your partner’s genetic shortcomings and to enhance your progeny.

(I love that Hampus brought some classics in, too, observing that Dante spoke about “bodies dissolved” in purgatory in his Divine Comedy. If there’s no body but the attention still exists, the mind must have been uploaded somewhere.)

It’s 2061. Everything’s been digitised. We are creating brand new organisms from scratch and rewriting the source code of the existing ones, copy-pasting alleles of DNA with 100% accuracy.

Mind you, these are arbitrary dates I have thrown in, but Hampus did bring up all these thought experiments and interspersed them with a motley of thought-provoking questions:

  • In 2039, should Josh, the sentient bot who helped Anne not flunk her math exam, have a right to vote?
  • When time is no longer of the essence, can you go into hibernation when your nightmare-of-a-political-candidate wins the election, and ask someone to wake you up when it’s over? Also, will we have a much tougher time rooting out old stereotypes once people have stopped dying?
  • What happens to the pronoun “we” in 2054 when a substantial number of members of our society don’t inhabit the physical world? Incidentally, do we consider them members of “our” society at all?
  • What happens to equality with access to genetic therapies and enhancements in 2061? If someone’s superior to me, do I have a right to have my genes improved? Or should their genes be made inferior? Btw, should everyone be genetically modified to be more empathetic?!

Not your Hollywood blockbuster

It’s not these versions of the future that movies inundate us with. Why is that? Most likely because they are too complex, not black & white enough, not completely apocalyptic, and some of them might even make us think.

Another outlook you don’t hear every day is the notion of a paperclip maximiser concocted by Nick Bostrom: Imagine a paperclip factory and a bunch of brilliant engineers. They have just created an AI whose goal is to maximise the number of paperclips produced and sold.

On the first day, it’s celebrations galore, as the AI has optimised the production line in an ingenious way, doubling the output. The next day, it’s an even bigger party because the AI has optimised the supply and distribution chains! On the third day, everyone’s dead. Wait, what? Well, your red blood cells contain iron, which is obviously a resource vital to the production of paperclips, and all the humans were kind of in the way of solving this paperclip riddle anyway.

This is not an evil, Terminator- or Matrix-style AI annihilating humans. Instead, it’s a cost function that we deliberately created, taken ad absurdum. As Hampus says, it’s much more likely that we’ll kill each other with AI than that AI will kill us of its own accord.

Optimise. But for what?

Capitalism is the epitome of a paperclip maximiser, chasing boundless exponential progress in a finite system. Despite all its drawbacks — such as climate change, depleting resources, and leaving tonnes of toxic waste all around — we have opted in and don’t seem to be opting out any time soon.

What do I optimise for in my life? In the short term, it seems to vary significantly. When I zoom out, the best answer I can come up with long-term is “balance.” That’s an extremely moving target. And it should be, right?

Hampus keeps questioning the direction of optimisation in a broad variety of contexts: What do you optimise for at work, within your family or community, in life? What should we optimise for as a society?

Since, at this point, any AI “simply” minimises a cost function, it’s essential to search for answers to all the permutations of this question if we want to build friendly technology with universal good in mind.
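The paperclip story boils down to a misspecified objective, and that can be sketched in a few lines of code. This is my own toy illustration, not from the lecture: all the names, quantities, and the iron-to-paperclip rate are made up.

```python
# Toy paperclip maximiser: the objective rewards only paperclip output,
# so the "AI" consumes every iron source it can reach, including ones
# we care about. All figures below are invented for illustration.

def paperclips(allocation):
    """Objective: total paperclips, at a made-up rate of 10 clips per kg of iron."""
    return 10 * sum(allocation.values())

def maximise(available):
    """A trivially greedy optimiser: use every last kilogram,
    because nothing in the cost function says otherwise."""
    return {source: kg for source, kg in available.items()}

iron_sources = {"ore_stockpile": 1000, "scrap_yard": 300, "human_blood": 2}
plan = maximise(iron_sources)

print(paperclips(plan))        # prints 13020 -- maximal output...
print("human_blood" in plan)   # prints True -- ...at an unacceptable cost
```

The point is not the (deliberately silly) optimiser, but that the objective never mentions what must not be consumed; whatever we leave out of the cost function, the optimiser is free to destroy.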

What about us

Maybe we puzzle out the optimisation riddle, in which case tech may solve all our issues. Or we might accidentally (or intentionally) create an evil AI that wipes out all of humanity in the blink of an eye.

Both of these scenarios seem unlikely. The reality might turn out to lie somewhere in between, like us creating an intellectually superior AI that sees us the way we view ants. Why care about these pesky humans unless they are crawling into my picnic basket?

Why the fear?

Our fear of what’s to come might stem from a simple wish for tomorrow to be pretty much like today. Or is it that we don’t want to lose our supremacy on Earth? Our self-declared supremacy as the most intelligent beings, with intelligence as defined by us. If, on the other hand, we fear for our existence, we should be asking whether there’s a moral imperative to save humanity.

What’s humanity, anyway? Here, Hampus referred to the Ship of Theseus paradox, i.e. the philosophical question of when an object ceases to be itself as its parts are slowly removed or exchanged. If you have an axe and you change its handle, is it still the same axe? If you use it for a while and then change its head, is it still the same axe?

If I use my phone for increasingly more tasks, and then upload a chunk of my childhood memories into the cloud to never lose them, and then get a bionic leg because, well, it’s so much better than the one I was born with, is it still me that’s writing this post? If so, am I still human?

Hampus’ take on this is that we will transition into the brand new world without ever noticing.

How do you live then, armed with all this knowledge?

Here’s Hampus’ recipe for a fruitful personal now, as well as the future:

  1. Protect your mind. Protect your mind like it’s your property. Protect your attention, too. This bears an uncanny resemblance to the final advice Yuval Harari gives in this brilliant interview on the future & tech: “Know thyself.” These days, it’s more relevant than ever because you have competition: that supercomputer pointed at your brain every time you open a browser.
  2. Seek knowledge. He contrasted this intent with seeking belonging or striving to win arguments. That is to say, belonging, just like happiness, is not something you aim for. It comes as a by-product of you focusing your energy in the right direction.
  3. Be in the moment. Don’t allow negative emotions to drive your actions.
  4. Don’t compare yourself to others. There’s no point.
  5. Be kind to your tomorrow self. Hampus talked about how washing the dishes in the evening might sound boring, but in the morning, Tomorrow Hampus is going to revel in the awesomeness of Yesterday Hampus, who has left the kitchen spotless. I love this.

Wait! So, what on Earth lies ahead?

Well, look at 1984. Sure, Orwell’s idea of every room in a household equipped with a screen that allows Big Brother to watch our every second seems almost amusing these days: tech has been scaled down, your smartphone having 2000x the power of the computer that landed Apollo on the moon, let alone anything the 1940s featured. But bulky cameras are a minor, pernicious implementation detail.

What the dystopian novel really failed to predict was the societal shift where we don’t seem to mind surveillance. We carry and charge our monitoring devices voluntarily, often feeling that our status is somehow connected to what particular model we have, plenty of us spending significant amounts of our income on them, some of us willing to queue for the latest version for hours. We voluntarily expose huge parts of our lives, giving up much more information than anyone could obtain by watching us.

This is the biggest discrepancy between Orwell’s disturbing picture of the future (past?) and our reality: his protagonists hate being watched, while we don’t care. Or at least, not enough to opt out.

I believe this is what Hampus means when he points out that cultural-societal shifts are way harder to predict than technological ones. He has an extremely valid point. We simply don’t know. But in my estimation, he’s marvellous at making educated guesses.





Tadeáš Peták

Building a tiny house when not coding. Huge fan of yoga, books, and the outdoors.
