End of Life as We Know It

Tadeáš Peták
Oct 19, 2018 · 8 min read

In a world of synthetic minds with no bodies, bionic bodies with multiple minds, and genetically modified humanoids, what does “we” stand for? Will the AI overlords care about us, or will we just be pesky humans to them?


To celebrate his honorary doctorate from Malmö University, Hampus Jakobsson gave a lecture titled “End of Life As We Know It.” He spoke about cultural shifts that might occur and about optimising for the right output, and he asked a whole lot of relevant questions. Here’s my take on it:

(I use italics for my own thoughts to avoid confusion wherever necessary.)

In the beginning…

Maybe new inventions do take something from us, but would you give up your ability to write? Would you give up books? Hell no, right?! No matter what detrimental impact they might be having on my mind, I am much too fond of all their sublime effects.

As “history may not repeat itself, but it does rhyme,” Hampus started by looking back: at old inventions and how natural they seem now; at new inventions and how quickly we have assimilated the superpowers they grant us, flying all over this blue planet, talking to anyone around the world, seeing places outside our solar system in real time. They seem so ordinary nowadays that we often forget how recently they cropped up.

He mentioned that 150 years ago, most people found it preposterous that women should have the right to vote; many women themselves did not believe they should. Today, it sounds insane that they shouldn’t.

In other words, we have been scared of new inventions before. Ideas that seem commonplace now once struck us as completely nuts. And we are extremely good at this game of adaptation.

What lies ahead

It’s 2047. Ageing is no longer a concern. You get a shot every 10 years, constantly looking like a 25-year-old, your mind and body working at their best.

It’s 2054. Your boss is a synthetic mind on a silicon substrate, the philosophy faculty at the local university is run by a gamut of synthetic minds sharing a single biological body, the post office has been entirely taken over by biological minds inhabiting bionically enhanced bodies, and your kids, albeit naturally born, are genetically modified, both to compensate for your and your partner’s genetic shortcomings and to enhance your progeny.

(I love that Hampus brought in some classics, too, observing that Dante spoke about “bodies dissolved” in Purgatory in his Divine Comedy. If there’s no body but the attention still exists, the mind must have been uploaded somewhere.)

It’s 2061. Everything has been digitised. We are creating brand new organisms from scratch and rewriting the source code of existing ones, copy-pasting stretches of DNA with 100% accuracy.

Mind you, these are arbitrary dates I have thrown in, but Hampus did bring up all of these thought experiments and interspersed them with a medley of thought-provoking questions:

  • In 2039, should Josh, the sentient bot who helped Anne not flunk her math exam, have a right to vote?
  • When time is no longer of the essence, can you go into hibernation when your nightmare-of-a-political-candidate wins the election and ask someone to wake you up when it’s over? Also, will we have a much tougher time rooting out old stereotypes once people have stopped dying?
  • What happens to the pronoun “we” in 2054 when a substantial number of members of our society don’t inhabit the physical world? Incidentally, do we consider them members of “our” society at all?
  • What happens to equality once genetic therapies and enhancements are available in 2061? If someone’s superior to me, do I have a right to have my genes improved? Or should their genes be made inferior? And should everyone be genetically modified to be more empathetic?!

Not your Hollywood blockbuster

Another outlook you don’t hear every day is the paperclip maximiser, a thought experiment concocted by Nick Bostrom: imagine a paperclip factory and a bunch of brilliant engineers. They have just created an AI whose goal is to maximise the number of paperclips produced and sold.

On the first day, it’s celebrations galore, as the AI has optimised the production line in an ingenious way, doubling the output. The next day, it’s an even bigger party because the AI has optimised the supply and distribution chains! On the third day, everyone’s dead. Wait, what? Well, your red blood cells contain iron, which is obviously a resource vital to paperclip production, and all the humans were kind of in the way of solving this paperclip riddle anyway.

This is not an evil, Terminator- or Matrix-style AI annihilating humans. Instead, it’s a cost function we deliberately created, taken ad absurdum. As Hampus says, it’s much more likely that we’ll kill each other with AI than that AI will kill us of its own accord.
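To make the “cost function taken ad absurdum” point concrete, here is a deliberately silly, minimal sketch in Python. The plans and numbers are entirely made up; all it demonstrates is that an optimiser scoring plans purely by paperclip output will always pick the most destructive plan, because nothing in its objective ever mentions side effects.

```python
# Toy illustration of a misspecified objective (hypothetical plans and numbers).
# The "cost function" only counts paperclips; nothing penalises side effects.

plans = [
    {"name": "tune the production line", "paperclips": 2_000, "harm": 0},
    {"name": "optimise supply chains", "paperclips": 4_000, "harm": 0},
    {"name": "harvest all iron on Earth, wherever it is", "paperclips": 10_000_000, "harm": 1_000_000},
]

def score(plan):
    # What we told the AI to optimise: paperclips, and only paperclips.
    return plan["paperclips"]

best = max(plans, key=score)
print(best["name"])  # -> the catastrophic plan, because harm never enters the score

def safer_score(plan, harm_weight=1_000):
    # One (still naive) fix: make the objective mention what we actually care about.
    return plan["paperclips"] - harm_weight * plan["harm"]

print(max(plans, key=safer_score)["name"])  # -> "optimise supply chains"
```

The point is not the numbers; it is that the optimiser is only ever as friendly as the function we hand it, which is exactly why the next question matters.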

Optimise. But for what?

What do I optimise for in my life? In the short term, it seems to vary significantly. When I zoom out, the best long-term answer I can come up with is “balance.” That’s a constantly moving target. And it should be, right?

Hampus keeps questioning the direction of optimisation in a broad variety of contexts: What do you optimise for at work, within your family or community, in life? What should we optimise for as a society?

Since at this point, any AI “simply” minimises a cost function, it’s essential to search for answers to all the permutations of this question if we want to build friendly technology with universal good in mind.

What about us

Both of these scenarios, the malevolent Hollywood AI and the runaway paperclip maximiser, seem unlikely. The reality might turn out to lie somewhere in between, like us creating an intellectually superior AI that sees us the way we view ants. Why care about these pesky humans unless they are crawling into my picnic basket?

Why the fear?

What’s humanity, anyway? Here, Hampus referred to the Ship of Theseus paradox, i.e. the philosophical question of when an object ceases to be itself as its parts are slowly removed or exchanged. If you have an axe and you change its handle, is it still the same axe? If you use it for a while and then change its head too, is it still the same axe?

If I use my phone for more and more tasks, then upload a chunk of my childhood memories to the cloud so I never lose them, and then get a bionic leg because, well, it’s so much better than the one I was born with, is it still me that’s writing this post? If so, am I still human?

Hampus’ take on this is that we will transition into the brand new world without ever noticing.

How do you live then, armed with all this knowledge?

  1. Protect your mind. Protect your mind like it’s your property. Protect your attention, too. This bears an uncanny resemblance to the final advice Yuval Harari gives in this brilliant interview on the future and tech: “Know thyself.” These days, it’s more relevant than ever, because you have competition: the supercomputer pointed at your brain every time you open a browser.
  2. Seek knowledge. He contrasted this intent with seeking belonging or striving to win arguments. That is to say, belonging, just like happiness, is not something you aim for; it comes as a by-product of focusing your energy in the right direction.
  3. Be in the moment. Don’t allow negative emotions to drive your actions.
  4. Don’t compare yourself to others. There’s no point.
  5. Be kind to your tomorrow self. Hampus talked about how washing the dishes in the evening might sound boring, but in the morning, Tomorrow Hampus is going to revel in the awesomeness of Yesterday Hampus, who left the kitchen spotless. I love this.

Wait! So, what on Earth lies ahead?

What the dystopian novel really failed to predict was the societal shift in which we don’t seem to mind surveillance. We carry and charge our monitoring devices voluntarily, often feeling that our status is somehow tied to the particular model we own; plenty of us spend significant portions of our income on them, and some of us are willing to queue for hours for the latest version. We voluntarily expose huge parts of our lives, giving up far more information than anyone could obtain by watching us.

This is the biggest discrepancy between Orwell’s disturbing picture of the future (past?) and our reality: his protagonists hate being watched, while we don’t care. Or at least not enough to opt out.

I believe this is what Hampus means when he points out that cultural-societal shifts are way harder to predict than technological ones. He has an extremely valid point. We simply don’t know. But in my estimation, he’s marvellous at making educated guesses.
