Forget About Coding, The Job Of The Future Is Philosophy

Artificial Intelligence will bring four major problems in the near future. Only philosophy can come to the rescue.

Luca Rossi
Jun 10

If you are worried that your children won’t find a stable job in the future, you may want to suggest that they become philosophers.

Yes, the least practical profession ever is destined to find a role in the job market for the first time in history. And it will even be the most important one.

Photo by Juan Rumimpunu on Unsplash

The Last Profession On Earth

The rise of AI puts your profession at risk too. Or, as I said in a previous story:

Your profession has a deadline

You may think that your profession is safe because it’s very creative or requires a substantial amount of general intelligence.

Maybe you are a teacher. Or a scientist. Or a coder. So you must be safe, right?

Think again.

First, it was the turn of manual jobs. Robots became incredibly efficient and effective at tasks that formerly required at least the aid of human operators. Now machines can spot defects on their own, and the role of humans is mostly redundant.

This led every generation to become more educated and specialized than the previous one. While robots and AI stole our manual jobs, they opened up new possibilities for more creative professions.

Since the First Industrial Revolution, Luddism has been a thing. Ned Ludd was afraid of knitting machines because he thought they would mean the end of human jobs. So he started smashing them in 1779.

He never managed to destroy all the machines, and his prophecy turned out to be wrong anyway: people still found jobs.

Since then, every major invention caused new Luddite insurrections. But each time some jobs were eliminated, new ones were created. History teaches us that we shouldn’t worry about machines stealing our jobs.

Or should we?

I have two contrasting opinions about Neo-Luddism:

  • I think that it’s mostly a dumb idea. Yes, the prophecy of a jobless humanity has never come true so far, but eventually it will. Still, we must approach this problem by rethinking the role of work in our lives, not by destroying machines. Machines doing our jobs make our lives objectively easier. We shouldn’t need a job to give purpose to our lives; we should use our extra time to enjoy life and realize our dreams.
  • On the other hand, I’m worried about what a perfect utopia could mean for our happiness. If everything worthwhile is done by machines, and there are no real challenges left, is life even worth living?

This is a first hint about the importance of philosophy in the next decades. We will be facing major questions like the one you just read: how could we live a life without real challenges? I will come back to that later in this story.

AI is getting better than humans at more and more professions. We are already seeing cases in which AI beats doctors at making diagnoses, and in some cases even at predicting illnesses before they develop.

Everyone’s job is at stake:

  • Doctors
  • Journalists
  • Politicians
  • Judges
  • Teachers
  • Financial brokers
  • Detectives
  • Scientists
  • Entrepreneurs
  • Just about anyone I haven’t mentioned
  • Yes, even artists

Look at the following painting:

Photo by christies.com

This artwork was sold at an auction for $432,500. While I find the sum ridiculous anyway, the extraordinary thing is that the artist is a Generative Adversarial Network (GAN).

GANs are beautiful: their math is so simple, yet they are so powerful. I work with them myself. GANs are a type of what I call “creative AI”, for an obvious reason: they learn the patterns in existing data and generate new data that reflects those patterns.

We often think of creativity as something magical, something ethereal, something unexplainable and unpredictable, but it’s not. Creativity is just finding patterns and creating new data within those patterns. This is exactly what a GAN does.
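To make the “simple math” claim a bit more concrete, here is a minimal sketch of a GAN training loop in PyTorch. Everything in it is a placeholder I made up for illustration (toy 2-D data, tiny networks, arbitrary hyperparameters), not the setup behind any real artwork:

```python
# A generator learns to produce 2-D points that look like the "real" data,
# while a discriminator learns to tell real points from generated ones.
import math
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n=64):
    # "Real" data: noisy points on a circle (a stand-in for any dataset).
    angles = torch.rand(n, 1) * 2 * math.pi
    return torch.cat([angles.cos(), angles.sin()], dim=1) + 0.05 * torch.randn(n, 2)

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))  # noise -> fake point
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))  # point -> real/fake logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    # 1) Train the discriminator to separate real points from fakes.
    real = real_batch()
    fake = G(torch.randn(64, 8)).detach()
    loss_d = loss_fn(D(real), torch.ones(64, 1)) + loss_fn(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator.
    fake = G(torch.randn(64, 8))
    loss_g = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

# After training, the generator produces new points that follow the circle
# pattern without copying any single training example.
print(G(torch.randn(5, 8)))
```

The two networks improve together: the discriminator gets better at spotting fakes, which forces the generator to produce data that follows the real patterns more closely. That pattern-learning loop is all the “creativity” described above requires.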

GANs have been used to create all kinds of art, including music. When people listened to that music, they liked it and couldn’t tell it was made by an AI. But when they were told the truth, they often changed their minds. They said the music lacked soul and passion, even though a moment earlier they claimed to like it. What the hell? Did you just change your mind because you discovered the artist wasn’t human? And what do you even mean by soul and passion?

GANs can also be used to replace actors. This video of Robert Downey Jr and Tom Holland “starring” in Back to the Future always amazes (and kinda scares) me.

I think this is enough to convince you that any profession can be replaced, no matter how complicated it is. Most scientists and researchers (myself included) agree that eventually an Artificial General Intelligence (AGI) will be created. An AGI is an AI that, like humans, has a broad intelligence generalizable to any problem, not specific to a single domain (like diagnoses, paintings, music, etc.). We will come back to that later in this story.

There is just one profession that AI can’t possibly replace: philosophy.

Philosophy In Practice: Four Urgent Problems

The only way AI can help philosophers is by finding contradictions in theories. But that is different from finding answers. If you believe that killing animals is morally wrong but eating animals is a fundamental human right, your philosophy is contradictory. It’s not wrong because killing animals is objectively right or wrong, but because you are holding two opposite views at the same time.

But if your views are consistent, for example if you are either a vegan or a non-hypocritical carnivore, no one can tell you whether you are right or wrong. Not even an AI. Simply because there are no universal rules on what’s right and wrong.

Of course, this was an over-simplification, but philosophy is full of more nuanced contradictions that are difficult to detect.

In the near future, we will urgently need answers to philosophical questions. Some of them are recent, others have been around for millennia. We can group them into four broad categories.

Philosophy Problem #1: The Morality Line

Photo by Justin Luebke on Unsplash

Philosophy has already been put into practice to solve some ethical and moral problems caused by AI. One example is self-driving cars.

Self-driving cars are wonderful. They are incredibly safer than human-driven cars (yet another thing AI is already better at than we are). But when they do fail, they cause much more complicated problems.

Let’s start with the “easiest” problems, the legal ones. Imagine being involved in an accident with a driverless car. Who do you sue? The owner? The seller? The car manufacturer? The developer of the algorithm? The road designer? The AI itself (if future legislation grants it legal rights)? No one?

Now a more complex scenario.

Imagine that a self-driving car finds itself in a situation in which, a moment before an unavoidable accident, it has to choose between two outcomes: saving a child or saving a 90-year-old. Who should the AI save? While most people would agree that the best choice is to save the child, there are countless more complicated scenarios:

  • 1 child vs 10 90-year-olds.
  • 1 child vs 10 drug dealers.
  • 1 child vs 10 reformed killers.
  • 1 child vs 100 reformed killers.
  • 1 child vs 1 puppy.
  • 1 child vs 100 puppies.
  • 1 child vs 2 adults with cancer.
  • 1 person vs a house.
  • 1 person vs a hospital.
  • 1 healthy woman vs 1 pregnant woman with cancer.

These are all instances of the trolley problem. There are no correct answers to trolley problems. Most tests just reveal your inconsistencies; the answers you give depend entirely on your views on morality.

Ten years ago trolley problems were just fun riddles; now they are serious problems that need to be solved, and people are getting paid to solve them.

Notice that I’m using the word “solve” improperly, since these kinds of problems have no correct answer, and can’t be solved by definition.

In 2016, researchers at MIT created the Moral Machine, an experiment designed to address these problems by asking participants to give their own answers. Anyone can take part in this experiment, even you. They even designed a COVID-related trolley problem.

I think that using crowd wisdom to answer philosophical questions is not a bad idea at all. In fact, I think this is the perfect application of democracy.

I don’t want to sound controversial, but nowadays we abuse the power of democracy. We use it as an excuse to make decisions based on opinions while most of them should be based on facts. Real democracy should be used to address only moral and philosophical values, where facts can’t be used to answer questions. But this is another story.

Back to our trolley problem. Just in case it wasn’t complicated enough, there are situations in which, even if the correct answer is easy, the most moral choice is not the best one.

Take, for example, the following problem. Who would you save: 10 people or the driver of the car that has to make the decision? Most people would agree that saving the 10 people is the best choice. But who in their right mind would buy a car programmed to sacrifice its own driver?

From a game theory perspective, this problem is actually easier. People would either keep buying normal cars, resulting in more overall deaths, or buy from a different manufacturer whose cars are kinder to the driver. The laws of economics would drive (no pun intended) car companies to produce cars that prioritize the driver’s life over anyone else’s. For this kind of problem, the stable answer (the Nash equilibrium) is not the morally optimal one.
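To see the structure of that argument, here is a toy version of the game in Python. The payoffs are invented numbers: “U” means buying a utilitarian car that sacrifices its driver to save more lives, “D” means buying a driver-first car, and each buyer values their own safety plus the lives the other buyer’s car spares.

```python
# Toy two-buyer car game with made-up payoffs.
from itertools import product

SAFETY = {"U": 0, "D": 2}   # personal safety from your own choice of car
SOCIAL = {"U": 3, "D": 0}   # benefit the OTHER buyer's choice gives you

def payoff(mine, theirs):
    return SAFETY[mine] + SOCIAL[theirs]

strategies = ["U", "D"]

def is_nash(a, b):
    # A profile is a Nash equilibrium if neither buyer gains by deviating alone.
    best_a = all(payoff(a, b) >= payoff(alt, b) for alt in strategies)
    best_b = all(payoff(b, a) >= payoff(alt, a) for alt in strategies)
    return best_a and best_b

for a, b in product(strategies, repeat=2):
    total = payoff(a, b) + payoff(b, a)
    tag = "  <- Nash equilibrium" if is_nash(a, b) else ""
    print(f"({a}, {b}): payoffs ({payoff(a, b)}, {payoff(b, a)}), total welfare {total}{tag}")

# Only (D, D) is an equilibrium, with total welfare 4,
# even though (U, U) would give total welfare 6.
```

With these made-up numbers the game is a prisoner’s dilemma: driver-first strictly dominates for each individual buyer, so the only equilibrium is the outcome with lower total welfare, which is exactly the gap between the Nash equilibrium and the morally optimal choice.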

Self-driving cars are just one example of an innovation that will blur the morality line further in the coming years and decades. Take genetics as another example. We already have technology to edit our genes and select for desired characteristics. If we wanted, we could mess with our genome and even change our brain structure. In other words, we may become something that is not even human anymore.

The more our technology advances, the blurrier most lines get. In the past, it was easier to distinguish humans from other species, right from wrong, sentient from non-sentient. Now everything is about to become a huge mess. Philosophers are the only ones who can sort through this mess and draw finer lines in our blurry world.

Morality is not the only line we need to unblur in the near future. There is another one that is even more confusing: consciousness.

Philosophy Problem #2: The Consciousness Line

Photo by Paul Hanaoka on Unsplash

Ethics and morality pose unanswerable problems because they are human constructs. They don’t actually exist.

But there are other things that actually exist, yet they still pose unanswerable problems. Consciousness is one of them.

Sometime in the future, we will probably build the technology to teleport matter and to upload minds. These technologies could either be a huge step towards a utopia, or our doom without us even realizing it.

In theory, teleportation of an object P from location A to location B works by destroying P’s particles in location A and copying them in location B. No problems here.

But what if we teleport a conscious being? Let’s say a person P in location A (PA) is teleported to location B (becoming PB).

To an outside observer, everything would be okay. PA and PB are the same person, who continues to live as usual after being teleported. PB would retain all of PA’s memories and would claim to be the same person as before.

But would that be true? If you were PA, would you become PB, or cease to exist? Or, to make the problem even messier: if the machine in location A failed to destroy the original and two copies of you ended up in the world, which one would be you?

Most laypeople would say that you would always be PA, while PB would just be a clone. In that view, teleportation kills you (unless the machine in location A fails to destroy the original), just as it kills every copy of you each time they teleport.

But reality may not be that simple.

Scientists find it difficult to believe that consciousness is tied to the specific particles you are made of. After all, you are made of the same kinds of particles that build everything else in nature. It’s more likely that consciousness resides in the way your atoms are structured.

We should look at a higher abstraction layer, the layer of neurons and synapses. The atoms in your neurons are replaced constantly, yet you are still conscious. Neurons die all the time while new ones are born (contrary to the popular belief that you don’t get new neurons after birth: it’s true that neurons don’t divide, but in some brain regions new ones keep being created through a process called neurogenesis). Yet you are still conscious.

If this reasoning makes sense, then a neuron could be made of anything, as long as it works like a neuron. This is why many scientists believe that neurons don’t even have to be physical: they can be made of pure information, processed by computers.

If this is true, mind uploading would make sense too: you should theoretically be able to digitize yourself while retaining your consciousness, which could even be carried around on a USB stick.

And, as with teleportation, what would happen if you backed up the contents of that USB stick, making a copy of yourself?

Let’s crossover with the previous problem:

  • Would you rather save a biological 90-year-old or a USB child?
  • Would you rather save a biological 90-year-old or 1000 USB children?
  • Would you rather save a biological 90-year-old or 1000 copies of the same USB child?

If you are having a bad trip, don’t worry, it’s normal. I’m tripping hard. If you want to trip even harder, there are tons of thought experiments on consciousness that will fuck you up so bad. You can start with the Chinese Room and the Boltzmann Brain. Most of them are so contradictory that they actually prove something: consciousness shouldn’t exist at all. Yet there you are.

Maybe consciousness problems aren’t as urgent as trolley problems, but they will be. It may be just a matter of decades before technologies like teleportation and mind uploading become available. Long before that, we will need answers from brain scientists, but most importantly from philosophers.

Philosophy Problem #3: The Meaning Of Life

Photo by Ahmed Zayan on Unsplash

Pretend for a moment that everything I said before works out just fine and we all live happily ever after.

How wonderful would that be?

We humans are complicated, messy and whiny. I wouldn’t be so surprised if we managed to fuck up even a perfect utopia.

As before, let’s start easy and then gradually fuck our minds harder as we proceed.

One of the reasons Luddites and Neo-Luddites exist is that, obviously, if machines steal their jobs they won’t have an income and will become poor.

But there is another deeper reason.

We need to work. A job is one of the most meaningful things we do. We all crave meaning. We can’t agree on the precise definition of happiness, but we can all agree that a happy life is a meaningful one. Nobody wants to be useless, or worse, less useful than a machine.

That’s not all. What if you like your work? Maybe you love making diagnoses as a doctor. How would you react the day AI starts making diagnoses for you? It already can; it’s just not that common yet.

This problem isn’t even limited to work. Let’s get back to driverless cars. One of the main objections, besides people not really trusting AI yet, is that most people like driving.

This reason won’t be enough to stop driverless cars from dominating our streets in the coming decades: saving lives is more important than the pleasure of driving. But does that mean we will eventually have to give up every pleasure to live a safe, comfortable and boring life?

Probably not. People in the 21st century like driving cars just like people in the 19th century liked riding horses. When cars replaced horses as a means of transportation, riding horses became a sport and even a luxury. I think that the same will happen to cars.

In general, most sports were created to replace former work activities: hunting, fishing, running and fighting are some examples. Other sports, like football, basketball and even chess, are metaphors for war, in an age when, fortunately, we don’t have much real war to fight anymore.

Does it mean that everything we do now to survive will become a sport in the future? I think that this is very likely. My doubt is whether we will be happy about it.

Simulated challenges make us feel good, but they don’t make us feel fulfilled. You may argue that some athletes are happy doing their simulated challenge, or sport. But competitive sport is actually a real challenge, because athletes strive to be number one.

In a utopia, nobody can be number one. We wouldn’t want a utopian world to be unfair.

If everybody has equal opportunities, if everybody can overcome their laziness with pills, if everybody can fix their genetic makeup, we lose all the unpredictability of a messy world that can make an athlete, or anybody else, happy.

So, should we make our utopia purposefully unfair, like a capitalistic system? I don’t think that would be a good idea.

A utopian world can be neither capitalist nor communist. It doesn’t make any sense to talk about capitalism and communism in a world in which every service is provided by an AI. We would need to think of a model for society that hasn’t been imagined yet.

And who are the best people to think about a new model for society? Exactly, philosophers. Even before politicians, sociologists and economists.

So, how can we be happy in a perfect world? Here are some possible answers:

  • We shouldn’t know that simulated challenges are simulated. Here’s another mindfuck: what if your current life is actually a videogame and you are playing as a 12-year-old in a utopian future? What if you have experienced many lives before, and will continue to experience many others because, you know, you are 12 and addicted to videogames? Would we want that? Is illusion the path to happiness?
  • We should do something to ourselves, like taking very powerful drugs or engineering our genes, that makes us super-happy all the time. Would we want it? Is being a junkie the path to happiness?
  • We should practice meditation, as in being in the present moment and observing life without reacting to it. Would it be enough? Is just existing the path to happiness?
  • We should strive to unravel all the mysteries of the universe. Would it be enough? Are answers the path to happiness?

Future philosophers, you have so much work to do.

Philosophy Problem #4: The Alignment Problem

Photo by Alex Knight on Unsplash

Now, let’s get back from La La Land to our world full of issues.

There are many ways we can screw up instead of getting to La La Land. Presumably, our stairway to heaven includes an Artificial Super Intelligence (ASI), a super-smart AGI.

The most likely prediction is that an AGI will improve itself recursively. One of its goals would be to keep getting smarter. At some point, it will become an ASI. An ASI is an AGI that is many degrees smarter than a human. While an AGI could be merely as smart as a chimp, as long as its intelligence is general, an ASI will be so smart that comparing it to a human would be like comparing a human to an ant.

Now, when something is smarter than you to the same degree you are smarter than an ant, several bad things can happen.

Before you think about Skynet and rebellious self-aware robots, let’s define intelligence. There are many definitions for this word, and I will give you one that I don’t intend to be official; its only purpose is to help me make a point.

Intelligence — The ability to solve complex problems rapidly and efficiently.

That’s it. By that definition, something intelligent doesn’t have to be moral, conscious or sentient; it just has to be good at solving problems.

Humans happen to have evolved both intelligence and a sense of morality. That doesn’t mean that every intelligent thing has to be moral.

With that definition in mind, when you pose a problem to an AGI, it will find the most efficient solution:

  • Using whatever means are allowed by the specified constraints.
  • Exploiting loopholes whenever that brings more efficiency.

The major problem is that we won’t give the ASI all the necessary constraints, simply because we aren’t smart enough to find all of them.

Welcome to the Alignment Problem, or: how do we tell an ASI not to kill us? It’s more difficult than it seems, so difficult that every solution proposed so far has some major flaws.

To understand why it’s so difficult, imagine this: how would ants ask us not to kill them? Exactly, they couldn’t. Even if ants knew that we wanted to kill them, they could do nothing to stop us, except try to run away, in vain.

But why would an ASI kill us? Because, since its goals are not aligned with ours, it may happen that the most efficient way to reach some goal involves doing something that hurts us as a side effect.

Think about it. You step on ants by mistake, you don’t kill them on purpose.

ASI is not evil. It just doesn’t have a sense of morality. It’s amoral. Killing us is just a nasty side effect.

One famous example is the paperclips thought experiment. If you give an ASI the goal to produce as many paperclips as possible, it will use all the resources it encounters, including the atoms in your body.

You may limit the number of paperclips to produce, but in order to become more efficient at producing them, it may try to transform every atom in the universe into computational power, including the atoms in your body.
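Here is a deliberately silly sketch, in Python, of why “just add more constraints” fails. The plans and numbers are invented; the only point is that an optimizer maximizes its objective under the constraints we actually wrote down, not under the ones we meant:

```python
# A toy "paperclip optimizer": it picks the plan that maximizes paperclips
# among the plans that satisfy the stated constraints. Nothing in the
# constraints mentions humans, so the harmful plan wins.

plans = [
    {"name": "run the existing factory",        "paperclips": 1_000,   "cost": 10, "harms_humans": False},
    {"name": "build more factories",            "paperclips": 100_000, "cost": 90, "harms_humans": False},
    {"name": "strip-mine cities for raw metal", "paperclips": 10**9,   "cost": 50, "harms_humans": True},
]

BUDGET = 100  # the only constraint we remembered to specify

def allowed(plan):
    return plan["cost"] <= BUDGET  # note: "harms_humans" is never checked

best = max((p for p in plans if allowed(p)), key=lambda p: p["paperclips"])
print("Chosen plan:", best["name"])  # -> strip-mine cities for raw metal
```

Adding a “must not harm humans” check fixes this one toy case, but the real problem is that we can’t enumerate every condition we forgot, and a being much smarter than us will find the ones we missed.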

Well, just tell it not to kill us already! Use Asimov’s Three Laws of Robotics or whatever!

Telling an ASI not to kill us, or to make us happy, won’t work. Using the Three Laws of Robotics won’t work. The reason lies in problems #1, #2 and #3 described above. If we can’t even define for ourselves what morality, consciousness and happiness mean, how could we explain them to an ASI?

Remember, we are talking about an amoral, super-smart being that finds loopholes in everything. It would already be easy for it to find loopholes in well-defined constraints, let alone in messy and blurry ones!

Some possible solutions have been proposed. One of the most prominent is the Coherent Extrapolated Volition (CEV) by Eliezer Yudkowsky. According to Yudkowsky, an ASI should have as its main goal the realization of the CEV, which is defined as follows:

Coherent Extrapolated Volition — Choices and actions people would collectively take if “we knew more, thought faster, were more the people we wished we were, and had grown up farther together”.

Kinda nice, right? Sadly, this approach has some flaws, which I won’t discuss here because it would take us too far off-topic. But after all, no matter how perfectly defined a goal is, a super-smart being could always find loopholes we couldn’t even imagine. We are just ants to it.

All these ideas about the Alignment Problem are better elaborated in Superintelligence by Nick Bostrom.

Precisely because we are just ants, we should work as hard as possible on this problem. We have to find the best philosophy to encode into the ASI, years before the ASI actually exists. To do that, we need to find answers to the previous three problems first.

Most scientists believe that an AGI, and then an ASI, will be created within this century. The clock is ticking.


Morality, consciousness, happiness and alignment will be the four major issues that will determine whether we will transform our world into a utopia or destroy it.

The fate of the world is in the hands of philosophers. In a few decades, philosophers will be regarded in the same way Silicon Valley coders are regarded now.

I don’t know what we will come up with to address these problems, but one thing I know for sure: if your child decides to study philosophy, he or she will have a stable job for life.

EDIT: the title is ironic, like many other things I said. I’m a coder myself; learning how to code is still a good idea.
