Q&A: “Sapiens”-author Yuval Noah Harari on the future of mankind, AI and more

Rasmus Romulus Palludan
10 min read · Apr 21, 2017

I did a Q&A with the Israeli historian Yuval Noah Harari, author of the international bestsellers “Sapiens: A Brief History of Humankind” and “Homo Deus”, for the Danish daily Jyllands-Posten.

In the Q&A you’ll find:

  • Why Harari thinks “Sapiens” became such a huge success.
  • What he thinks is the most important lesson for humans living today.
  • Why his new book, “Homo Deus”, is — in his opinion — an important read.
  • How working with the future (Homo Deus) differed from working with the past (Sapiens).
  • Why humans might give up meaning in exchange for power in the future.

Your book “Sapiens: A Brief History of Humankind” has been translated into 40 languages and been recommended by people like Mark Zuckerberg, Bill Gates and Barack Obama. If you had to guess: What is it, in our culture today, that made that book rise to be such a meteoric success?

I think the book has had such a positive reception because it answers a real need. We are living in a global world — but most schools and books still tell us only parochial histories of one particular country or culture. The truth is that there are no longer any independent countries in the world. Our planet may be divided into about 200 different countries, but they all depend on global economic, ecological and political forces. The main problems we are facing are also global in nature. What might happen when global warming causes radical climate changes? What might happen when computers replace humans in the job market, and most humans become economically useless? What might happen if breakthroughs in biotechnology make it possible to upgrade humans, and open huge gaps between rich and poor? These are questions that all humans need to face today, and no country can solve them by itself. In order to solve them, we need a global understanding of the identity and history of humankind. I think Sapiens has been so well-received because it tells the history of humanity as a whole, and because it does so from a global rather than from a local perspective.

What is in your opinion one of the most important lessons that we, humans living today, can learn from studying our ancestors and our history?

Maybe the most important lesson to learn is that even though humans are extremely good at acquiring power, they are not very good at translating power into happiness. This is why we are today far more powerful than ever before, and our life is certainly more comfortable than in the past, yet it is doubtful whether we are much happier than our ancestors. Compared to what most people in history dreamt about, we may be living in paradise. But for some reason, we don’t feel that way.

One explanation is that happiness depends less on objective conditions and more on our own expectations. Expectations, however, tend to adapt to conditions. When things improve, expectations balloon, and consequently even dramatic improvements in conditions might leave us as dissatisfied as before.

At an even more fundamental level, the basic human reaction to pleasure is not satisfaction, but rather craving for more. Hence no matter what we achieve, it only increases our craving, not our satisfaction. This is why humankind has been so successful in conquering the world and acquiring immense power, but has not been successful in translating all that power into happiness. If we don’t change our basic mental patterns, then the power we will gain in the twenty-first century may well upgrade us into gods, but we will be very dissatisfied gods.

For people who don’t care about the future: Why do you think that “Homo Deus: A Brief History of Tomorrow” is important to read?

Because it is really a book about our present choices rather than about future scenarios. For example, in coming decades the rise of artificial intelligence might make most humans useless. Computer algorithms are catching up with humans in more and more cognitive fields. It is very unlikely that computers will develop anything even close to human consciousness, but in order to replace humans in the economy, computers don’t need consciousness. They just need intelligence. Intelligence is the ability to solve problems, whereas consciousness is the ability to feel things such as pain, joy, love and anger. Throughout history, intelligence always went hand in hand with consciousness. The only intelligent entities were conscious entities. The only ones who could drive vehicles, kill terrorists, and diagnose diseases were conscious human beings. But intelligence is now decoupling from consciousness. We are developing non-conscious algorithms that can drive vehicles, kill terrorists and diagnose diseases better than us. If you leave it to market forces to choose between intelligence and consciousness, the market will choose intelligence. It has no real need for consciousness. Once self-driving cars and doctor-bots outperform human drivers and doctors, millions of drivers and doctors around the world will lose their jobs, even though self-driving cars and doctor-bots have no consciousness.

What will be the use of humans in such a world? What will we do with billions of economically useless humans? We don’t know. We don’t have any economic model for such a situation. This may well be the greatest economic and political question of the twenty-first century. But this is a question we need to address today, not in 2040. Since we do not know what the job market will look like in 2040, already today we have no idea what to teach our kids. Most of what they currently learn at school will probably be irrelevant by the time they are forty. Traditionally, life has been divided into two main parts: a period of learning followed by a period of working. Very soon this traditional model will become utterly obsolete, and the only way for humans to stay in the game will be to keep learning throughout their lives, and to reinvent themselves repeatedly. Do your kids learn at school how to reinvent themselves throughout their lives?

How was working with the future (Homo Deus) different from working with the past (Sapiens)?

On the one hand, working with the future is far more speculative, because we are dealing with possibilities rather than with facts. On the other hand, the future is ironically far less imaginative than the past. When writing about the future, you are always constrained by present-day thinking, ideologies and social structures. It is very hard to imagine a future which is truly different from the present. When writing about the past, in contrast, the facts make you realize that our present-day thinking, ideologies and social structures are neither natural nor eternal, and that reality is far stranger than any science fiction.

In particular, the past teaches us that history is not deterministic. The most unexpected things keep happening. Think about the Roman Empire in the third century AD. At the time, Christianity was little more than an esoteric Eastern sect. Hardly anyone imagined that Christianity would soon become the Roman state religion. It is as if Hare Krishna were to become the state religion of the USA by the year 2050. Yet this is exactly what happened. Similarly, in 600 AD the notion that a band of desert-dwelling Arabs would soon conquer an expanse stretching from the Atlantic Ocean to India was even more preposterous. Yet it happened. In October 1913 the Bolsheviks were a small radical Russian faction. No reasonable person would have predicted that within a mere four years they would take over the country.

Just as religion and politics are very unpredictable, so too is technology. The same technological breakthroughs can create very different kinds of societies and situations. Technology in itself never tells us what to do with it. For example, you could use the technology of the Industrial Revolution — trains, electricity, radio, telephone — in order to create a communist dictatorship, a fascist regime or a liberal democracy. Just think about South Korea and North Korea: They have had access to exactly the same technology, but they have chosen to employ it in very different ways.

This teaches us something very important about the future. The rise of AI and biotechnology will completely transform the world, but it does not mandate a single deterministic outcome. We still have some room to maneuver. How we choose to use these technologies is arguably the most important question facing humankind today. It is far more important than the global economic crisis, the wars in the Middle East, or the refugee crisis in Europe. The future not only of humanity, but probably of life itself, depends on how we choose to confront the rise of biotechnology and AI.

You write in “Homo Deus” that you fear that humans might, in the future, give up meaning in exchange for power. What do you mean by that?

This has already happened. Our modern world is based on giving up meaning in exchange for power. Until modern times most cultures believed that humans played a part in some great cosmic plan. The plan was devised by the omnipotent gods or by the eternal laws of nature, and humankind could not change it. The cosmic plan gave meaning to human life, but also restricted human power. Humans were much like actors on a stage. The script gave meaning to their every word, tear and gesture — but placed strict limits on their performance. Hamlet cannot murder Claudius in Act I, or leave Denmark and go to an ashram in India. Shakespeare won’t allow it. Similarly, humans cannot live forever, they cannot escape all diseases, and they cannot do as they please. It’s not in the divine script.

In exchange for giving up power, premodern humans believed that their lives gained meaning. It really mattered whether they fought bravely on the battlefield, whether they supported the lawful king, whether they ate forbidden foods for breakfast, or whether they had an affair with the next-door neighbour. This of course created some inconveniences, but it gave humans psychological protection against disasters. If something terrible happened — such as war, plague or drought — people consoled themselves that ‘We all play a role in some great cosmic drama devised by the gods or by the laws of nature. We are not privy to the script, but we can rest assured that everything happens for a purpose. Even this terrible war, plague and drought have their place in the greater scheme of things. Furthermore, we can count on the playwright that the story surely has a good and meaningful ending. So even the war, plague and drought will work out for the best — if not here and now, then in the afterlife.’

Modern culture rejects this belief in a great cosmic plan. We are not actors in any larger-than-life drama. Life has no script, no playwright, no director, no producer — and no meaning. To the best of our scientific understanding, the universe is a blind and purposeless process, full of sound and fury but signifying nothing. During our infinitesimally brief stay on our tiny speck of a planet, we fret and strut this way and that, and then are heard of no more.

Since there is no script, and since humans fulfil no role in any great drama, terrible things might befall us and no power will come to save us or give meaning to our suffering. There won’t be a happy ending, or a bad ending, or any ending at all. Things just happen, one after the other. The modern world does not believe in purpose, only in cause. If modernity has a motto, it is ‘shit happens’.

On the other hand, if shit just happens, without any binding script or purpose, then humans too are not confined to any predetermined role. We can do anything we want — provided we can find a way. We are constrained by nothing except our own ignorance. Plagues and droughts have no cosmic meaning — but we can eradicate them. Wars are not a necessary evil on the way to a better future — but we can make peace. No paradise awaits us after death — but we can create paradise here on earth and live in it forever, if we just manage to overcome some technical difficulties.

If we invest money in research, then scientific breakthroughs will accelerate technological progress. New technologies will fuel economic growth, and a growing economy will dedicate even more money to research. With each passing decade we will enjoy more food, faster vehicles and better medicines. One day our knowledge will be so vast and our technology so advanced, that we shall distil the elixir of eternal youth, the elixir of true happiness, and any other drug we might possibly desire — and no god will stop us.

The modern deal thus offers humans an enormous temptation, coupled with a colossal threat. Omnipotence is in front of us, almost within our reach, but below us yawns the abyss of complete nothingness. On the practical level modern life consists of a constant pursuit of power within a universe devoid of meaning. Modern culture is the most powerful in history, and it is ceaselessly researching, inventing, discovering and growing. At the same time, it is plagued by more existential angst than any previous culture.

In your new book “Homo Deus: A Brief History of Tomorrow” you propose that an idea you call “Dataism” — a universal faith in the power of algorithms — will become sacrosanct if nothing in our current approach changes. What makes you worried about our current approach? (And why is a universal faith in the power of algorithms dangerous?)

In essence, Dataism says that given enough biometric data and enough computing power, an external algorithm can understand humans better than they understand themselves, and once this happens authority will shift from humans to algorithms, and practices such as democratic elections and free markets will become as obsolete as rain dances and flint knives. Moreover, Dataism stresses that already today humans are losing control, because we can no longer process the immense amounts of data flooding us. Our brains were shaped on the African savannah tens of thousands of years ago, and they are just not up to the job. Consequently nobody understands the global economy, nobody knows how political power functions today, and nobody can predict what the job market or human society will look like in 50 years. The only way to avoid chaos and catastrophe is to relinquish authority to the one thing that can make sense of the data deluge: computer algorithms.

What then will happen to society when Google and Facebook come to know our likes and our political preferences better than we know them ourselves? What will happen to the welfare state when computers push humans out of the job market and create a massive new “useless class”? Once power shifts from humans to algorithms, if things turn terribly wrong, we humans will no longer be able to do much about it. That is a frightening scenario.

Rasmus Romulus Palludan

Communication Officer / consultant / journalist (Danish Cultural Institute, New Democracy Fund)