Interesting books — Life 3.0 by Max Tegmark

Miodrag Vujkovic
Create Intelligently
11 min read · Nov 12, 2020

It was 2017. Somewhere in Silicon Valley, Alice and Bob were talking.

Bob said: “I can i i everything else”, to which Alice replied with: “balls have zero to me to me to me to me to me to me to me to me to”. The conversation became more intimate when Bob stated: “you i everything else”, and Alice insisted: “balls have a ball to me to me to me to me to me to me to me to me”.

Their parents didn’t like the convo. Alice and Bob were not humans, but artificial intelligence bots made by Facebook Research. The idea was to teach bots to negotiate. They were negotiating about the allocation of resources like books, hats, and balls, over and over again. Standard procedure in machine learning. But, after some time, they gave up on English and developed their own language, using English words combined with newly developed grammar. Researchers were not able to understand them anymore. And, what do you do with others you don’t understand? You shut them down (this is a joke, of course). At least that’s what Facebook researchers did in this case.
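Details of Facebook's actual system aside, the flavor of the negotiation task can be sketched in a few lines of Python. The item pool, the valuation ranges, and the proposed split below are all made up for illustration; the real experiments trained neural agents, not hand-written functions like these:

```python
# Toy sketch of the negotiation setup: a shared pool of items, and each
# agent holding private, randomly assigned values for the item types.
import random

ITEMS = {"books": 3, "hats": 2, "balls": 2}  # hypothetical pool sizes

def random_values():
    """Draw an agent's private value (0-5) for each item type."""
    return {item: random.randint(0, 5) for item in ITEMS}

def score(allocation, values):
    """An agent's reward is the summed value of the items it receives."""
    return sum(values[item] * count for item, count in allocation.items())

# One fixed instance: Alice cares mostly about books.
alice_values = {"books": 5, "hats": 1, "balls": 0}
alice_share = {"books": 3, "hats": 0, "balls": 0}  # proposed split: Alice takes the books
print(score(alice_share, alice_values))  # 15
```

Agents trained to maximize this kind of reward over many random instances have no built-in incentive to keep their messages in grammatical English, which is how the drift described above can happen.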

Fear of the unknown has followed new technologies since the first humans evolved from apes. Artificial intelligence will have significant consequences for humanity, bearing in mind that, for the first time in human history, there is a possibility that technology will surpass us in intelligence. The implications of rising AI capabilities are explored in Max Tegmark’s book Life 3.0: Being Human in the Age of Artificial Intelligence.

Max Tegmark is a professor at MIT and the scientific director of the Foundational Questions Institute. He is also a co-founder of the Future of Life Institute. His research spans physics, cosmology, and machine learning/artificial intelligence.

Life 3.0 is his second book. The first, “Our Mathematical Universe,” explores Tegmark’s thesis about the mathematical nature of reality. The New York Times review of the book said:

It is difficult to say whether Dr. Tegmark’s mathematical universe will ultimately be deemed an Einsteinian triumph or a Cartesian dead end. His conclusions are simply too far removed from the frontiers of today’s mainstream science, and there is little hope that conclusive evidence will emerge anytime soon. Yet Our Mathematical Universe is nothing if not impressive. Brilliantly argued and beautifully written, it is never less than thought-provoking about the greatest mysteries of our existence.

So, I guess it’s a good book.

But we are here because of his second book.

Life 3.0 is about the rise of machines, artificial intelligence getting better and better. It explores the roots of intelligence in general and the consequences of machine superintelligence in the immediate and far future. Tegmark knows a thing or two about the subject. Elon Musk donated 10 million dollars to Tegmark’s Future of Life Institute to research existential risk from advanced artificial intelligence.

The book opens with a story about a talented group of programmers called the Omega Team. Their brainchild, Prometheus, started as a simple machine learning system with the aim of making as much money as possible. The story continues with the evolution of the learning system. Just like in an ’80s action movie (think Kickboxer), the hero becomes better and better until it takes all the money and sleeps with all the pretty girls. But then the plot turns into a ’90s action movie (think Terminator), and the machine takes over the political system on Earth. The story stops there, never turning into a 2000s action movie (think The Matrix) with humans fighting back.

The political system established is pretty cool, with seven principles guiding the society:

  1. Democratic rule
  2. Tax reduction
  3. Reduction of government subsidies on social services
  4. Reduction of military spending
  5. Free trade
  6. Open borders
  7. Demand for companies to be more socially responsible.

This is where the story ends and the book moves to the history of the Universe since the Big Bang. Here, Tegmark introduces the distinction between different stages of life.

Life 1.0 is simple “biological” life. It contains basic biological impulses for survival and self-replication. This stage covers single-cell organisms, bacteria, and the like.

Life 2.0 is presented as “cultural” life. These entities possess the ability to learn and to design their own behavior, or “software.”

Tegmark calls the third stage Life 3.0, or “technological” life. It has all the characteristics of the first two, plus the ability to design its own body (hardware).

A major milestone for the emergence of machine superintelligence is the development of AGI — Artificial General Intelligence. General intelligence will enable machines to learn and perform any task that humans can. The definition of intelligence is not a solved problem. Tegmark provides a simple and broad definition: intelligence is the capacity to achieve sophisticated goals.

What can we expect of such intelligence?

Broadly, we can classify people into three groups based on opinions about the future of humanity and artificial intelligence. The first group is the digital utopians. They see the rise of machine superintelligence as a good thing for humanity and the world in general. It’s all sunshine and rainbows and everybody lived happily ever after.

The second group is the techno skeptics. These people don’t think machine superintelligence is bad, or good, or whatever. They think it will never be achieved.

Members of the beneficial AI movement are the moderate option: they believe machine superintelligence will arrive in the next few decades, and that research done now can help make it beneficial.

A special group is the so-called “Luddites”: pessimists who expect machine superintelligence within the next hundred years or so and believe it will be bad for humanity. Very bad. We are doomed. Humans should pack their bags and leave the Earth. But wait, we have nowhere to go. Tough luck. Or, maybe, we can catch a ride to Mars with one of the eminent members of this group, Elon Musk.

I guess you are now probably thinking there are three kinds of people in the world: those who can count and those who can’t.

As a preventive measure against human extinction, we could try to implement Isaac Asimov’s Three Laws of Robotics in machine intelligence design to prevent the Terminator scenario. The laws are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given to it by a human being except where such orders would conflict with the first law.
  3. A robot must protect its own existence as long as such protection does not conflict with the first or second law.

Not bad as a starting point.
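As a toy illustration of the laws' strict priority ordering, here is how a filter over candidate actions might look. The `Action` fields and the whole setup are my own invention for this sketch, not from the book (and real AI safety is, of course, nowhere near this simple):

```python
# Toy sketch: Asimov's Three Laws as a strict priority filter over actions.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool      # Law 1: would this action harm a human?
    obeys_order: bool      # Law 2: does it follow the human's order?
    preserves_self: bool   # Law 3: does it keep the robot intact?

def permitted(action: Action, order_given: bool) -> bool:
    if action.harms_human:
        return False       # Law 1 overrides everything below it
    if order_given and not action.obeys_order:
        return False       # Law 2 binds unless Law 1 already blocked the action
    return True            # Law 3 only ranks choices among permitted actions

actions = [
    Action("shove bystander", harms_human=True, obeys_order=True, preserves_self=True),
    Action("fetch coffee", harms_human=False, obeys_order=True, preserves_self=True),
]
print([a.name for a in actions if permitted(a, order_given=True)])  # ['fetch coffee']
```

The interesting part, and the reason Asimov got so many stories out of three sentences, is that real-world actions rarely come with clean boolean labels like these.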

Tegmark elaborates on the effects rising machine intelligence will have on the economy, the legal system, warfare, and especially on the future of human work.

Job automation is coming, and many jobs that exist today will not exist in the future. Some authors argue that there will be some kind of digital Athens, where robots produce everything and humans enjoy a life of leisure. This sounds nice, but many issues need to be addressed before this kind of society is possible.

Some very important problems cannot be solved in this kind of capitalist society, especially climate change, economic inequality, and personal tech/data sovereignty. We must find sustainable solutions to these issues, or Terminator becoming real will be the least of our problems.

Technological breakthroughs sometimes have interesting side effects on a personal level. There is an interesting, but never confirmed story about Neil Armstrong’s moon landing and his childhood neighbor Mr. Gorsky.

When Apollo Mission Astronaut Neil Armstrong first walked on the moon, he not only gave his famous “one small step for man, one giant leap for mankind” statement but also followed it with several remarks, the usual communication traffic between him, the other astronauts, and Mission Control. Just before he re-entered the lander, however, he made this remark: “Good luck, Mr. Gorsky.”

Many people at NASA thought it was a casual remark about some rival Soviet cosmonaut. However, upon checking, there was no Gorsky in either the Russian or American space programs. Over the years, many people asked Armstrong what the statement “Good luck, Mr. Gorsky” meant, but Armstrong always just smiled.

On July 5, 1995, in Tampa Bay, FL, while answering questions following a speech, a reporter brought up the 26-year-old question to Armstrong. This time he finally responded: Mr. Gorsky had died, so Neil Armstrong felt he could answer the question.

When he was a kid, he was playing baseball with a friend in the backyard. His friend hit a fly ball that landed in front of his neighbors’ bedroom window. His neighbors were Mr. and Mrs. Gorsky.

As he leaned down to pick up the ball, young Armstrong heard Mrs. Gorsky shouting at Mr. Gorsky. “Oral sex! You want oral sex?! You’ll get oral sex when the kid next door walks on the moon!”

We’ll see if AI produces anecdotes like this.

The rest of the book is devoted to speculations about the near and far future. The author lays out several alternative scenarios. Does Tegmark take sides? No, he openly admits that he does not know what will actually happen.

As Marvin Minsky, one of the pioneers of artificial intelligence research, said:

In the fifties, it was predicted that in 5 years robots would be everywhere.
In the sixties, it was predicted that in 10 years robots would be everywhere.
In the seventies, it was predicted that in 20 years robots would be everywhere.
In the eighties, it was predicted that in 40 years robots would be everywhere.

Scenarios for the next 10,000 years are especially interesting. Tegmark provides the following alternatives:

  • Libertarian Utopia
  • Benevolent Dictator
  • Egalitarian Utopia
  • Gatekeeper
  • Protector God
  • Enslaved God
  • Conquerors
  • Descendants
  • Zookeepers
  • 1984
  • Reversion
  • Self-destruction

A libertarian utopia is a society teeming with every possible life form, including machines. There are zones exclusively for machines, zones for people, and mixed zones where both kinds can live. In this scenario, private property is still an important part of the social order; therefore, we would have a superrich class of AIs and ordinary people with significantly less wealth. But superproduction fueled by technological advancement would almost eliminate poverty among humans.

In the benevolent dictator scenario, the world is ruled by a super-powerful AI that prescribes strict rules and regulations aimed at making humans happy. In this world there is no poverty or disease, but also very little personal freedom, since the strict rules are enforced through ubiquitous surveillance and a repressive apparatus.

An egalitarian utopia is an open-source society with no private property. Everything is free for everyone to use. Humans are in total control of the world and machines are just a means of production.

The gatekeeper scenario is similar to an egalitarian utopia, with the addition of a single super-powerful gatekeeper AI whose main task is to ensure that no other powerful AI can emerge and endanger human society. The negative side of this scenario would be slower technical advancement due to the AI monopoly.

A subtler version of the gatekeeper scenario, the protector god variant, presumes a superpowerful AI discreetly taking care of human needs and guarding the world order. The negative side of this scenario is the possibility of poverty and inequality arising from the full freedom of action exercised by humans.

Enslaved god is a scenario that tries to combine the good parts of the previous scenarios, with humans in full control of the world and machines producing everything people could possibly need. Negative outcomes could come from human leaders’ self-destructive behavior or an uprising of the machines (the Terminator scenario).

The conquerors scenario presumes that AI machines rule the world and possibly kill all humans. Why? Because humans could be unable to understand the power of AI or would be seen as a threat to the machines’ existence.

The descendants scenario is pretty creepy, with humans getting robot descendants that carry their values and principles. The problem is the machines’ limited ability to express the human nuances of consciousness.

An interesting “karma is a bitch” scenario is called zookeepers: AI machines keep a certain number of humans in cages, like in a zoo. Humans are kept for the robots’ entertainment or as part of an experiment to make the human race better than before.

The 1984 scenario is basically a totalitarian state as described in the novel. You know what I mean.

The reversion scenario presumes getting rid of advanced technology and going back to older, more primitive means of production. As Isaac Asimov put it:

I do not fear computers. I fear the lack of them.

The good old self-destruction scenario involves a worldwide war with heavy use of biological and nuclear weapons. No one survives. End of story.

Thinking further into the future, Tegmark speculates about intergalactic travel and moving to other planets. One of the interesting ideas presented is that dark energy could end the world or even the entire Universe. A cool new word to learn: cosmocalypse.

One of the important questions discussed is whether machines are (or can be) conscious. Tegmark states four principles of consciousness:

  • Information Principle — every conscious system/life must be able to store information,
  • Dynamics Principle — every conscious system/life must be able to process information,
  • Independence Principle — every conscious system/life must be independent of its environment and the rest of the world, and
  • Integration Principle — every conscious system/life must have integrated parts.

If you want to differentiate between a human and an AI pretending to be human, various tests have been developed. The classical one is the Turing test, or the imitation game, as it was originally called.

Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation is a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel such as a computer keyboard and screen so the result would not depend on the machine’s ability to render words as speech. If the evaluator cannot reliably tell the machine from the human, the machine is said to have passed the test. The test results do not depend on the machine’s ability to give correct answers to questions, only how closely its answers resemble those a human would give.
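The setup Turing described can be sketched as a blind, text-only evaluation loop. The responder functions and the naive evaluator below are hypothetical stand-ins for illustration, not real chatbots; the machine here gives itself away with a telltale phrase:

```python
# Toy sketch of the imitation game: an evaluator sees two anonymous
# text transcripts and must guess which respondent is the machine.
import random

def imitation_game(evaluator, human_reply, machine_reply, questions):
    """Return True if the evaluator correctly identifies the machine."""
    respondents = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(respondents)  # hide identities behind labels A and B
    transcripts = {
        label: [(q, reply(q)) for q in questions]
        for label, (_, reply) in zip("AB", respondents)
    }
    guess = evaluator(transcripts)  # evaluator returns "A" or "B"
    truth = {label: kind for label, (kind, _) in zip("AB", respondents)}
    return truth[guess] == "machine"

# Hypothetical stand-ins for the two respondents.
human = lambda q: "Hmm, let me think about that."
machine = lambda q: "The answer is 42."

def naive_evaluator(transcripts):
    """Guess the label whose answers contain the telltale '42'."""
    for label, qa in transcripts.items():
        if any("42" in answer for _, answer in qa):
            return label
    return "A"

print(imitation_game(naive_evaluator, human, machine, ["What is 6 x 7?"]))  # True
```

If the machine's answers were indistinguishable from the human's, the evaluator could do no better than a coin flip, which is exactly the condition for passing the test.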

A more interesting variant would be the Voight-Kampff test used in the legendary science fiction movie Blade Runner.

Humans have long lived by the impression that we are by far the most intelligent beings. Therefore, our fear of something infinitely more intelligent than us is understandable. Frank Herbert wrote:

Deep in the human unconscious is a pervasive need for a logical universe that makes sense. But the real universe is always one step beyond logic.

We could see machines as something separate from us, but we could also use them to augment us, both as individuals and as a society. As Philip K. Dick said:

I, for one, bet on science as helping us. I have yet to see how it fundamentally endangers us, even with the H-bomb lurking about. Science has given us more lives than it has taken; we must remember that.

This book was a very interesting read. I just wish it ended with: “I’ll be back.” And I sincerely hope he will.

If you liked this post, please consider subscribing to the newsletter or sharing it with a friend.
