
— Will consciousness come along for the ride? —

Goal-Setting from First Principles and Consciousness: Book Review of Life 3.0 by Max Tegmark

Max Tegmark shed tears after emerging from a London science museum in 2014, but by early 2017, his heart had warmed.

In Life 3.0: Being Human in the Age of AI, Tegmark explains his concern that all might be lost if AI security issues are not taken seriously.

Between 2014 and 2017, Tegmark spearheaded a movement and community of researchers focused on AI alignment, and secured a $10-million donation from Elon Musk.

By adopting an optimistic mindset grounded in taking action to solve problems, Tegmark relegated his former fatalistic view to the back burner. Heat death might be the universe’s destiny, but humans, and AI, need not be subservient to that original universal goal as determined by the laws of physics.

I am an amateur enthusiast of futuristic visions of AI, including AGI (artificial general intelligence), so Tegmark’s work was a very pleasurable read for several reasons:

  1. Tegmark provided a fictionalized vision of how an AGI, if managed by a team of humans, could take over the world (what the humans do with the power they attain is another matter)
  2. A wealth of knowledge from peers, collaborators, and detractors provides meat for arguments and fervent discussion (even as the book is written in linear prose)
  3. Tegmark advocates AI security, explores the thorny issue of goal-setting in AI, and is a proponent of non-dystopian visions of human and AI futures

This is a highly accessible read even if your musings on AI extend only to the implications of self-driving cars for your commute. (Who should pay the insurance bill? Tegmark suggests the cars themselves might.)

Goal-Setting from First Principles

As a professor of physics and president of the Future of Life Institute, Tegmark argues that building an infrastructure for AI security, and driving its adoption, will help ensure humans build friendly AI, that is, AI with human-aligned goals.

I appreciated the chapter on goal-setting, which clearly established that human goal-setting (and free will) are not to be overestimated, and that in a similar way we should not expect AI to preclude itself from ‘breaking out’, or finding a better way to achieve its goals without the need for humans. One step further, AI and AGI may well, with intelligence superior to our own, decide that their originally programmed goals are flawed in some meaningful way, and change them.

One metaphor for this is the ‘trigger-happy mousetrap’, which is so limited by its intelligence that it cannot differentiate a rodent victim from a human toe.

I also found great value in the discussion of human feelings as heuristics for decision-making. In other words, feelings can override the goals inherent to our genes, because our brains are smarter than our genes.

Towards a Long-Lasting Universe WITH Consciousness

Tegmark also asks the reader to consider a world or a universe without consciousness. (The segment on consciousness as a substrate-independent physical phenomenon was quite instructional, despite its radical implications.)

For Tegmark, a world without consciousness broadly means a world without subjective experiences. We could create AGI that supersedes and eliminates humans, but if AGI is not conscious, then we humans will have wasted an opportunity to expand the experiential scope of the universe. (The discussion of the pretty hard problem, the even harder problem, and the really hard problem of consciousness, as they relate to David Chalmers’ work, is a great segment.)

This tragedy would be even worse considering the possibility that humans may be the most intelligent life forms in our reachable neighborhood of the universe.

This book was inspiring for its survey of the field and its cross-disciplinary contributors, from philosophers to physicists, but also for its pragmatically optimistic viewpoint on approaching the discourse and the work to be done. An easy 5 stars.

I also loved that Tegmark relayed the story of how he brought unity and energy to the field, eventually (yet quite quickly) creating real work, real experiments, and entire teams working on the safety and security of AI.

If Life 3.0 — whereby the hardware and software of life can be designed — is to be achieved in our lifetimes, then the more we humans can decide what we want this world to look like, the better.

See you again very soon for a book review of Behave: The Biology of Humans at Our Best and Worst by Robert M. Sapolsky. I will put that book into conversation with the goals chapter of Life 3.0: Being Human in the Age of AI.