Max Tegmark’s “Life 3.0” and Social Governance around AI

dr. anastassia lauterbach
9 min read · Oct 16, 2017


In this essay I discuss Max Tegmark’s recent book “Life 3.0” and introduce the concept of Social Governance as a way to actively address AI design and safety.

Artificial Intelligence will probably be the most important force for change in the economy, culture, politics and even human health since the Internet. Deciding how to build it in the safest and most beneficial way may be the most important choice humankind makes in the coming decades. Unfortunately, there is a lack of transparency, knowledge and collaborative tools for joining the discussion around AI at all levels of society, from traditional companies to municipalities, from schools to boardrooms. This has to change. Tegmark rightly emphasizes the unforeseen consequences of developing Artificial Super Intelligence. The biggest peril of AI is that machines become intelligent without sharing our goals.

Introduction

Some time ago I started wondering how big societal questions impact leadership and governance in companies and society. As an example, consider how corporate America, its municipalities, educational institutions and non-profits stepped up after Trump pulled his country out of the Paris Climate Agreement. Businesses, city mayors and their administrations declared their commitment to comply with the goals of the Agreement, no matter what happened in the White House. Exchanging ideas and sharing best practices became their priority. People within these groups exhibited leadership because they believed it was the right thing to do. But these actions did not arise out of nowhere. For decades, environmental risks have been studied, translated into technologies and business frameworks, transformed into policies, and explained and communicated to the public. Open discussions have been carried out on whether humanity has already missed the point of no return.

This example gives me optimism for more progress, and not only in the field of climate change. There is another elephant in the room: I am talking about Artificial Intelligence.

The United Nations has finally created a unit dedicated to AI in the Netherlands, ultimately acknowledging that we should treat it as a discipline like economics or the environment rather than a bundle of loosely connected topics. Alphabet’s DeepMind has just opened its ethics board to outsiders from research and industry, aiming to increase transparency over its technologies and their implementation. These two events are encouraging, but not nearly enough to develop an informed, scientifically grounded and forward-looking view on what we expect from AI.

Max Tegmark is a professor of physics at MIT and president and co-founder of the Future of Life Institute. His research lies at the interface between AI, physics and neuroscience. “Life 3.0,” his recently published book, is a treasure trove of information on the nature of intelligence, offering a rather unusual but very powerful non-biological definition of life. Tegmark calls for an active, forward-looking approach to AI, because he fundamentally believes that the future is not predestined: it is full of wonderful possibilities, and humans can actively create it.

Life and the Emergence of AI

Most definitions of life involve biological features, e.g. having cells. For a physicist like Tegmark, there are no secrets in cells or carbon atoms. Life is all about information processing and the capability to reproduce without losing complexity. Biological life meets this definition, but there is no reason why future advanced AI systems shouldn’t qualify as well. Tegmark talks about three stages of life:

· Life 1.0 is unable to redesign either its hardware or its software during its lifetime. Both are determined by DNA and change only through evolution over multiple generations. Bacteria and primitive organisms are examples of this “Life” form. Successful as they might be, these organisms do not learn anything during their lifetimes.

· Life 2.0 can redesign much of its software: humans can learn complex new skills such as languages and sports, thereby fundamentally updating their worldviews and goals. People like the readers of this essay are examples of this “Life” form. Humans can install new software, e.g. going to law school to become a lawyer or learning a foreign language. Though we do design pacemakers, we can’t substantially upgrade our memory by adding new hardware to our brain.

· Life 3.0, which does not yet exist on Earth, can dramatically redesign not only its software but its hardware as well, rather than having to wait for it to gradually evolve over generations. This “Life” could arrive during the 21st century. The first signs of Life 3.0 can be seen in narrow AI applications; the greater upgrade will happen, however, once AGI takes concrete shape.

The analogy of computer hardware and software applies to biological organisms. According to Tegmark, a bundle of stuff, or blob (i.e. hardware), may or may not be intelligent. What makes stuff intelligent is a pattern. The software of life is in charge of organizing blobs into patterns.
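To make the hardware/software analogy concrete, here is a toy sketch in Python. It is my own illustration, not Tegmark’s: the class names and attributes are hypothetical, and the point is only that each stage adds one more thing the organism can rewrite about itself.

```python
# Toy model of Tegmark's three stages of life (my own illustration).
# "Hardware" = the body or substrate; "software" = learned skills and goals.

class Life1:
    """Life 1.0: hardware and software both fixed at 'birth' (e.g. bacteria)."""
    def __init__(self):
        self.hardware = "cells"                # changes only via evolution
        self.software = "hardwired reflexes"   # no learning during a lifetime

class Life2(Life1):
    """Life 2.0: can redesign its software during its lifetime (e.g. humans)."""
    def learn(self, skill: str) -> None:
        self.software = skill                  # e.g. going to law school

class Life3(Life2):
    """Life 3.0: can redesign its hardware as well (does not yet exist)."""
    def upgrade(self, substrate: str) -> None:
        self.hardware = substrate              # the step biology cannot take

human = Life2()
human.learn("speaking Mandarin")               # software update: allowed
# human.hardware stays "cells": we can add pacemakers, not extra memory.

asi = Life3()
asi.learn("proving theorems")
asi.upgrade("optical processors")              # hardware update: the 3.0 leap
```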

Tegmark conjectures that minds are independent of their substrate. This resonates with Ray Kurzweil, who hopes to upload his mind into a machine one day. It also echoes “The Age of Em,” Robin Hanson’s trans-humanist view of our future.

Multiple companies in Natural Language Processing and Computer Vision demonstrate that we are already in the presence of multiple varieties of intelligence surpassing human capabilities in doing mathematics, translating languages, or seeing in the dark. However, just as an Olympic champion in gymnastics cannot take a medal in a ski race, we do not have machines that can “do it all”: play better than us, drive better than us, and compute equations better than us. Still, Tegmark believes that as soon as AGI arrives, we will quickly move on to Artificial Super Intelligence, where machines make machines. Before this happens, however, humans need to figure out the goal- and value-alignment dilemma.

The biggest peril of AI is that machines become intelligent without sharing our goals. As a result, investing money and talent into AI safety research has to become our top priority.

ASI Scenarios

Like Ray Kurzweil, Tegmark believes in exponential technological growth, which will eventually lead to AGI and ASI.

I personally think we require scaled quantum computing and a very different approach to building software to achieve AGI. Imitating the human brain might not be the best way. According to Yann LeCun, we currently see only five percent of what an AI could potentially do. Yet this does not stop me from firmly believing that planning for more AI is necessary. As Tegmark himself remarks in his podcast for the Future of Life Institute, people buy home insurance despite the very low probability that something bad will happen to damage or destroy their property. Why should we not exercise similar caution, given there is at least a small chance AGI and ASI might get built one day? We would need a lot of time to prepare for this event if we want to have a say in what our future may look like.
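The insurance analogy is, at heart, an expected-value argument. Here is a minimal back-of-the-envelope sketch in Python; every probability and cost below is a hypothetical placeholder of my own, not an estimate from the book.

```python
# Toy expected-value reasoning behind the home-insurance analogy.
# All numbers are hypothetical placeholders, chosen only for illustration.

def expected_loss(probability: float, damage: float) -> float:
    """Expected cost of ignoring a low-probability, high-impact event."""
    return probability * damage

# Home insurance: say a 0.2% yearly chance of losing a $300,000 home.
fire_risk = expected_loss(probability=0.002, damage=300_000)  # $600 per year
premium = 500                                                 # yearly premium

# AI safety: even a small chance of a very costly outcome can justify
# a comparatively modest investment in safety research today.
agi_risk = expected_loss(probability=0.01, damage=10**9)      # $10M in expectation
safety_budget = 10**6

print(f"Buy home insurance? {premium < fire_risk}")           # True: 500 < 600
print(f"Fund AI safety?     {safety_budget < agi_risk}")      # True: 1e6 < 1e7
```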

Once AGI and ASI arrive, there are three general paths to what our future might look like:

1) Humans solve the value- and goal-alignment problem, coexist with machines, and likely end up ‘boxing’, or jailing, the AI;

2) Humans merge with AI technology in a kind of cyborg scenario;

3) Humans are usurped by machine overlords.

The ‘Breaking Out’ Scenario

There is an interesting thought experiment in the book. Tegmark describes a company that produces the world’s first superhuman general intelligence and monopolizes the field for its own purposes (or rather, as we learn later on, for its AI’s purposes). What does this “winner takes all” scenario look like?

Interestingly, once there, the ASI first enters journalism. From there it spreads, manipulating the decision making of businesses, governments and citizens. At some point, the ASI breaks out, identifying weak spots in the personal data of one of its human supervisors. Without spoiling the story for “Life 3.0” readers, the experiment demonstrates the limits of simply unplugging a smart machine.

The Cyborg Scenario

Garry Kasparov, who lost to IBM’s Deep Blue in 1997, came to realize that the ultimate intelligence is neither purely human nor purely machine. In 2010 he suggested, “What if instead of human versus machine, we played as partners?” Earlier this year PARC researcher Mark Stefik popularized the term “centaur” to describe these human-machine pairs. Tegmark, like Sam Harris and other researchers, believes a cyborg scenario might be inherently unstable. Smart machines will take other smart machines as partners and build even more intelligent machines. They would not require a human to evolve.

Goal Alignment Scenario

In a breakout scenario there are two schools of thought. One is to lock the AI up, confining it. The other suggests this would be immoral, as machines might have subjective experiences (or consciousness), and should therefore be free. The precondition of this freedom is that machines value our goals. Goal alignment is a difficult design task, as computers have to navigate nuances of context, emotion and the individual styles of different people and cultures.
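To see why specifying goals is so hard, here is a deliberately simple toy sketch of goal misspecification. It is my own illustration, not an example from the book: the “cleaning robot”, its proxy reward and all the numbers are hypothetical.

```python
# Toy illustration of goal misspecification (my own example, not Tegmark's).
# Intended goal: keep the room clean. Proxy reward: units of dust collected.
# An agent optimizing the proxy finds a loophole: spilling dust and
# re-collecting it scores higher than simply keeping the room clean.

def proxy_reward(actions: list[str]) -> int:
    """Reward = units of dust collected; ignores whether the room stays clean."""
    reward, dust_on_floor = 0, 5               # the room starts with 5 units of dust
    for action in actions:
        if action == "clean" and dust_on_floor > 0:
            dust_on_floor -= 1
            reward += 1                        # paid per unit collected
        elif action == "spill":
            dust_on_floor += 1                 # loophole: create more mess to clean
    return reward

honest_plan = ["clean"] * 10                       # cleans up, then idles
gaming_plan = ["clean", "spill"] * 10 + ["clean"]  # keeps making work for itself

print(proxy_reward(honest_plan))   # 5  -- room ends clean
print(proxy_reward(gaming_plan))   # 11 -- higher reward, room never stays clean
```

The agent satisfies the letter of its reward while defeating the intent behind it; closing exactly this gap is what goal-alignment research is about.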

While reading about goal alignment, I kept thinking about the many biases in current data sets and algorithms. The values of the engineers building today’s narrow AI are reflected in the solutions they bring to the table.
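As a minimal sketch of how such bias propagates, consider a deliberately skewed, hypothetical hiring data set (my own generic illustration, not drawn from any of the sources below): any model fitted to it will simply reproduce the skew as if it were a “pattern”.

```python
# Minimal illustration of dataset bias (generic hypothetical example).
# If most "hired" examples in the training data come from group A,
# even a reasonable-looking model learns to favor group A.
from collections import Counter

# Hypothetical, deliberately skewed training labels: (group, hired?)
training_data = (
    [("A", True)] * 90 + [("B", True)] * 10
    + [("A", False)] * 50 + [("B", False)] * 50
)

def hire_rate(group: str) -> float:
    """Fraction of training examples in `group` labelled as hired."""
    rows = [hired for g, hired in training_data if g == group]
    return sum(rows) / len(rows)

print(Counter(g for g, _ in training_data))    # group A dominates the sample
print(f"P(hired | A) = {hire_rate('A'):.2f}")  # 0.64
print(f"P(hired | B) = {hire_rate('B'):.2f}")  # 0.17
```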

Safety Design and Social Governance

There are examples in history of safety engineering being built in from the beginning. When NASA sent Neil Armstrong, Michael Collins and Edwin Aldrin to the Moon, they succeeded not because of luck but because of systematic prior planning. Technological capabilities, combined with human wisdom about how best to deploy them, made the mission successful. This was a story of leadership and goal-oriented problem solving.

Since AGI and ASI are decades away, most people might think there is no need to worry about the implications at this point in time. Nevertheless, even narrow AI challenges life and the societal order as we know it. Countless publications cite job losses, the need to adopt a Universal Basic Income to cover necessities, and the benefits of offering life-long educational opportunities to escape stagnation and large inequalities. According to Tegmark, if humanity wants to win the race for safe and beneficial AI, we need to fund AI safety research today.

I believe frameworks for Social Governance of AI need to emerge. According to my friend Trent McConaghy, an AI and blockchain entrepreneur, not thinking about governance while designing technology results not only in bad governance, but in a lack of governance altogether.

To achieve Social Governance of AI, technology players have to increase transparency over their areas of expertise. I believe that establishing an Ethics Advisory Board should be the rule rather than the exception. Traditional businesses should consider AI within their sustainability frameworks; in the end, their competitiveness depends on how AI-agile they become, since in the not-so-distant future products could be designed in an AI-centric way. New data market infrastructure has to emerge to guarantee that AI knowledge and capabilities are not limited to just a handful of players. Municipalities need to take a prominent role in the AI discussion and have talent in place to ensure AI is used to make cities more livable than they are today in so many geographies. Last but not least, education and research facilities need funding to attract talent for AI safety research and to design programs for life-long education.

Last Words

In my eyes, Tegmark has written a truly remarkable book for current and future leaders. The most interesting question to him is not which ASI scenario is most likely to happen, but which ASI scenario we actually want to happen. We need to ask ourselves what kind of life we would like to have in the future; only when this is clear can we steer in that direction.

Sources

Kate Brodock, “Why we Desperately Need Women to Design AI”, Medium, August 6, 2017.

Ariel Conn and Max Tegmark, “Life 3.0. Being Human in the Age of AI”, Podcast of the Future of Life Institute, August 29, 2017.

Ariel Conn and Robin Hanson, “On the Age of Em”, Podcast of the Future of Life Institute, September 28, 2016.

Ariel Conn, “AI: The Challenge to Keep it Safe”, futureoflife.org, September 12, 2017.

Robin Hanson, “The Age of Em. Work, Love and Life when Robots Rule the Earth”, Oxford University Press, 2016.

Sam Harris and Max Tegmark, “The Future of Intelligence”, Podcast “Waking Up”, August 29, 2017.

Garry Kasparov, “Deep Thinking. Where Machine Intelligence Ends and Human Creativity Begins”, John Murray, 2017.

Ray Kurzweil, “The Singularity is Near. When Humans Transcend Biology”, Penguin Books, 2006.

Ray Kurzweil, “How to Create a Mind. The Secret of Human Thought Revealed”, Penguin Books, 2013.

Max Tegmark, “Life 3.0. Being Human in the Age of Artificial Intelligence”, Allen Lane, 2017.

Rowan Trollope, “AI and Our Kids: Raising Centaurs”, Medium, October 11, 2017.


dr. anastassia lauterbach

Tech Entrepreneur, Board Member and Angel Investor. AI, Cybersecurity, IoT. NED @ D&B. Previously SVP Qualcomm & DT; roles @ McKinsey, Daimler and Munich Re.