A Conversation with Michael Shermer about Trump, Transhumanism, and the Future of Our Species

This year marks the 25th anniversary of Skeptic magazine, founded back in 1992 by the intellectual maverick Michael Shermer to combat the spread of superstition and pseudoscience — a problem that is, one might have noticed, still with us today. It’s also the 20th anniversary of Shermer’s book Why People Believe Weird Things, which tackles a wide range of topics, including Ayn Rand’s Objectivism, creationism, and Holocaust denialism. And this fall, Shermer will publish his 200th consecutive monthly “Skeptic” column for Scientific American.

Given this momentous occasion — not to mention the fact that Shermer is a Fellow at my own X-Risks Institute — I reached out to him with a few questions on a variety of topics, from Donald Trump becoming president to the transhumanist goal of attaining immortality. The result was a fascinating glimpse into the mind of one of the most influential public intellectuals of the past three decades.

In your book The Moral Arc, you convincingly argue that our circles of moral concern have expanded over time, resulting in an unambiguous transhistorical trend of moral progress. What do you think about the rise of Donald Trump, which was fueled in part by the racist and misogynistic alt-right movement? Is Trump a mere blip that, like a ripple on the ocean, will have no appreciable effect on the tides? Or could his rise to political power and authoritarian leadership style potentially undercut your thesis of moral progress?

The Trump phenomenon will be talked about for decades to come. He’s an anomaly in every way. I don’t claim to have any special insight into the matter, but I very much doubt that he’s going to seriously harm the country, much less bring about the end of the world. There will be ups and downs. The market is way up at the moment and my 401(k) is very happy about that. But I have grave concerns about his immigration policy, probably his foreign policy (although that remains to be seen), and his economic policy, although he seems to be shifting from pure economic nationalism back to keeping our trade agreements, which is good for the long haul. The alt-right movement was part of his success, but I don’t think a very big part. Many millions of Americans are just sick and tired of politics as usual, particularly the regressive left’s so-called “progressive” policies that even centrists can see are unproductive, and in many cases quite destructive — witness the campus left and their shenanigans with protests, violence, shutting down of speakers, safe spaces, trigger warnings, microaggressions, and the like. Any reasonable person, even someone who would never vote for Trump, sees that, shakes their head in disgust, and pulls the lever for whichever side stands against all that, which in this election carried a Republican label. In the long run, however, I predict that whatever happens in the next four (or eight) years will have little effect in slowing the overall trend of moral progress. It has always been three steps forward and two back. We will carry on. As Adam Smith once replied to someone who asked him about the possibly ruinous effects of a policy, “there is a great deal of ruin in a nation.”

What do you see as the biggest challenge that civilization will have to face in the foreseeable future — say, in the next 50 to 100 years?

In no particular order: getting to nuclear zero (the problem of trust in a game-theoretic analysis is a substantial one), terrorism, climate change, species extinction and environmental degradation, artificial intelligence, the rights of robots, the rights of animals (all sentient beings), the transition from the nation-state to smaller centers of power (which would thus be potentially less susceptible to destructive ideologies like nationalism), full employment in the teeth of automation and AI, and a few others. I don’t think any of these are insoluble, but some are more challenging than others and will have different long-term effects. Of course, 50 to 100 years is a long time. Had you asked this of me (or anyone else) in 1950, who would have predicted the fall of the Soviet Union, the Internet, self-driving vehicles, or computers that fit in your pocket or on your wrist? So…who knows?

Do you believe that emerging technologies like synthetic biology, nanotechnology, and artificial intelligence could fulfill the transhumanist dream of enabling humans to live indefinitely long lives? Is there a real possibility that some people alive today could live for hundreds or thousands — or hundreds of thousands — of years? Are there any good moral objections to altering our human phenotypes?

No, not with the present science and technology we have. I address these issues in several chapter-length answers in my next book, Heavens on Earth: The Scientific Search for the Afterlife, Immortality, and Utopia. There are substantial hurdles to living well beyond the current upper ceiling of around 125 years. It may very well be an engineering problem to solve, but, like artificial intelligence, it is a much harder one than anyone anticipated. I do reject the arguments some people make that it isn’t “natural” to live for centuries or millennia, or that they themselves wouldn’t want to. Baloney. I refute that in my book by asking readers: if your doctor told you that you would die in a week, would you want an extra week? Of course you would. How about an extra month? Certainly. An extra year? Definitely. Why not a decade? Why not indeed? I’ll take it! And so on. No matter how far out into the future you push the death date, assuming one is relatively healthy and not severely depressed, most people would want one more week, month, year, decade….

That said, I think these are the wrong goals at which to aim. It is really quality of life, not quantity, that we should be concerned with, and to that end I am extremely optimistic that synthetic biology, nanotechnology, AI, and all the rest will make our lives substantially better. More than that, economic prosperity will reach levels never seen in history. Economists are predicting that more wealth will be generated in the 21st century than in all previous centuries combined, just as happened in both the 19th and 20th centuries. It is an accelerating growth curve. Poverty will be extinct by 2030 to 2040. By 2100 we will be living in a post-scarcity world. Some call this Trekonomics, as in the Star Trek world of the 23rd century, when everyone has a “replicator” to give them whatever they need. We will have replicators long before then. That will change everything, and mostly for the good.
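[Editor’s note: the arithmetic behind this kind of claim can be sketched with an assumed growth rate, not a figure Shermer cites. If world output grows at 3 percent per year, the rule of 70 gives a doubling time of about 70/3 ≈ 23 years, and a century compounds to (1.03)^100 ≈ 19 times current output. Under sustained exponential growth, each doubling period produces more than all previous periods combined, which is what makes “more wealth than all prior centuries combined” at least arithmetically coherent.]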

Many scholars, including Stephen Hawking, have claimed that space colonization is humanity’s only chance for long-term survival in this hostile and morally indifferent universe. This appears unarguably true with respect to the fate of Earth, which will someday be swallowed whole by the sun as it becomes a bloated red giant. But Hawking believes that human civilization could self-destruct in the next century or so, and that this makes space colonization an urgent goal for our species. What are your thoughts on this? Is colonizing space important in the short term? Or would we merely carry our problems with us into the firmament?

Colonizing space is a good thing to do for multiple reasons, but I think the fear of self-immolation via nuclear war, climate change, AI, etc. is the least important reason to do so (well, I suppose the earth being swallowed by the sun billions of years from now would be lower still). We should colonize Mars and go elsewhere because that is what we do as a species, ever since we migrated out of Africa and spread throughout the globe. The benefits of doing so go far beyond the invention of Teflon or Tang (byproducts of the 1960s space race). We cannot predict what all those benefits will be, but they are going to change the world in mostly positive ways. I just don’t see a downside, short of bankrupting every nation on earth, which isn’t going to happen. We can have rockets and butter both! (But fewer guns in the long run would be good.)

This paints a rather optimistic view of the future — one shared, of course, by many other leading thinkers (such as Ray Kurzweil and Peter Diamandis). But others appear quite worried about what the future might hold. For example, the venerable Bulletin of the Atomic Scientists recently moved the minute hand of the Doomsday Clock forward by 30 seconds as a direct result of Trump becoming president. The reason, according to the Bulletin, is that Trump will impede progress both on curbing climate change (he is a climate denier) and on slowing nuclear proliferation, and that these phenomena constitute clear and present dangers to global civilization. Do you agree with the Bulletin’s assessment and its decision? Or are such fears overblown?

I am highly skeptical of the Doomsday Clock and have discussed it with Lawrence Krauss, who chairs the Bulletin’s Board of Sponsors. I’ve known Lawrence for decades and he’s a good friend, but on this issue we disagree. First, what is the clock actually measuring? Nothing. It’s a metaphor. Fine, but why set it so close to doomsday, as if we could all go extinct at any moment? Who thinks like that? Almost no one. I suggest they move the clock to noon so they have room to move the hands forward and back by many minutes or hours to reflect real changes in policy, such as those related to climate change or nuclear weapons. We should not be Pollyannaish about these matters, but neither need we be Cassandras. There’s a balance, and I think the Bulletin of the Atomic Scientists is too far from the center of it.

Yet there is a growing community of scholars working on the topic of existential risks — i.e., worst-case scenarios that would “cause the loss of a large fraction of expected value” (where this expected value is astronomical). Do you think this scholarship is guilty of alarmism? Are there any fundamental disagreements between you and those who believe that existential risks constitute an urgent topic of neglected research?

I am grateful there are smart people such as yourself, Stephen Hawking, Elon Musk, Martin Rees, John Leslie, Richard Posner, and others who do these calculations. And who knows, maybe the very act of thinking about it will prevent it. But for the most part such calculations are based on “if-then” scenarios, and the further out you go, the less certain each step in the chain becomes. “If this happens, and then that happens, and then this and that happen, and then this, that, and the other happen….” Twenty links in the logic chain later, you arrive at doomsday. All doomsayers do it. The problem is that by step 10 or so the probabilities are completely unknown, so we’re all just guessing at that point. If you then make doomsday predictions on that basis, I would say you’re guilty of alarmism.
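[Editor’s note: the compounding problem Shermer describes is easy to make concrete with hypothetical numbers. Even if every link in a 20-step scenario were independently 90 percent likely — far more confidence than such forecasts can claim — the whole chain would have probability (0.9)^20 ≈ 0.12. If each link is merely a coin flip, (0.5)^20 is about one in a million. The conclusion can never be more certain than the product of its links.]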

Some have suggested that the best way to make a fool of oneself is to make predictions about the future. That said, are there any weak predictions you would be comfortable making about how this great experiment called civilization might turn out?

In the final chapter of my book The Moral Arc, titled “Protopia” (Kevin Kelly’s neologism for gradual progress that is neither utopian nor dystopian), I project into a long-term future that includes not just the colonization of the solar system but ultimately of other solar systems, the entire galaxy, and other galaxies. We’re talking millions of years in the future, but it is doable on a long time scale. Then we will once again branch off into multiple hominin species, as we did in Africa between roughly 6 million years ago, when we diverged from our common ancestor with chimpanzees, and 40,000 to 24,000 years ago, when the Neanderthals went extinct and left us the last hominin on earth. Planets will act as reproductive isolating mechanisms — like islands and continents have on earth — and so human space explorers will act as founding populations starting new species. It will be epic. If only we could live to see it unfold…