
My top three Fermi Paradox solutions

Serafim Batzoglou
Published in Mission.org · 8 min read · Jun 25, 2019

Where are all the aliens?

The Fermi Paradox, in a nutshell, goes as follows: Earth is 4.6 billion years old and life on Earth may be nearly 4.3 billion years old; there are 200 billion stars in the Milky Way, which is over 13 billion years old and about 100 thousand light years in diameter. If a civilization achieves interstellar travel at 2% the speed of light (about thirty-two times the maximum projected speed of the Parker Solar Probe) and starts an exponential process of sending robots across the galaxy to build factories that send more robots, it can explore and colonize the entire galaxy in about 10 million years. If civilizations much older than ours exist and some of them think like Elon Musk, the galaxy should already be colonized, yet we see no sign of intelligent life in our solar system or anywhere else. So, where are all the aliens?
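The 10-million-year figure is easy to sanity-check. The sketch below uses the numbers from the paragraph above plus two illustrative assumptions of mine (hops of ~10 light years between systems and ~500 years per hop to build factories and launch new probes); it is a back-of-the-envelope model, not a simulation.

```python
# Back-of-the-envelope colonization timescale for the Milky Way.
# Figures from the text: ~100,000 ly diameter, probes at 2% of c.
# The per-hop overhead numbers are illustrative assumptions.

galaxy_diameter_ly = 100_000   # Milky Way diameter, light years
probe_speed = 0.02             # fraction of the speed of light

# Pure travel time for a wavefront crossing the galaxy edge to edge:
crossing_years = galaxy_diameter_ly / probe_speed
print(f"Crossing time: {crossing_years:,.0f} years")  # 5,000,000

# Assumed overhead: ~10 ly hops between star systems, ~500 years per
# hop to build a factory and launch the next generation of probes:
hops = galaxy_diameter_ly / 10
total_years = crossing_years + hops * 500
print(f"With replication overhead: {total_years:,.0f} years")  # 10,000,000
```

Even doubling the overhead per hop keeps the total within a few tens of millions of years, a blink on galactic timescales.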

The Fermi Paradox has been discussed extensively, with hundreds of proposed explanations and their variants. At first, the entire argument sounds silly. However, it is reasonable to expect that within the next 1,000 years (perhaps 10,000, perhaps much sooner) we will have the technology to send self-replicating robots, or even the seeds of colonies, across stellar systems. The exact speed of travel doesn't matter: 10% of the speed of light is great, but 1% will do. In fact, the $100M Breakthrough Starshot initiative by Stephen Hawking, Yuri Milner, and Mark Zuckerberg aims to develop ultra-light probes, accelerated by light beams to 20% the speed of light, to reach and explore nearby star systems. The time it takes to build a colony or factory and send more probes does not make a big difference. Maybe some civilizations are not expansionist, but even if only a few of them are, the argument holds. And there are excellent reasons to expand: the first civilization to conquer the galaxy, our local galaxy group, or even our supercluster will be in a position to eliminate threats or send alerts across the network at the speed of light, providing eons of early warning. It is hard to imagine even a peaceful, inward-looking civilization not worrying about potential rival, aggressive civilizations. Our local Virgo supercluster is about 100 million light years across and part of the larger Laniakea supercluster, which encompasses 100 quadrillion stars and is 500 million light years in diameter. It is far-fetched but conceivable that an ancient civilization, born over 5 billion years ago around one of these 100 quadrillion stars or their predecessors, is today close to completing an exploration with scouting drones across the entire supercluster.
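To give a feel for these speeds and distances, here is a minimal travel-time sketch. The Alpha Centauri distance (~4.37 light years) is a standard value, not from the article, and the calculation ignores acceleration, deceleration, and relativistic effects.

```python
# Cruise times at the probe speeds discussed above (no acceleration,
# deceleration, or relativistic corrections; distances in light years).

def travel_years(distance_ly: float, fraction_of_c: float) -> float:
    """Time in years for a probe to cover distance_ly at the given speed."""
    return distance_ly / fraction_of_c

alpha_centauri_ly = 4.37  # nearest star system (standard value)

print(travel_years(alpha_centauri_ly, 0.20))  # Starshot target: ~22 years
print(travel_years(100_000, 0.02))            # across the galaxy: 5 million years
print(travel_years(500_000_000, 0.10))        # across Laniakea at 10% c: ~5 billion years
```

The last line shows why the supercluster scenario requires a civilization billions of years old: even at 10% of the speed of light, crossing Laniakea takes about 5 billion years.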


It is not easy to dismiss the Fermi Paradox. Either technological civilizations, or even life itself, are extremely unlikely or too short-lived, or one of many exotic explanations holds. For instance, aliens may have already conquered the galaxy and be among us; or they retreated into digitally simulated life rather than venturing into space; or perhaps we live in a computer simulation in which Earth is the only simulated planet with life. Here, just for the record, I briefly go over the three explanations I find most likely (at least today).

One important requirement for an explanation is that it should not rely on undue assumptions about other civilizations. “Other civilizations don’t wish to expand like we do” won’t do: some of them may not, but some may. It only takes one civilization with a mentality similar to ours to expand across the galaxy. If we are the first, and if we don’t destroy ourselves, most likely we will eventually expand.

So here are my top three explanations:

A technological great filter lies ahead of us. I find it likely that science and technology require a certain free spirit of innovation, exploration, and individuality. Across our history, the leverage of an individual to cause damage has increased steadily. Ten thousand years ago, a strong and mean individual could kill a few people and bring down a hut or two. Today, a bad actor can cause much more harm. What if there is a technology that will unavoidably be invented, and that gives anyone the ability to instantly and irreversibly destroy their civilization? For example, an exotic and easily tapped energy source, or downloadable code for grey goo. If such a technology inexorably lies ahead of us, which is plausible, it is difficult to imagine how we could prevent every single individual from deploying it. What about other civilizations: could a collectivist civilization akin to an ant colony avoid such doom? Brains are expensive; in a collectivist civilization that confers no evolutionary advantage on individual intelligence, “free riders” will shed their brains. So it is conceivable that every technological civilization consists of competing individuals, and that in every single one of them some individual eventually and inexorably triggers the doomsday machine. One catch to this explanation: for “best results,” the doomsday machine must be triggered before exponential space exploration commences.
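The “every single individual” point can be made quantitative with a toy model. The numbers below are purely illustrative assumptions of mine: if each of N individuals independently triggers the doomsday technology with some tiny probability per year, doom at the civilization level becomes near-certain on short timescales.

```python
# Toy model of the "one bad actor" argument. All numbers are
# illustrative assumptions, not estimates from the article.

def p_doom_per_year(n_individuals: float, p_individual: float) -> float:
    """Probability that at least one of n individuals triggers doom in a year."""
    return 1.0 - (1.0 - p_individual) ** n_individuals

def expected_survival_years(n_individuals: float, p_individual: float) -> float:
    """Mean waiting time until the first trigger, in years."""
    return 1.0 / p_doom_per_year(n_individuals, p_individual)

# 10 billion individuals, a one-in-a-trillion chance per person per year:
p = p_doom_per_year(1e10, 1e-12)
print(p)                                     # ~0.01 per year
print(expected_survival_years(1e10, 1e-12))  # ~100 years
```

Even with an individual trigger probability of one in a trillion per year, a civilization of ten billion lasts only about a century on average, which is why the filter is so hard to escape without eliminating individuality itself.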

Aliens are among us. The first civilization to develop space travel, if similar to us in mindset, will likely want to expand at least defensively across the galaxy and beyond, if nothing else to prevent future aggressor civilizations from expanding, or because it is aware of the destructive abilities of even inferior civilizations (think: grey goo) and wants to monitor the galaxy. A defensive expansion is more likely, a no-brainer even, compared to a rapid colonization, which has the downside of creating potential future competitors. A civilization that interconnects into a big internet-brain may have little use for distant colonies and may expand at a rate much lower than 1% of the speed of light. In the defensive expansion scenario, the civilization will still rapidly send robot factories to build drones that monitor all interesting planetary systems, ready to unleash destructive force on anything that looks threatening. Incidentally, UFOs are becoming mainstream. If UFO reports are to be believed (OK, a big IF), then the reported UFOs are acting exactly as expected of drones that inspect things, are unconcerned about us, and are ready to engage in case anything they deem threatening appears. Which raises the important question of what they might deem threatening. Or perhaps aliens are among us in the quantum realm or in some other unexpected physical form. Exponential technological progress has to reach one or a few phase transitions, after which all bets are off. To advanced aliens, components such as neurons or silicon transistors will seem hopelessly bulky and inefficient as computational building blocks. Hence, as a colleague pointed out, SETI is severely outdated, using the technology and reasoning of the 1950s to search for aliens, and should broaden its scope and methods. I bet Carl Sagan, my childhood hero and a pioneer of SETI, would agree.

A 2004 encounter near San Diego between two Navy F/A-18F fighter jets and an unknown object. Photo: US Department of Defense.

Technological civilizations are unlikely. This is the explanation I find least likely (rather, I leave room for an entirely different explanation, such as a specific and compelling hypothesis of why a sufficiently advanced civilization finds the visible universe uninteresting or explores it invisibly). The nervous system has most likely evolved only once on Earth; higher intelligence, however, has evolved independently multiple times. Orangutans and chimps, dolphins and whales, elephants, ravens and crows, kea and African Grey parrots, and, very independently, octopuses and squids have remarkable intelligence. Many species use tools. We are the first species to develop technology on Earth, but isn’t it a stretch to assert that if we weren’t around, no other species on Earth would develop technology in the next 100 million years? Or 1 billion years? What if life itself is vanishingly unlikely? Again, I don’t think that’s a robust explanation. The first step of life cannot be unlikely: while liquid water appeared on Earth 4.4 billion years ago, the first evidence of life may date back to 4.3 billion years ago, which hints at life originating quickly, in geological terms, once conditions are right. If any step in the evolution to intelligence were vanishingly unlikely, that step would most likely have taken a disproportionately long time on Earth. That is not what we observe: the last universal common ancestor appears about 3.5 billion years ago (bya), after a steady evolution of basic biomolecular functions; photosynthesis appears 3 bya; land microbes 2.8 bya; cyanobacteria’s oxygenic photosynthesis 2.5 bya; eukaryotes 1.85 bya; land fungi 1.3 bya; sexual reproduction 1.2 bya; marine eukaryotes 1 bya; protozoa 750 million years ago; and so on, steadily evolving into intelligent species over the past few hundred million years. The coarse-grained breakdown of evolution’s steps in the early billions of years reflects our lack of data on the ancient progression of molecular biology rather than any single vanishingly unlikely event.
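The “no disproportionately long step” claim can be checked directly against the timeline above. This sketch lists the milestones from the text and prints the gap between each successive pair; no single interval dominates.

```python
# Milestones from the text, in billions of years ago (bya).
milestones = [
    ("liquid water", 4.4),
    ("first possible evidence of life", 4.3),
    ("last universal common ancestor", 3.5),
    ("photosynthesis", 3.0),
    ("land microbes", 2.8),
    ("oxygenic photosynthesis", 2.5),
    ("eukaryotes", 1.85),
    ("land fungi", 1.3),
    ("sexual reproduction", 1.2),
    ("marine eukaryotes", 1.0),
    ("protozoa", 0.75),
]

# Print the gap between each pair of successive milestones:
for (name_a, t_a), (name_b, t_b) in zip(milestones, milestones[1:]):
    print(f"{name_a} -> {name_b}: {t_a - t_b:.2f} billion years")
```

The largest gap (first evidence of life to the last universal common ancestor, 0.8 billion years) is the same order as several others, consistent with a steady progression rather than one vanishingly unlikely transition.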


Incidentally, I want to urge against jumping to the anthropic principle and stating that there is nothing puzzling about seemingly being alone, because the sole intelligent civilization is necessarily puzzled about being alone. The anthropic principle is quite unsatisfying to begin with in cosmology. However, at least in that case we have a single observed event to explain (the universe and its cosmological properties) and no expectation of observing other similar events, i.e., other universes. In the case of the Fermi Paradox, because there may be as-yet-unobserved civilizations lurking around, we have to weigh any theory of us being alone against some prior probability of it being true. Given our observations on Earth, the prior probability we assign to technological civilizations cannot be vanishingly small: everything points to steady biochemical and then organismal evolution, from the formation of water all the way to intelligent tool-using species. Therefore, we have to make every effort to exclude other explanations completely before we jump to the conclusion that we are alone.

So where does this leave us? I hope (1) is false. (2) is not good news either. (3) is wishful thinking, or perhaps scary too. I would love to see a better explanation. If you have a favorite explanation in mind, or thoughts to share, please comment below!



Chief data officer @seer; co-founder @DNAnexus; prof. computer science @StanfordAILab, 2001–17; genomics, biomedicine, AI