Behind the Veil of Complexity: A Solution to the Fermi Paradox
Why we might live in a cosmos bursting with super-intelligent civilizations without realizing it.
The Fermi Paradox
Interstellar travel is hard. Even the fastest spaceship ever devised on the basis of technology available today (let alone actually built) would reach a maximum speed of about three percent of the speed of light. At that speed, it would take roughly 140 years to reach Proxima Centauri, the nearest star, which lies a little over four light-years away. Given such numbers, it might seem that interstellar travel and interstellar colonization were never in the cards.
All this may be true from our current perspective, but if we take a more civilizational view of the matter, things do not look that bleak. Let’s do some conservative number crunching. If we could in principle reach three percent of light-speed today, it seems realistic that, provided we don’t die out soon, we will be able to travel to the closest stars at large scale at a third of that speed within the next few centuries. To be totally safe, let’s say that settling the closest stars may take us about a thousand years. Now, given that these colonies will not start from nothing, they should in principle be able to do the same. This yields a settlement rate of about five light-years per thousand years. Thus, on quite conservative assumptions, it should be possible to settle the whole galaxy within roughly 16 million years (it’s about 78,000 light-years to its furthest edge from here).
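The settlement-wave arithmetic above can be checked in a few lines. This is a back-of-the-envelope sketch; every number in it is one of the article’s assumptions, not an established fact:

```python
# Back-of-the-envelope check of the settlement-wave argument.
# All numbers are the article's conservative assumptions.
speed_c = 0.01                  # settlement speed: one percent of light-speed
hop_ly = 5                      # distance to a typical neighboring star (ly)
hop_years = hop_ly / speed_c    # travel time per hop: 500 years
pause_years = 1000 - hop_years  # generous margin for building up each colony
galaxy_ly = 78_000              # distance to the far edge of the galaxy (ly)

wave_rate = hop_ly / (hop_years + pause_years)  # light-years per year
total_years = galaxy_ly / wave_rate
print(f"{total_years / 1e6:.1f} million years")  # → 15.6 million years
```

Even if one pads every assumption by another order of magnitude, the result stays tiny compared to the age of the galaxy.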
The calculation may seem like a useless exercise until one considers that the galaxy is about 13.6 billion years old and that stars and planets roughly similar to our own have been around for many billions of years. That is more than enough time to settle the galaxy a couple of hundred times over, under, I repeat, conservative assumptions. From the standpoint of the natural history of the galaxy, the time needed for galactic colonization is a mere blink of an eye.
Now it is natural to assume that humans are not particularly special. We should therefore probably think of technological civilizations as natural stages in the evolution of some star systems. Considering the age of the galaxy, it is thus likely that many such civilizations have appeared. Together with our assumption that interstellar colonization is possible, and the assumption that because it is possible some civilizations will engage in it, it follows that we should arguably see some signs of alien life around us. Considerations like these may well have come up in the now historical lunchtime conversation about alien civilizations during which physicist Enrico Fermi asked, “Where is everybody?” Since then, this problem has been known as the Fermi Paradox. For the sake of discussion we can formulate the paradox as a trilemma:
A: Given what we know about the cosmos, advanced alien civilizations should be all around us.
B: If there were advanced alien civilizations all around us, we would have realized this by now.
C: We have not realized the presence of advanced aliens so far.
Obviously, one of these has to go. We can thus distinguish three kinds of solutions, namely those that deny theses A, B, and C respectively.
The most obvious solutions are type-A solutions, i.e. solutions that deny some or all of our above reasoning. Maybe interstellar colonization is much, much harder than we have made it seem. Maybe alien life almost never occurs. Maybe such life does occur but rarely ever crosses the boundary to technological civilization. Or, and here comes the downer, maybe technological civilizations have an inherent tendency to destroy themselves shortly after reaching spaceflight capability.
All these are possible, but none convinces me entirely. Life seems to be the natural consequence of complex chemistry and time. For all we know, life started roughly as soon as Earth was cool enough and possessed the requisite chemistry. If life’s emergence were a cosmic fluke, wouldn’t you expect it to happen at some random point, say after two billion years of boredom? The emergence of intelligence seems to be a natural byproduct of the evolution of life, and civilization seems to be a natural byproduct of intelligence at some point. Finally, I might well believe that many civilizations manage to wipe themselves off the map. But it seems strange to suppose that this is some kind of universal law. Type-A answers are not wholly convincing.
What about type-C solutions, i.e. solutions that hold that we do in fact have evidence of alien presence? Such solutions, from ancient aliens to UFOs, obviously attract the more speculative minds. But it seems that whatever evidence there is, it is weak at best.
For most of the time I have spent thinking about the paradox, type-B answers struck me as the least convincing. I thought it was obvious that if the universe were full of alien life, we would have quite unambiguous evidence of it. Those who deny this may speculate that there is some Star Trek-style law that prohibits alien civilizations from interfering with inferior races. Or maybe the best way to settle other worlds is not to send a huge starcruiser but a tiny probe that carries genetic material to seed your kind around the universe. This would imply the thought-provoking consequence that we are the aliens, or at least their descendants. But both of these options rely on a huge amount of guesswork.
I will now argue that there is a form of type-B solution that may be one of the most plausible solutions to the paradox yet. In effect, I will argue that we should expect highly advanced alien civilizations to be practically invisible to our eyes and instruments.
Intelligence is Compression
My idea was triggered by Marcus Hutter’s article Can Intelligence Explode? Hutter is a theoretical AI scientist who has made huge contributions to our understanding of the nature of intelligence. Central to his ideas is the relation between compression and intelligence. We know compression from dealing with .zip files. A compression procedure takes some code and returns a shorter compressed code from which a maximal amount of the original can be recovered. It does this by exploiting patterns within the original. If you compress a music file, and the compression procedure is worth anything, then after decompressing it, it will sound exactly like the original while taking up less space on your hard drive.
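A quick sketch of this in Python, using the standard zlib module (my choice of illustration, not Hutter’s): a highly patterned input shrinks dramatically, and decompression recovers it exactly, bit for bit.

```python
import zlib

# A highly repetitive stand-in for a patterned "music file".
original = b"la" * 10_000
compressed = zlib.compress(original, level=9)

# Lossless: the original is recovered exactly from the compressed code.
assert zlib.decompress(compressed) == original
print(len(original), "->", len(compressed))  # 20,000 bytes down to a few dozen
```

The compressor achieves this precisely by finding and exploiting the repetition; a genuinely patternless input would not shrink at all.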
Central to Hutter’s work is the idea that intelligence equals compression ability. Or, to put it in a philosophically less controversial way, higher intelligence entails better compression. Consider physics as an example. The progress of physics can be seen as an attempt to compress the complexity of the natural world into a few simple underlying equations: a short, compressed piece of code. Generally, the abstraction of high-level patterns from noisy data can be conceptualized as an attempt at compression.
In his article, Hutter discusses the notion of an intelligence explosion, that is, the idea of an artificial intelligence smart enough to improve itself and thus become ever smarter at an accelerating rate. What would the inner workings of such a super-intelligence look like from the outside? Arguably, they would look like pure noise, that is, like random activity. The reason is simple. If intelligence entails compression, then super-intelligence will entail near-perfect compression. What does perfect compression look like? Essentially, it looks like noise from the outside. Suppose you build a compression device that consistently returns triplets of identical 1s and 0s, like 000111111000000000111. Obviously this is a bad compression, for one could easily compress it further, to 0110001, by replacing each triplet with a single digit. And the same holds for every easily discoverable pattern in the compressed code. Thus in perfect compression all discoverable patterns have already been removed. And code without discoverable patterns just looks like noise.
I suspect that this mechanism is behind a whole lot of conundrums of intelligence. It explains why it is so exceptionally hard to make sense of the neural coding of the brain. If it were easy, we would arguably be too dumb to figure it out. Similarly, it explains why AI scientists are increasingly troubled by the task of understanding the systems they themselves have built. Put briefly, if something is easy to understand, it is probably not very intelligent. But if this is true, may we not find that super-intelligent aliens are invisible to our gaze?
The Veil of Complexity
My proposed solution to the Fermi Paradox is that advanced aliens able to pay us a visit have probably advanced beyond their original biological forms. But it would arguably be a mistake to think of such alien super-intelligences as clunky CPUs spread across their home planet or as armies of cyborgs. Rather, the further such super-intelligences advance beyond our own stage, the more they will appear to us as mere sources of thermal radiation, i.e. noise. Thus, such a godlike mind could even share our own solar system with us without ever making itself known.
One objection to this reasoning is that super-intelligences will arguably rely on vast amounts of energy, and thus we should be able to see the traces of this hunger for energy. But this is not necessarily so. Consider that while our current civilization burns huge amounts of fossil fuels, should we be around for another century or so, we will surely have managed a full transition to renewable energy sources. (The reverse holds true, too. Should we not manage this, we will not be around for more than another century.) Stars are the ultimate renewable energy sources, and we should expect alien super-intelligences to use them as such. So what would the ultimate brain look like? There are certainly a number of ways to realize a computer that utilizes the full energetic potential of a star to turn it into thought, such as building a Dyson sphere around it. But there may be ways to use its energetic output in a more direct fashion. That is to say, maybe the ultimate brain would, to us, just look like a star.
There are some open threads here. From our current engineering perspective, it is easier to realize computation in cooler media, essentially because this ensures that thermal noise does not interfere with the computational process. But this of course misses my whole argument: my thesis is that precisely what looks like thermal noise is the output of an immensely complex intelligence. What looks like noise really isn’t, so as far as I can see these considerations do not apply. Furthermore, the idea of a veil of complexity is not intrinsically tied to the idea of stellar brains. Rather, I presume that it could be a general mechanism that explains why finding advanced alien civilizations is hard. Thus there is some hope that some kind of type-B reply may actually resolve the Fermi Paradox.