Bad decisions make good stories. Hostile-by-default-on-contact plots in virtually all alien sci-fi (from comic books to novels to movies) subscribe to this notion over and over again.
An exocivilization advanced enough to come into contact with another civilization would never choose to meet on unfriendly terms. We discussed this quite thoroughly in the previous part of this two-part series on the Fermi Paradox.
By the end of the first part, we had narrowed the prospective possibilities down into three general buckets:
Where is everybody?
1. Most of the common approaches to the Fermi paradox, and the assumptions about how exo-civilizations are likely to interact with one another, are too rudimentary for any alien civilization advanced enough to stumble across another one.
2. The Fermi paradox is only paradoxical (if it is at all) because of our poor understanding of temporal causality. Its premises, as well as most of its possible explanations, rely heavily on time being linear & unidirectional. They seem to make some sense “going forward” but fail miserably to explain the same events in retrospect.
3. We also tend to miscalculate the rate of…
This is not a work of scientific literature; it is a hypothesis at best. This article assumes the reader’s prior understanding of (artificial) intelligence: its plausibility, timelines & risk factors.
TL;DR: the best way to achieve superintelligence is not the ANI → AGI → ASI path; it is a symbiotic/hybrid (human-machine) intelligence iteratively achieving exponential growth.
Presently, the AI status quo is directed toward building a safe AGI.
The discussions, policies & narratives are predominantly us vs. them (humans vs. machines). There is also an enormous effort focused on developing AGI, without enough contingency plans for what happens if/when we get…