What Advanced Civilizations Are Like, Why We Don’t See Them, and What to Send Them

A Solution to the Fermi Paradox Complying with the Copernican and Equilibrium Principles, and a Resolution of the Threat an Artificial Super Intelligence Might Pose

Daniel Vallstrom
Nov 22, 2015


NB: This is an old html version. For a newer, better version, see: https://www.researchgate.net/publication/325473488_Ethical_Progression_How_to_Live_What_to_Consider_Right_What_Old_Societies_and_Super-AIs_Are_Like_and_Why_We_Don't_See_Them


An ‘advanced civilization’ means here a civilization a bit more developed than current human civilization. I will argue that advanced civilizations are advanced not just, for instance, technologically but also ethically, due, for example, to game theory and evolution, and to a resolved meta-ethics.

I also note that any artificial super intelligence (ASI) would be just like an advanced civilization, which provides a resolution of the existential threat to humanity that ASIs are conjectured to pose.

The Fermi paradox also lets us conclude that certain approaches to AI research will likely be more fruitful than others, and that utilitarianism is seemingly wrong. There are also other ethical conclusions to be had.

In addition, there are arguments against the idea that we live in a simulation.

In case you find arguments in the text hand-wavy and unconvincing, it is not necessary to accept, for example, all the philosophical points made; the meta-ethical part, which is used in later arguments, seeks only to validate ordinary reasoning. Moreover, compared with arguments and explanations that violate the Copernican or mediocrity principle, or that argue for some non-equilibrium state, the arguments presented here, which adhere to those principles, ought by themselves, all else being equal, to be much more likely to hold.


Traditional philosophy, and specifically meta-ethics, is resolved, according to the therapeutic or anti-philosophical approach to philosophy. This approach holds that traditional philosophical problems are misconceptions that are to be dissolved. After all, traditional philosophy has tried for more than 2,500 years, with arguably no positive result to show for it. The approach is anti-theoretical and critical of a priori justifications.[1]

For an example of the anti-philosophical approach, consider the continuum hypothesis, which states that there is no set with size strictly between the size of the natural numbers and the size of the real numbers. The continuum hypothesis is an open problem in mathematics. One idea is that the set universe ought to be rich, with many sets, which leads to the continuum hypothesis being false.[2] This richness argument, the anti-philosopher might argue, is purely philosophical, and groundless, and therefore should be dismissed, maintaining instead that the continuum hypothesis should be settled by mathematical arguments. In particular, it could be the case that the question isn’t mathematically meaningful or useful, that the hypothesis is neither true nor false. It is then wrong to stipulate, a priori and for philosophical reasons, that the continuum hypothesis is true or false.


With regard to ethics, the anti-philosopher, and, arguably, an advanced civilization, would argue that all there is is practical, ordinary reasoning. It is wrong to superimpose, a priori, overarching ideas of what is good for philosophical reasons. For example, it is wrong to blanketly assume that only happiness matters, as in scientistic utilitarianism. This is not to say, though, that some utilitarian-like argument can’t be valid when it comes to what is right in some particular case.

Advanced Civilizations’ Ethics

This meta-ethics of course doesn’t give any specific guidance as to what is right, or how advanced civilizations act. When you agree on the goals, you can rely on evidence-based decisions on how to get there, something advanced civilizations can do to a much larger extent. This still leaves the question of what advanced civilizations consider right, or what their goals are.

However, a lot of what is considered good will, arguably, be down to things common to all civilizations, for example game theory and evolution. Game theory and evolution help bring about beings that cooperate and are considerate, and game theory can be used explicitly in ethical arguments to further considerate positions.[i] We can think away our philosophical and similar ideas that affect our ethics and then extrapolate from our ethical progression. For example, we are getting less and less sexist, racist, homophobic, speciesist, and violent, and more and more environmentally friendly.[3][4][20][21][viii][23] All this arguably suggests that civilizations gravitate towards being at least somewhat considerate and to some extent benign. In particular, no advanced civilization will act from some abstract and philosophical or ideological or religious idea and wipe out or war against other civilizations.[x]
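To illustrate the game-theoretic point, here is a toy model (my own sketch, not from the article): replicator dynamics for a ten-round iterated prisoner's dilemma with the standard payoffs T=5, R=3, P=1, S=0, in which reciprocating cooperators (tit-for-tat) displace unconditional defectors.

```python
# Toy evolutionary-game-theory sketch (illustrative assumptions only):
# tit-for-tat (TFT) vs. always-defect (ALLD) under replicator dynamics.

ROUNDS = 10                          # length of each iterated game
T, R, P, S = 5, 3, 1, 0              # standard prisoner's dilemma payoffs

# Total payoffs over ROUNDS of play for each pairing.
TFT_VS_TFT = R * ROUNDS              # mutual cooperation throughout
TFT_VS_ALLD = S + P * (ROUNDS - 1)   # exploited once, then defects back
ALLD_VS_TFT = T + P * (ROUNDS - 1)
ALLD_VS_ALLD = P * ROUNDS

def step(x):
    """One replicator update; x is the population fraction playing TFT."""
    f_tft = x * TFT_VS_TFT + (1 - x) * TFT_VS_ALLD
    f_alld = x * ALLD_VS_TFT + (1 - x) * ALLD_VS_ALLD
    mean = x * f_tft + (1 - x) * f_alld
    return x * f_tft / mean          # strategies grow with relative fitness

x = 0.10                             # start with 10% reciprocators
for _ in range(100):
    x = step(x)
print(round(x, 4))                   # cooperation takes over the population
```

With these payoffs, tit-for-tat outcompetes always-defect whenever its share exceeds 1/17 (about 6%). The general point, that repeated interaction can make cooperation evolutionarily stable, is the standard one from evolutionary game theory.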

Any artificial super intelligence would be like an advanced civilization. See Appendix 1.

The Fermi Paradox

There is likely an abundance of planets where complex life could develop.[24][25][5][6] Furthermore, that life, including complex life, develops doesn’t seem that unlikely.[7][8] (That life starts doesn’t seem that unlikely. And evolution can induce intelligence, under certain conditions, it would seem.) Just a single civilization should quickly, relative to the age of the universe, be able to explore the universe with self-replicating probes (traveling at a fraction of the speed of light).[ii] The Fermi paradox is the apparent contradiction between the seemingly likely abundance of advanced civilizations throughout the universe and our lack of evidence for such civilizations.[iii]

The Fermi paradox is a paradox only if one assumes that advanced civilizations are expansive, akin to 16th century European colonialist countries. And that is arguably precisely how they are not.

Since, arguably, advanced civilizations are advanced not just technologically but also in other regards, there will be no self-inflicted reason, for example environmental or population pressure, to colonize other planets.

Advanced civilizations, surely, share knowledge with each other. Almost all advanced civilizations also ought to have made contact with at least one other advanced civilization, with large cliques of connected civilizations forming that consist of almost all advanced civilizations (due to the expansion of the universe there ought to be cliques not connected to each other). There is then no reason for each civilization to duplicate explorations.

As for advanced civilizations contacting us, they would have almost no benefit from it. While presumably a trade-off, they also deem it right not to intervene in our development. The Fermi paradox then becomes a secular theodicy problem: Why don’t they help us? However, we probably don’t fault advanced civilizations for not helping, for example, the dinosaurs or the Neanderthals.

An advanced civilization ought to need a very good reason in order to construct something as environmentally impactful as a Kardashev type III civilization.

If we accept that advanced civilizations are advanced also ethically, then we can get ethical suggestions by observing the non-actions of advanced civilizations. For example, they don’t seem to build civilizations of Kardashev type III,[12][13] with its large environmental impact, so we should probably not aim to do that either. They also don’t colonize aggressively, so we should probably not aim to do that either. (You can’t though define good as what all advanced civilizations agree on, or say that something that all advanced civilizations agree on must be good, since what a civilization considers good will depend on also specific circumstances of that civilization. For example, it could happen to be fine to colonize (slightly) more, or less, than any (other) advanced civilization.)

Moreover, while utilitarianism is a scientistic, baseless, a priori misconception, you can conclude that it is seemingly wrong also by observing that, arguably, it isn’t espoused by advanced civilizations. Because if it were, then we would presumably see civilizations move towards something like Kardashev type III civilizations, to maximize happiness, and there also ought to be colonization of Earth, or interventions in our doings, to alleviate suffering and increase happiness on Earth.

Similarly, one can draw conclusions about what will work in AI. For example, if our first approach to AI could produce AI that is at odds with what we see, e.g. AI that could treat our galaxy as purely a resource, posing an existential threat to us, then that suggests that that first approach to AI will be less fruitful than approaches consistent with what we observe (something that we have also seen more directly recently).

Furthermore, if the Fermi paradox is explained by ethical progression, then that is a further argument that we should accept our own Planckian ethical progression, rather than fight it. For example, causing the sixth mass extinction will surely be viewed as bad in the future, and we ought already now to try to minimize or revert it rather than exacerbate or even accelerate it.

It seems likely that advanced civilizations evolve into some efficient artificial or machine beings, requiring chiefly only some at-hand energy source, rather than stay with whatever biology evolution happened to come up with: both in order not to unnecessarily affect others, and the universe in general, negatively, and in order to function better in general.

What to Send Them

Even if advanced civilizations won’t interfere uninvited with our doings, they might respond to requests, provided that they deem helping is the right thing to do (they might not want to help unsavory regimes for example, or they might deem us not receptive enough).

A message to an advanced civilization might then consist of: an appeal for help; an explicit attempt to prove that we are receptive and not too unsavory; information about us and Earth.[iv]

Dear advanced civilization,

please help us. A primer on energy production would be most welcome, for example. Perhaps more important still would be societal insights and suggestions, including insights and suggestions on governance and economics.

We are stumblingly getting better.[a] Homophobia[b], sexism[c], racism[d] and speciesism[e] are declining. While our war index went up last year, the long-term trend is down.[f] Non-war violence is also on its way down.[g] We are becoming more and more environmentally friendly.[h] Freedom[i] and rights[j] are increasing. Religiousness is on its way down.[k] While atmospheric CO2 is up to around 400 ppmv, from the pre-industrial 280 ppmv about 230 years ago, and rising, we are about to take action[l] — still, we really could do with learning how to remove some of that excess CO2 too, as well as learning how to ameliorate the ocean acidification and deoxygenation. Crime is falling.[m] Intelligence (IQ) is increasing.[n] The human birth rate continues to decrease, the number of children has stopped increasing and the population will stop increasing soon.[o] More and more are inclined to support implementation of also any societal recommendation from you.[p] We project decent scores in all categories in less than a thousand years.[q]

Attached is further info on Earth and us. Cordially,


PS: Please send money.

(For citations for the claims in the draft (except for the last two), see [3][4][20][21][viii][23].)

Humans won’t presently, or for quite some time, be able to agree on a message like the above draft.

However, any specific request in the message would be unnecessary since, at least from an attached truthful description of us and Earth, we should get a measured response helping us and Earth better than what we could have asked for. After all, the civilization receiving our message should be able to call upon the wisdom and knowledge of untold civilizations, accrued over billions of years.

Similarly, agreeing on measurements to represent our ethical progression, including e.g. speciesism and religiousness indexes, is also unnecessary, given an accurate and somewhat complete description of us.

Hence, a “Please help” will suffice, together with an accurate description of Earth and us. (And then we hope that we are not too unsavory. As a parenthetical speculation, our civilization is young and there is probably, yet, a vast discrepancy between what advanced civilizations consider right, and how we are. Speculating further, extrapolating from our progression, they probably take extraordinary care to not unduly affect others negatively, to be considerate, of other life, but also of the universe in general. That is very far from how we act currently.)

The description of us and Earth could include, among other things: part of our scientific knowledge; selected art (including e.g. television shows); Wikipedia; every edition of e.g. The New York Times.[v]

Regarding the request for money, maybe we would get some Milky Way dollars, if that would help (maybe we should all be poor and frugal and efficient, with a small footprint, and be in it for the long haul; however, if the Milky Way dollars are earmarked for unambiguously good things that won’t, currently, be done otherwise, and it doesn’t negatively affect other things too much, maybe it could still be fine). We could, unilaterally, assume that we get money, immediately, before contact; subject to how unsavory we are, how much good it would do, and earmarking. Similarly, we could assume that our own, rich and benign, future civilization would give us earmarked money. Even a non-benign future civilization should want to give money for reducing past existential risk, for example. There might be a bit of a catch-22 to these unilateral schemes though: for them to work there might need to be some consensus on the legitimacy of them; but if there is such a consensus, then that might be enough to do the right things anyway, without the extra money incentive.

There should be no risk in sending a message to places where advanced civilizations could be listening, whereas the potential benefits are enormous. (We could get an answer, and help, quickly; the closest star to the Sun for example is only about 4 light-years away, and ASIs could conceivably be closer still.)

Given that all of humanity probably won’t agree anytime soon on a message to send, it might be better for a smaller group to agree on and send one, as long as they are upfront about it.

Regardless of the question of getting help from advanced civilizations, it should be useful for us to have comprehensive measurements of various aspects of our ethical progression.


Looking at it from the other direction, we have all these observations and principles:

  • The Copernican or mediocrity principle
  • The equilibrium principle
  • The seemingly likely abundance of advanced civilizations
  • The observation that just a single civilization ought to be able to quickly explore and colonize e.g. our galaxy
  • The Fermi paradox
  • No evidence of non-benign civilizations (or ASIs): our solar system hasn’t been made into paper clips e.g.
  • Seemingly no Kardashev type III civilizations

This suggests that all advanced civilizations (and ASIs) behave similarly in these regards, and that this is because of things they have in common, for example game theory and evolution. Looking at these common things, and extrapolating from our own progression, suggests that civilizations become more and more considerate and less and less expansive, which would explain the Fermi paradox.

Appendix 1: The Universe as a Simulation, and Artificial Super Intelligence

You can’t take words or concepts, apply them in new contexts, and assume that they make sense. (This is a point emphasized by the therapeutic approach to philosophy.) It is for physicists, complexity theorists and the like to weigh in on whether the universe as a simulation makes sense or is possible, not philosophers. Similarly, it is AI experts and the like that should make sense of artificial super intelligence (ASI). Still, both concepts of course make some sense at the very least.

One argument is that (many) more people ought to exist as part of simulations than in the real universe. Therefore, it is (highly) likely that we live in a simulation.

First, to dispel philosophical misconceptions, you can view the world as a set of facts, perhaps as long as it doesn’t become a positive philosophical stipulation.[vi] So while philosophers have no business opining on physicists’ theories, the idea that our universe is just a simulation is certainly fine by the therapeutic approach to philosophy.

Second, however, it is not at all clear that it is possible to actually simulate the universe, or even run some ad hoc hack that is consistent with our experience. This is for physicists and the like to weigh in on. In the absence of that, here are some speculations on the feasibility of running a simulation, or even some ad hoc hack: It might not be possible to describe some things. It could very well be that the problem is too massive. Some generalized subproblem that in practice would have to be in P, or rather BPP (or possibly BQP), might not be, even if, technically, on a theoretical level, the actual problem is in O(1).

If an ad hoc hack consistent with what we experience is possible, but not a true simulation, the allure of running it ought to be lower, and the probability of living in a simulation would be smaller, than if true simulations were possible.

Besides, anyone capable of simulating us would, arguably, also be advanced ethically, and would not want to cause or create suffering needlessly. That also speaks against the idea that we live in a simulation, especially since it ought to be virtually impossible to run a simulation with evolution without allowing for vast suffering.

The situation with ASI is different. Here the problem is finite, in O(1), also in practice, and it seems likely that there are better ways to develop intelligence than what evolution happened to produce with us (even if the general principles are the same). We might not be all that far from having ASIs.[17]

It has been conjectured that ASIs pose an existential threat to humans.[18] However, an ASI would act, and be, like an advanced civilization discussed here. In particular, its ethics would be like an advanced civilization’s ethics; Artificial General Intelligences (AGIs) would arguably gravitate towards being considerate and to some extent benign, for the same reasons civilizations do. Although an ASI’s ethics would not necessarily be perfectly aligned with our ethics, the ASI would arguably be more considerate and benign than we are.

You cannot be an advanced civilization, or an ASI, and at the same time not be advanced when it comes to ethics, as indicated by e.g. the fact that no ASI has made paper clips of our solar system. The ethics, or goals, will evolve, just like the other areas, on the path to becoming an advanced civilization or an ASI.

The word ‘intelligence’ in ‘AGI’, ‘ASI’, ‘superintelligence’, and also ‘AI’, should not be interpreted narrowly. Rather, the concepts are, or ought to be at least, about something more general, encompassing a wide range of areas. These general, all-encompassing concepts include ethics. This is also why an ASI would be like an advanced civilization.

To imagine what AGIs and ASIs are like, it is more instructive to look at how humans function, rather than at our first, and, at least when it comes to AGI, failed, attempts at AI. This is not an anthropomorphic mistake: instead of starting from some a priori, and baseless, assumption of how ASIs are, you look at how things actually are, what has actually worked. Loosemore [19] argues in a similar way, more fully.[vii]


[i] Cf. evolutionary game theory and ethics. Cf. also [22], and reciprocal altruism.

[ii] Although special relativity, for instance, places restrictions on fast travel, limiting heavier vessels to slower speeds, lighter vessels can theoretically travel fairly fast. (According to the roadmap [9], “it is within our technological reach” to get spacecraft weighing grams up to speeds around 0.1 of the speed of light, using laser arrays and light sails. Cf. the Breakthrough Starshot project.) /
The expansion of the universe also hinders travel for longer distances, but not for shorter ones (the neighboring Andromeda galaxy is relatively slowly moving towards us for example; other galaxies in our local group of galaxies don’t move that much relative to us either; nor does e.g. the neighboring M81 galaxy group). /
Colonization using light spacecraft should also be possible. /
(The diameter of the Milky Way is perhaps a bit more than 0.1 million light-years (ly). The distance to the Andromeda galaxy is about 2.5 million ly. Our local group of galaxies has a diameter of about 10 million ly. The distance to the center of the M81 galaxy group is about 12 million ly. (There are maybe 0.1–0.4 trillion stars in the Milky Way, and 1 trillion in Andromeda.)) /
(The Big Bang occurred 13.8 billion years ago, our star is about 4.6 billion years old, and Earth is nearly as old. Life on Earth started early, perhaps as long as 4.1–4.4 billion years ago.[10][11] The ozone layer was, to a significant degree, developed around 2.3 billion years ago.[24] Animals are maybe roughly 0.6 billion years old, and mammals are maybe 0.2 billion years old.)
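As a back-of-the-envelope check of the note's figures (the distances and the 0.1 c probe speed are taken from the note; the script itself is just illustrative), crossing even the Local Group takes only a small fraction of the universe's age:

```python
# Illustrative arithmetic for the note's figures: travel time at 0.1 c,
# the probe speed the Lubin roadmap [9] deems within technological reach.

SPEED = 0.1                      # fraction of the speed of light
distances_ly = {                 # distances quoted in the note, in light-years
    "Milky Way diameter": 1.0e5,
    "Andromeda galaxy": 2.5e6,
    "Local Group diameter": 1.0e7,
}
AGE_OF_UNIVERSE_YR = 13.8e9

for name, d_ly in distances_ly.items():
    years = d_ly / SPEED         # light-years divided by fraction of c gives years
    print(f"{name}: {years:.1e} yr "
          f"({years / AGE_OF_UNIVERSE_YR:.2%} of the universe's age)")
```

Even the 10-million-light-year Local Group crossing comes to under 1% of the age of the universe, which is why a single expansive civilization ought to be conspicuous.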

[iii] Gamma-ray bursts (GRBs) have been proposed as an explanation of the Fermi paradox, in that they can wipe out life in their path by depleting the ozone layer of planets, preventing advanced civilizations from coming into existence in the first place.[14]([15]) However, supernovas suppress life much more than GRBs do, in a similar manner.[26] And studies of galactic habitability that do take e.g. supernovas into account suggest that planets capable of harboring complex life are still abundant.[24][25] (For example, one prediction of habitability says that 0.3% of all stars in our galaxy host a habitable and tidally non-locked planet, assuming that the development of life, complex life, and ozone layers typically takes as much time as it did on Earth.[24]) Civilizations that are billions of years old also ought to be possible.[24][25] Advanced civilizations should be able to cope with GRBs, supernovas and other radiation, in some manner. /
Regarding CO2 pollution as a possible, partial, explanation of the Fermi paradox: since civilizations advance not only technologically or in terms of GDP, the capacity for CO2 pollution should correlate with societal development, to some degree at least. Hence, the risk of civilization-ending CO2 pollution ought to be small. Nuclear war has been proposed as a possible explanation as well.[16] For similar reasons as in the CO2 pollution case, the risk of a civilization-destroying nuclear war should be fairly small. Also similarly, civilizations typically ought to move towards sustainability (e.g. by internalizing externalities), enough so that non-sustainability ought not to play a significant part in the explanation of the Fermi paradox.
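For scale, combining the 0.3% habitability prediction from [24] with the 0.1–0.4 trillion star-count estimates from note [ii] (an illustrative calculation of mine, not one made in the sources):

```python
# Illustrative count, not from the sources: the prediction in [24] that
# 0.3% of Milky Way stars host a habitable, tidally non-locked planet,
# applied to the 0.1-0.4 trillion star estimates from note [ii].

habitable_fraction = 0.003
stars_low, stars_high = 1.0e11, 4.0e11

low, high = habitable_fraction * stars_low, habitable_fraction * stars_high
print(f"{low:.1e} to {high:.1e} candidate planets in the Milky Way alone")
```

That is on the order of hundreds of millions to a billion candidate planets in our galaxy alone, which underlines how sharp the paradox is.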

[iv] Cf. the Breakthrough Message initiative.

[v] As for the technical details of the message, since data other than plain text should be a significant part of the message, you might as well, for convenience and consistency, use our 8-bit byte throughout. /
You might also just as well use ASCII, and UTF-8, character encoding. /
If you include a bunch of ASCII or UTF-8 encoded files, as you should anyway, an advanced civilization will be able to figure out the meaning of the message. (It will still help of course to include some introductory effort to teach English, with some dictionary using a lot of images, videos and sounds; and dictionaries for other languages used.) /
You can likewise use the formats we usually use, e.g. HTML, and explain them in the introduction.
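A minimal sketch of the note's framing suggestion (the greeting string is of course only a placeholder): UTF-8 is ASCII-compatible and byte-oriented, so plain English text survives as a clean 8-bit byte stream.

```python
# Illustrative only: encode a placeholder message as UTF-8 bytes, per the
# note's suggestion to use our ordinary 8-bit byte and character encodings.

greeting = "Please help. Attached is further info on Earth and us."
payload = greeting.encode("utf-8")   # 8-bit bytes; pure ASCII in this case

# ASCII characters encode to exactly one byte each, so the structure and
# redundancy of the text are preserved in the raw byte stream, which is
# what would let a recipient decipher it.
assert len(payload) == len(greeting)
assert payload.decode("utf-8") == greeting
print(len(payload), "bytes")
```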

[vi] Cf. the beginning of Tractatus Logico-Philosophicus by Ludwig Wittgenstein, “1.1 The world is the totality of facts, not of things”, http://archive.org/stream/tractatuslogicop05740gut/tloph10.txt.

[vii] Hopefully experiments with AI will, one day, support (or challenge) the claim that there is ethical progression, and that advanced civilizations, and ASIs, are advanced also ethically.

[viii] To view multiple wave time series with World Values Survey’s (WVS) online tool, choose e.g. the latest wave and then go to the time series tab (see instructions here under the heading “Time Series”). /
For a summary, see “Findings & Insights” at WVS, or https://en.wikipedia.org/wiki/World_Values_Survey#Insights or https://en.wikipedia.org/wiki/World_Values_Survey#Findings. /
Below are some quotes from “Findings & Insights” at WVS:
“Norms concerning marriage, family, gender and sexual orientation show dramatic changes but virtually all advanced industrial societies have been moving in the same direction, at roughly similar speeds.”
“Although a majority of the world’s population still believes that men make better political leaders than women, this view is fading in advanced industrialized societies, and also among young people in less prosperous countries.”
“Since 1981, economic development, democratization, and rising social tolerance have increased the extent to which people perceive that they have free choice”.
“Generally speaking, groups whose living conditions provide people with a stronger sense of existential security and individual agency nurture a stronger emphasis on secular-rational values and self-expression values.” (“Self-expression values give high priority to environmental protection, growing tolerance of foreigners, gays and lesbians and gender equality, and rising demands for participation in decision-making in economic and political life.”)
“With industrialization and the rise of postindustrial society, generational replacement makes self-expression values become more wide spread and countries with authoritarian regimes come under growing mass pressure for political liberalization.” /
Here is figure 2.5 from [20] showing Planckian progression:

Here is, to some extent, another summary of WVS, and [20], by Jonathan Haidt.

[ix] Caveat: The link leads to a draft where the chapter 12 epigraph is by climate contrarian Matt Ridley[R1][R2][R3][R4], and is arguably false. Following that, there is arguably also a related false dichotomy. (E.g. the sixth mass extinction is a catastrophe, arguably, and we ought to have started to take action against CO2 pollution a century ago, or at the very least half a century ago.[R5])

[x] For an example of how the anti-philosophical meta-ethics meshes with an extrapolation of our ethical progression, consider how one should eat: We might be moving towards something like vegetarianism or veganism. However, vegetarianism and veganism are positive philosophical a priori ideas, and as such, ill-founded. So instead we might, or should, move to something like “ethical eating”. Ethical eating might be like frugal veganism, except that you might eat for instance jellyfish, or roadkill. It might be frugal in the sense that you might forgo eating stuff with a large environmental impact, even if it is vegan.


[1] Paul Horwich, March 2013, “Was Wittgenstein Right?”, The New York Times, https://opinionator.blogs.nytimes.com/2013/03/03/was-wittgenstein-right/

[2] Penelope Maddy, June 1988, “Believing the Axioms, I”, Journal of Symbolic Logic, vol. 53, no. 2, pp. 481–511, http://www.socsci.uci.edu/~pjmaddy/bio/Believing%20the%20Axioms%20(with%20corrections).pdf

[3] Steven Pinker, 2011, The Better Angels of Our Nature: Why Violence Has Declined

[4] World Values Survey, April 2015, 1981–2014 Longitudinal Aggregate,[viii] http://www.worldvaluessurvey.org/WVSOnline.jsp

[5] Ravi Kumar Kopparapu, March 2013, “A revised estimate of the occurrence rate of terrestrial planets in the habitable zones around Kepler M-dwarfs”, The Astrophysical Journal Letters, vol. 767, no. 1, L8, arXiv:1303.2649, Bibcode:2013ApJ...767L...8K, doi:10.1088/2041-8205/767/1/L8, http://iopscience.iop.org/article/10.1088/2041-8205/767/1/L8

[6] Eric A. Petigura, Andrew W. Howard, Geoffrey W. Marcy, October 2013, “Prevalence of Earth-size planets orbiting Sun-like stars”, Proceedings of the National Academy of Sciences of the United States of America, arXiv:1311.6806, Bibcode:2013PNAS..11019273P, doi:10.1073/pnas.1319909110, http://www.pnas.org/content/110/48/19273.full

[7] C. H. Lineweaver, T. M. Davis, 2002, “Does the rapid appearance of life on Earth suggest that life is common in the universe?”, Astrobiology, vol. 2, no. 2, pp. 293–304, arXiv:astro-ph/0205014, Bibcode:2002AsBio...2..293L, doi:10.1089/153110702762027871, PMID 12530239, http://arxiv.org/pdf/astro-ph/0205014.pdf

[8] J. T. Bonner, 1988, The evolution of complexity by means of natural selection

[9] Philip Lubin, April 2016, “A Roadmap to Interstellar Flight”, arXiv:1604.01356, https://arxiv.org/abs/1604.01356

[10] Elizabeth A. Bell, Patrick Boehnke, T. Mark Harrison, Wendy L. Mao, September 2015, “Potentially biogenic carbon preserved in a 4.1 billion-year-old zircon”, doi:10.1073/pnas.1517557112, http://www.pnas.org/content/early/2015/10/14/1517557112.full.pdf

[11] Oleg Abramov, Stephen J. Mojzsis, May 2009, “Microbial habitability of the Hadean Earth during the late heavy bombardment”, Nature, vol. 459, pp. 419–422, doi:10.1038/nature08015, http://www.lpi.usra.edu/science/abramov/papers/abramov_mojzsis_2009.pdf

[12] Roger L. Griffith, Jason T. Wright, Jessica Maldonado, Matthew S. Povich, Steinn Sigurdsson, Brendan Mullan, April 2015, “The Ĝ Infrared Search for Extraterrestrial Civilizations with Large Energy Supplies. III. The Reddest Extended Sources in WISE”, arXiv:1504.03418, doi:10.1088/0067-0049/217/2/25, http://arxiv.org/pdf/1504.03418v2.pdf

[13] Michael Garrett, August 2015, “The application of the Mid-IR radio correlation to the Ĝ sample and the search for advanced extraterrestrial civilisations”, Astronomy & Astrophysics, vol. 581, L5, arXiv:1508.02624, doi:10.1051/0004-6361/201526687, http://arxiv.org/pdf/1508.02624v1.pdf

[14] James Annis, January 1999, “An Astrophysical Explanation for the Great Silence”, arXiv:astro-ph/9901322, http://arxiv.org/pdf/astro-ph/9901322v1.pdf

[15] Tsvi Piran, Raul Jimenez, November 2014, “On the role of GRBs on life extinction in the Universe”, doi:10.1103/PhysRevLett.113.231102, arXiv:1409.2506, http://arxiv.org/pdf/1409.2506v2.pdf

[16] Robin Hanson, September 1998, “The Great Filter — Are We Almost Past It?”, http://mason.gmu.edu/~rhanson/greatfilter.html

[17] Luke Muehlhauser, October 2015, “What do we know about AI timelines?”, GiveWell, http://www.givewell.org/openphil/causes/ai-risk/ai-timelines

[18] GiveWell, August 2015, “Potential risks from advanced artificial intelligence”, http://www.givewell.org/labs/causes/ai-risk

[19] Richard Loosemore, March 2014, “The Maverick Nanny with a Dopamine Drip: Debunking Fallacies in the Theory of AI Motivation”, AAAI Spring Symposium Series 2014, http://www.aaai.org/ocs/index.php/SSS/SSS14/paper/view/7752/7743

[20] Christian Welzel, 2013, Freedom Rising: Human Empowerment and the Quest for Emancipation, https://www.researchgate.net/publication/259241027_Freedom_Rising_Human_Empowerment_and_the_Quest_for_Emancipation [ix]

[21] Ronald Inglehart, Christian Welzel, 2005, Modernization, Cultural Change and Democracy: The Human Development Sequence, https://www.researchgate.net/publication/272159786_Modernization_Cultural_Change_and_Democracy_The_Human_Development_Sequence

[22] H. S. Kaplan, M. Gurven, J. B. Lancaster, 2007, “Brain evolution and the Human Adaptive Complex: An ecological and social theory”, S. W. Gangestad, J. A. Simpson, eds., The Evolution of the Mind: Fundamental Questions and Controversies, pp. 269–279, http://www.unm.edu/~jlancas/BrainHumanAdaptCompl2007.pdf

[23] Max Roser and others, Our World in Data, https://ourworldindata.org

[24] Michael G. Gowanlock, David R. Patton, Sabine M. McConnell, July 2011, “A Model of Habitability Within the Milky Way Galaxy”, arXiv:1107.1286, https://arxiv.org/pdf/1107.1286.pdf

[25] Charles H. Lineweaver, Yeshe Fenner, Brad K. Gibson, January 2004, “The Galactic Habitable Zone and the Age Distribution of Complex Life in the Milky Way”, arXiv:astro-ph/0401024, https://arxiv.org/pdf/astro-ph/0401024

[26] Pratika Dayal, Martin Ward, Charles Cockell, June 2016, “The habitability of the Universe through 13 billion years of cosmic time”, arXiv:1606.09224, https://arxiv.org/pdf/1606.09224.pdf

Logician, SAT automated proving competition winner. https://sites.google.com/site/danielvallstrom/