Humans For AI
Apr 14, 2023


Balancing Progress with Responsibility: Insights from the Extropians and Singularity

By Sean Vanderaa

The desire to expand human knowledge made us the most dominant and developed species on the planet. But as much as the drive to increase our intelligence lies behind every promising scientific breakthrough, it has also produced countless devastating consequences born of a lack of foresight and understanding. Scientific progress is undeniably beneficial, but without careful consideration of how we achieve and pursue it, unforeseen and negative repercussions easily sneak into the mix. What might otherwise be a positive, society-altering advancement can instead end in ramifications severe enough to nullify any potential benefit.

This lack of precaution surrounding profound ideas of advancement is nowhere better encapsulated than in the extropian intellectual movement. Entering the public realm with their first magazine in 1988 and led by the philosopher Max More, who changed his name to reflect the credo of the movement, the extropians envisioned an everlasting manifestation of intelligent life throughout the universe [1]. To that end, they promoted the study of the technological singularity, along with the restructuring of many societal institutions required to achieve it [2].

Defining the singularity is no easy task. Because it is unpredictable in consequence and theoretical in concept, even the scientific community has reached little consensus on what it will entail. For the purposes of this article, “the singularity” will refer to the rapid, self-generated progression of an artificially super-intelligent technology: one so far beyond human intelligence that we cannot begin to fathom what its emergence will bring. A related concept, also discussed in this article, is a human-like intelligent system, or artificial general intelligence (AGI). Such a system would be roughly comparable to human cognition and is a stepping stone on the path to the singularity, but it is nowhere near as powerful, extreme, or disruptive.

As disruptive as the singularity would be, one of the extropians’ beliefs was that achieving it would eventually allow humans to transcend their physical forms (for instance, through mind-uploading, in which a human’s consciousness is transferred into a self-perpetuating machine of some sort), eliminating suffering and death in its wake. Along the way, through the AGI predecessor to the singularity, we would be able to use this newfound tool to solve many of the problems currently pressing human civilization.

But as positively reformative as these theoretical hopes were, the extropians adopted a similarly extreme method for achieving them. Dr. Finn Brunton, a professor of Science and Technology Studies at the University of California, Davis, explored the extropians and their beliefs in depth in his book Digital Cash. He highlights that the extropians sought to accomplish their otherworldly goals by religiously adopting a “frictionless […] unrestrained Austrian-style capitalism” [2]: in essence, a system with zero regulation, leaving no room for economic, safety, or legal considerations. This would allow corporations to operate by any means necessary to speed technological breakthroughs, thus “overclocking human civilization.” Money would be pumped into scientific study with no consideration of the human costs, because if the singularity could be reached, we would no longer have to worry about the limitations and needs of our physical bodies.

During an interview, Dr. Brunton said that although the results of this system would be socially devastating, the extropians believed they had “a unique opportunity as human beings to ensure that some form of intelligent life would continue to spread over millions and billions of years throughout time and space, even if achieving that goal decisively entailed the end of humanity as we know it.” Although intelligent life in some form would continue to permeate the universe, the human race would be so radically altered as to become unrecognizable. Whether this happened suddenly, through sci-fi-movie-style destruction, or through the gradual degradation of current social institutions, the singularity would be so transformative and disruptive that “civilization” would take on an entirely different meaning.

Despite being on the fringes of the scientific spectrum, with only around 1,000 members at their peak, the extropians still produced highly influential and long-lasting reverberations that became embedded in the fabric of the technological community. “When you look at what their concerns were, and the people who adopted their ideas, you realize that they have shaped a lot of the ethos of the modern techno-utopian world,” said Dr. Brunton.

So, what can the extropians tell us about the current state of technology and scientific outlook? As Dr. Brunton highlights, “the extropians identified themselves with a goal far beyond the ‘common sense’ categories of human survival and prosperity,” and yet their emphasis on producing highly intelligent life and advancing the human race at an extreme rate runs parallel to many current views on technological advancement, especially where the singularity is concerned.

Singularity discourse largely began with a talk by the professor and author Vernor Vinge at a NASA symposium in 1993. At the time, the concepts he addressed were those of science fiction: a technological advancement with consequences so profound that humans cannot theorize or imagine what life after it would look like. But Vinge was adamant, claiming not only that this singularity was inevitable, but that it was right around the corner. “Once,” he said, science-fiction writers “could put such fantasies millions of years in the future. Now they saw that their most diligent extrapolations resulted in the unknowable . . . soon. Once, galactic empires might have seemed a Posthuman domain. Now, sadly, even interplanetary ones are.” [3]

Vinge went as far as to put a timeframe on the singularity. “Just so I’m not guilty of a relative-time ambiguity,” he said, “let me be more specific: I’ll be surprised if this event occurs before 2005 or after 2030.” [3] Although Vinge’s predictions have proven overly optimistic by modern standards, most projections still place the singularity and human-like artificial intelligence in the fairly near future.

Ray Kurzweil, a prominent futurist and computer scientist, predicts that human-like intelligence will be created by 2029, with the singularity occurring by 2045. [4] Hans Moravec, a computer scientist and robotics researcher at Carnegie Mellon University, predicts human-level AI by 2040 and the singularity by 2050. [5] In a poll of top AI and machine learning researchers conducted in 2012 and 2013 by Vincent Müller and Nick Bostrom, 59% of respondents believed the singularity would occur within the next 50 years (that is, by roughly 2062). [6]

The question of “when” the singularity might occur is heavily debated within the scientific community, consuming much of the airtime surrounding singularity discourse. Unfortunately, this leaves little space for assessing “why” it should. Although there is no going back (if the singularity is indeed possible, it will occur given sufficient time), a much deeper philosophical question underlies our innate desire to constantly pursue grander goals. And though it is impossible to eliminate our tendency to advance technology, by stepping back and recognizing that human expansion won’t end in a “We’ve won!” title screen, we can redirect our resources toward ameliorating our current institutions, creating healthier and less tumultuous lives for the vast majority of people on Earth.

Too often, the promise of future scientific advancement is used to avoid remedying current issues. As Dr. Brunton states, “Particular groups use the promise of, and the work on, new technologies as a way to sidestep addressing existing sets of problems; there is more money to be had in the proposal of the indirect solution to existing social issues through some vague technological breakthrough that could potentially happen.” Instead of directly improving our current establishments, we fund tangential solutions to our systems, oftentimes disregarding ones that already exist.

The singularity, like many other tools that have promised to fix the issues confronting humanity, is a mirage of false hope that places the burden of solving human suffering on generations to come. Moreover, given how little evidence there is that technological advancement has been used to solve societal inequities, why should we expect the singularity to finally make doing so the priority? Just as the extropians viewed human suffering as a necessary expense on the way to eternal life, these promises ride on the assumption that the singularity can occur at all. Yet in contrast to the 59% of researchers who believe the singularity will occur, the other 41% believe it never will. [6] Banking on that slim majority to solve societal issues 50 years down the line is a dreary, dystopian, and potentially devastating outlook.

Even more pressing, there is little debate about what can and will occur during the period in which this technology is developed. The singularity may one day solve our societal issues at an unprecedented rate, but what do we do for neglected people in the time it takes to get there? And if mind-uploading technology is the end goal, will it be readily available to all populations and peoples throughout the world, or only to a select few already living at the top? While the latter question is unanswerable at present, and is only one facet of how technology becomes exclusionary, both questions attempt to start a discourse around these issues before they become truly problematic. In the meantime, it is critically important to attack the problems we already know exist, to prevent the continuation of widespread suffering throughout the world. Although a highly advanced technological tool may well help solve our problems in the future, that does not mean those problems should be kicked down the road until we get there.

Furthermore, we should actively find ways to promote widespread access to, and understanding of, emerging technologies across the globe. One’s place of birth should by no means be a barrier to entry into the advancing technological world, because as we continue to develop these tools, those left behind will only fall further behind at an increasing rate. The singularity, though interesting in concept, “is more useful as a tool for thinking about different models of social disruption by technologies, than it is as a concrete program that we can plan for,” said Dr. Brunton. Rather than a means of theorizing about how we might attain a technological afterlife, it is better used as a case study for how such technologies will holistically impact society.

The extropians are a fascinating group with wide-ranging societal impacts. But however flashy and enticing their ideals, the dire consequences their deployment would entail should be a warning sign for how we approach technological advancement. In the end, human advancement is only as beneficial as it is for the least represented and least fortunate people on Earth. As we promote technological growth, we must not neglect our duty as humans to push for the success, wellness, and happiness of all. Regarding the extropians’ plans to universally and perpetually expand knowledge, Dr. Brunton put it concisely: “Ultimately, these questions come down to what do you believe? About the future, about human nature, whether what is really significant about us is our ability to care for and protect each other, or our ability to be intelligent in ways that could seed future intelligence in the universe.”

Resources:

[1] https://hpluspedia.org/wiki/Extropy_Magazines

[2] Brunton, F. (2020). Digital cash: The unknown history of the anarchists, utopians, and technologists who built cryptocurrency. Princeton University Press.

[3] https://frc.ri.cmu.edu/~hpm/book98/com.ch1/vinge.singularity.html

[4] https://futurism.com/kurzweil-claims-that-the-singularity-will-happen-by-2045

[5] https://philpapers.org/rec/MORRMM

[6] https://philpapers.org/archive/MLLFPI.pdf