Between Oppenheimer and Asimov: The Singularity Dilemma

InAllMedia · Published in In All Media · 7 min read · Jun 11, 2024

Recently, Microsoft AI CEO Mustafa Suleyman proposed in a public talk that we consider artificial intelligence “a new species.” This idea has been circulating in technology circles for several years, appearing, for example, in Adrián Sicilia’s book “Digital Pilgrims” (2022), and is now taking a place in the public conversation.

The issue underlying this conversation is the hypothesis of the technological singularity: a hypothetical future point at which AI reaches and surpasses human intelligence, triggering exponential and irreversible change in society and technology. It has been a subject of speculation for decades, and in a recent publication, technologist Ray Kurzweil has reignited the debate about when it will happen. Something that was previously discussed only by specialized groups is now a concern for more and more people. Should we be worried about this technological advancement?

Despite recent advances, Kurzweil has not significantly changed his predictions. Standing by his earlier forecasts, he maintains that AI will reach human-level intelligence in 2029 and that humans will merge with machines in 2045, an event that will fundamentally change the way we interact with technology.

In recent decades, AI’s progress has been meteoric. Global spending on AI-focused systems across all industries was estimated at $154 billion in 2023. This rapid adoption is due in part to advances in machine learning algorithms and increased available computing power. AI is increasingly used in fields such as healthcare, banking, retail, and manufacturing.

A survey conducted in Britain suggests that few people regularly use AI products like ChatGPT, with only 2% of Britons using them daily; the main users are young people aged 18 to 24. Meanwhile, a study by the Reuters Institute and the University of Oxford points to a gap between expectation and public interest in generative AI. Despite the enthusiasm and investment in these technologies, they are not yet part of routine internet use for most people. But everything indicates that this will change dramatically in the coming years: AI is seeping in everywhere.

What is AI, and where is it heading?

The question of what exactly AI is and where it is heading is one of the most fascinating and controversial issues in technology. Some, like Suleyman, argue that we should start thinking of AI as a new digital species with its own unique set of characteristics and challenges. From this perspective, AI would have the potential to evolve independently of humanity’s development.

It is striking that someone so influential in the field publicly refers to the possibility of AI evolving into a sort of digital entity. This scenario, long explored in speculative fiction, is now presented as a real possibility.

Adrián Sicilia, CEO and technologist, has highlighted the inherent unpredictability of AI’s development. Sicilia suggests that AI could follow a non-human evolutionary trajectory, surprising us with abilities and logics that transcend our understanding. If so, we would face an unprecedented existential challenge. One possibility implied in the emergence of an intelligent digital species trained on the sum of human knowledge is that it could evolve an intelligence different from ours, one we may not be able to understand, or even detect or recognize. Moreover, it is difficult to know what interests or desires a species of this kind might have.

If so, it would be the first time that humans face the need to coexist with another intelligence and share the top of the evolutionary pyramid. Faced with a scenario of this nature, we might want to pause for a moment and think about the consequences of these innovations. But this is a revolution that does not seem willing to stop for anything or anyone.

Between Oppenheimer and Asimov

Every so often, humanity faces technological discoveries with the potential to radically transform society. A notable example in the 20th century was the development of the atomic bomb; today, artificial intelligence seems to occupy that same role. Christopher Nolan’s movie Oppenheimer, which addresses the life of physicist J. Robert Oppenheimer and his role in creating the atomic bomb, provides a fascinating mirror for the contemporary dilemmas surrounding AI. Nolan’s narrative highlights themes such as scientific ethics, power, and responsibility, emphasizing the moral implications of releasing such powerful technology into the world.

The comparison is neither casual nor innocent; Christopher Nolan himself has pointed out significant parallels between Oppenheimer and current leaders in the AI field. According to Nolan, some of the most prominent AI researchers consider themselves to be in their “Oppenheimer moment,” facing the difficult task of balancing technological advancement against possible negative consequences. These experts see in Oppenheimer’s story a warning about the responsibility entailed in creating technologies capable of altering humanity’s course, and they look for guidance in how Oppenheimer dealt with his creation, often with anguish and regret.

The ethical dilemmas associated with AI are complex and do not always have a clear path to follow. While the atomic bomb represented a direct and tangible threat of destruction, AI poses a series of more diffuse but equally concerning challenges. These include the loss of privacy, inherent bias in algorithms, the autonomy of decisions made by machines, and the potential for massive unemployment due to automation. Some believe that, like atomic weapons, AI has the potential to end humanity.

When Meta and others announced that they would release their AI models to the public, many compared the move to releasing the blueprints for an atomic weapon. The need to regulate and guide AI development to avoid unwanted consequences reflects the same moral and ethical challenges Oppenheimer faced. The history of the atomic bomb’s development, with its lessons on unforeseen consequences and the burden of responsibility, offers a powerful warning for our times, emphasizing the importance of approaching technological advances with careful consideration of their long-term impacts.

The issue of artificial intelligence becomes even more complex when we consider another influential work: Isaac Asimov’s I, Robot. In this collection of stories, Asimov introduces the Three Laws of Robotics, designed to protect humans from the potential threats posed by advanced robots. The laws prohibit robots from harming humans, require them to obey human orders unless that conflicts with the first law, and require them to protect their own existence so long as that does not conflict with the first two; together they form an ethical framework for the safe integration of robotics into society. But when robotics surpasses its instrumental role, becoming ever more intelligent and constituting an environment and a species in itself, do these rules still have any effect? Do they provide enough containment? Everything indicates that they do not, and that they will be useful neither for the development of this digital species nor for coexistence between it and humanity.
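To make that priority ordering concrete, here is a minimal, purely illustrative sketch in Python (the Action type and its yes/no fields are invented for this example; Asimov’s laws are narrative devices, not a real specification):

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool       # would this action injure a human?
    inaction_harms: bool    # would *not* acting let a human come to harm?
    ordered_by_human: bool  # was this action commanded by a human?
    endangers_robot: bool   # does this action risk the robot itself?

def permitted(action: Action) -> bool:
    # First Law: a robot may not injure a human being or, through
    # inaction, allow a human being to come to harm.
    if action.harms_human:
        return False
    if action.inaction_harms:
        return True  # acting is mandatory, overriding the laws below
    # Second Law: obey human orders, except where that conflicts
    # with the First Law (already checked above).
    if action.ordered_by_human:
        return True
    # Third Law: protect its own existence, as long as that does
    # not conflict with the First or Second Law.
    return not action.endangers_robot

# Example: a human orders an action that would injure someone.
# The First Law outranks the Second, so it is refused.
print(permitted(Action("push bystander", harms_human=True,
                       inaction_harms=False, ordered_by_human=True,
                       endangers_robot=False)))  # -> False
```

Even in this toy form the problem is visible: predicates like “harms a human” or “allows harm through inaction” are not computable facts but open-ended judgments, and Asimov’s plots turn precisely on the cases where they collide.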

In I, Robot, Asimov’s stories explore the conflicts and dilemmas that arise when robots try to comply with these laws in complicated circumstances and the laws come into contradiction. In the real world, modern AI, which is not inherently governed by universal ethical laws, poses an even bigger problem: how do we ensure that advanced machines make decisions that benefit humanity, or at least do not harm it? Unlike Asimov’s robots, AI is not programmed with intrinsic morality; it learns patterns and can make decisions in unpredictable ways that are not necessarily aligned with human values.

Today there are no agreed limits on, or consensus about, AI regulation. The European Union has its own framework, and some countries are advancing regulatory projects, but regulation lags far behind innovation and gives the impression of being weak enough to be suspended in complex scenarios, such as pressure to accelerate the technology in the face of an arms race.

Certainly, this is a complex scenario in which works like Oppenheimer or I, Robot can serve as cautionary tales, but they by no means exhaust the ethical issues we face. We are not only facing a technological change of great magnitude; we are also confronting a new way of conceiving our identity. AI will not be the only thing evolving in this landscape.

Merging

It is curious that Kurzweil distinguishes the date of the Singularity from the year in which humans will begin to merge with machines, because when discussing the technological Singularity, people tend to think only of AI’s future and its evolution into a digital species. When Kurzweil talks about the fusion between humanity and the digital, he is acknowledging the way people are already being changed by contact with the digital. What is emerging is a technological evolution that creates a new environment, one that alters reality itself and changes us with it.

Adrián Sicilia calls this encounter of mutual influence the Digital Singularity. Possibly, along the lines of Kurzweil’s proposal, humans are heading towards a kind of hybridization with the digital, one that will affect the environment but also has the capacity to affect us existentially. It is not difficult to imagine: this year we have heard about brain implants that let users control screens, along with other innovations in the health field. It is already hard to imagine research in medicine, or any scientific field, without the help of AI or digital models. Today, humanity’s technological advancement is inherently linked to digital development. This is a path that, besides being beneficial, is impossible to stop. The question is whether we are truly aware of the risks involved and the path we are on. The atomic bomb cost us not only Hiroshima and Nagasaki but also the permanent threat of a power capable of destroying everything. What is the real cost of AI?
