The Singularity: Truth Or Fiction?

Sam Vervaeck · Train of Thought · Feb 22, 2018

We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding.

These words, written by Vernor Vinge in 1993, are perhaps the best way to introduce what it is people talk about when they have something to say about the singularity. It is an entirely hypothetical event that, according to some, will fall within the lifetime of most of us, while others believe it will take hundreds of years, or never happen at all. These people, who tend to hold some radical notions about being and what is human, believe in a future where minds can be “uploaded” to computers, where one person can have multiple bodies or “hosts”, and where concepts like a “hive mind” are no longer just ideas used to create a good science fiction story. But what is it exactly that unites these people in their beliefs? And how can we ever be sure that these predictions really are true? In this article, we explore the world views of some very remarkable individuals, and come to the conclusion that, while it is impossible to predict the future, they could be more right than you might be comfortable with.


About two years ago, I learned about a mysterious thing some people referred to as the technological singularity. It was sheer coincidence: I happened to stumble upon a random article while looking for ideas for a short science fiction story one Sunday afternoon. What I found almost couldn’t compete with the things in my own imagination. On a poorly designed website, in between some random pictures of brains and cyborgs (yeah, don’t ask me how I got there), were predictions of a future that I deemed too impossible to be true. Yet my curiosity got the better of me, so I continued. It turned out the ideas on this strange and awkward website were not as rare as I first thought, and are actually very widespread on the Internet. The thing I was most surprised to learn is that the very same ideas have been discussed by some major thinkers.

The idea of a technological singularity is not that difficult to understand. The basic premise is that evolution favours intelligent systems over unintelligent ones, because intelligence allows more precise control over the environment and thus increases the chances of survival. This description does not capture the full complexity of the idea, but it gives you a rough sense of what we’re dealing with. First and foremost, believers in a technological singularity start from the observation that the overall complexity of our knowledge, skills and technology has increased over the centuries. This cannot be denied. Secondly, they expect this trend to continue. True, in a world where we might suddenly see autonomous vehicles on the streets, and rockets launched into space by commercial companies, it is perhaps not completely ridiculous to accept this argument as a given. It cannot be proven, but unless a cataclysmic event happens, such as a global financial meltdown, a meteor strike, or a nuclear war, chances are that science and technology will continue to grow as they always have. However, it is the next step in the singularists’ line of reasoning where things get interesting.

People who believe in a technological singularity are convinced that our knowledge will increase exponentially within the next couple of decades, rather than linearly. If you are a bit confused about what I mean, allow me to rephrase. These people not only believe that science and technology have “grown” and will continue to “grow”, but also that they will “grow to grow”. Their reasoning can be summarised by two arguments: the history of scientific and technological development fits an exponential curve, and intelligence will ultimately breed intelligence. It will become what systems theorists like to call a feedback loop, a construct in which the output of one step is fed back in as the input of the next.

Intelligence will ultimately breed intelligence

To give you an illustration: imagine you get so clever that you are able to make yourself even more clever by synthesising some kind of drug. The drug increases your intelligence, which makes you realise you can enhance the drug to make yourself even more intelligent. You quickly create this new “smart pill” and, as expected, it works! Now that you are even more intelligent, you realise that this pill is just a small step compared to what is still possible, and you set out to create ever more and better pills. As your intelligence grows, one insight follows the next, enabling you to come up with radically new ways of doing science and engineering, so advanced that people living today wouldn’t even understand their purpose. What happened? An intelligence explosion. The accumulation of knowledge goes faster and faster, because “friction” is progressively removed from your ability to process information. The result is that your intelligence increases exponentially, and according to a lot of people, this is what will happen in the near future. This leads me to the following question: if we are destined to evolve towards ever-increasing intelligence, does that also mean we have to embrace it as our ultimate goal?

Systems theory: are we destined to move towards a technological singularity? If so, do we have to? (Source)
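If you like to see ideas in code, here is a minimal sketch of the difference between linear progress and the feedback loop described above. It is purely illustrative: the starting capability, the fixed gain, and the 10% feedback rate are arbitrary assumptions, not a claim about how fast intelligence actually grows.

```python
# Toy comparison of linear growth versus a feedback loop.
# Linear: each step adds a fixed amount of capability.
# Feedback: each step's gain is proportional to the capability
# already accumulated, so the output feeds back into the input.

def linear_growth(steps, gain=1.0):
    capability = 1.0
    for _ in range(steps):
        capability += gain  # progress independent of current capability
    return capability

def feedback_growth(steps, rate=0.1):
    capability = 1.0
    for _ in range(steps):
        capability += rate * capability  # output fed back as input
    return capability

for steps in (10, 50, 100):
    print(f"{steps:>3} steps: linear={linear_growth(steps):8.1f}  "
          f"feedback={feedback_growth(steps):12.1f}")
```

Run it and the linear column creeps upward while the feedback column explodes: that runaway curve is the whole intuition behind the intelligence explosion.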

The Rise of The Machine

When I started studying computer science a couple of years ago, artificial intelligence (AI) was of little interest to anyone not directly researching this most interesting field. Fast-forward a couple of years, and suddenly the word “AI” pops up almost weekly in media outlets. All of a sudden, artificial intelligence, which has been an academic discipline for decades, is more and more becoming a central part of society. But what exactly is AI, and why is a field that was initially created out of purely intellectual interest so suddenly becoming a real threat to our jobs? Some people expect that AI will become more and more important, until society is drenched in it.

It is safe to say that AI began in the 1950s, when a group of academics decided to pursue the goal of building a computer program able to perform tasks at the level of human intelligence. Marvin Minsky, Allen Newell and Herbert A. Simon, together with John McCarthy, are considered to be the “founding fathers” of artificial intelligence. It is not too difficult to see why. McCarthy was the one to introduce the term “artificial intelligence” at a conference in the summer of 1956, thus giving the field a name. From then on, the development of AI continued in alternating periods of great progress and periods where not much progress seemed to be made. These periods are sometimes referred to as “summers” and “winters”, and right now, it is very clear we are living in a “summer of AI”.

Artificial intelligence, as an academic discipline, is an incredibly broad field. It borrows ideas from psychology, cognitive science, neuroscience, mathematics, computer science, biology, and possibly other fields, with the unifying goal of making computers more intelligent. In that sense (and only in that sense), it could be said that its goal is to make computers more human-like. Actual sub-fields include natural language processing (NLP), computer vision and machine learning, the latter of which has gained incredible traction in the past few years. Machine learning is becoming the dominant method for creating AI systems, and it is almost impossible to imagine today’s world without it. Models based on machine learning can be found in autonomous vehicles, in image classification algorithms such as face recognition software, and in plenty of other applications.

One of the typical ways to design a machine learning algorithm is with an artificial neural network. In its simplest form, several rows, or layers, of nodes are arranged in a grid-like structure. The nodes in one layer are connected to the nodes in the next, in such a way that each node in one layer is connected with every node in the following one. A mathematical function determines, using a parameter unique to each connection or node (a weight), how strongly a signal travels from one node to the next. As you might suspect, you can create a neural network in many different forms and variants, which is why neural networks are a very broad category of models. One particular variant, which in and of itself sprouts many subtypes, is called deep learning. Chances are you have already heard of this buzzword, which has spread like wildfire through the industry. It essentially refers to neural networks with many layers, suited to specific tasks, and has proven to be incredibly powerful.

A typical artificial neural network (ANN) — Wikipedia
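To make the layered structure above concrete, here is a minimal sketch of a forward pass through such a network using NumPy. The layer sizes (4 inputs, 5 hidden nodes, 2 outputs), the random weights, and the sigmoid activation are all arbitrary choices for illustration, not a recipe from any particular library.

```python
import numpy as np

def sigmoid(x):
    # Squashes a signal into (0, 1); one common choice of activation.
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Three "rows" (layers) of nodes: 4 inputs, 5 hidden nodes, 2 outputs.
# Every node in one layer connects to every node in the next, and each
# connection carries its own parameter: a weight.
W1 = rng.normal(size=(4, 5))   # weights between layer 1 and layer 2
b1 = np.zeros(5)               # per-node parameters (biases) of layer 2
W2 = rng.normal(size=(5, 2))   # weights between layer 2 and layer 3
b2 = np.zeros(2)

def forward(x):
    hidden = sigmoid(x @ W1 + b1)    # signal travelling to the hidden row
    output = sigmoid(hidden @ W2 + b2)
    return output

print(forward(np.array([0.5, -1.0, 0.25, 0.0])))
```

Training a network means nothing more than adjusting those weights until the outputs become useful; stack many such layers and you arrive at deep learning.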

Another type of “algorithm” that might give you an idea of what goes through an AI researcher’s mind is the genetic algorithm. Drawing on evolutionary theory, this method seeks to optimise a program’s performance by making small random modifications to it, in much the same way that small random modifications to our DNA are the root cause of evolution (hence the name). It makes a program complete a set of tests, measures its score, and then repeats this process with a slightly different version of the program. Versions of the program that do not perform better are discarded; versions that do are used in the next iteration. If this process is repeated enough times, chances are you end up with a program that is quite good at its task.

An example of a kind of DNA mutation (Source)
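Here is a toy version of that mutate-test-select loop. It “evolves” a string of characters towards a target phrase rather than a real program, and the target, the mutation rate, and the acceptance rule (keeping variants that score at least as well, so neutral changes can drift) are all assumptions made for the sake of a short, runnable example.

```python
import random

TARGET = "artificial intelligence"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def score(candidate):
    # The "test" the candidate must pass: characters matching the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    # Small random modifications, analogous to DNA mutations.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

random.seed(42)
best = "".join(random.choice(ALPHABET) for _ in TARGET)

for generation in range(10_000):
    variant = mutate(best)
    if score(variant) >= score(best):  # worse variants are discarded
        best = variant
    if best == TARGET:
        print(f"matched the target after {generation} generations")
        break

print("best candidate:", repr(best))
```

No part of the loop “understands” the target; selection pressure alone pushes the random noise towards it, which is exactly the trick evolution plays.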

Coming To Your Neighbourhood?

If I had told you a few years ago about the kind of revolution deep learning would set in motion, chances are you wouldn’t have believed me. Yet it is happening, and it is just the start. Granted, the basic technique is actually quite old, stemming from research done in the 1980s, but now that it has found traction, it will evolve at a much faster pace. On top of that, people are genuinely concerned that AI will become so intelligent in the foreseeable future that it could mean the end of our species. Believe it or not, but if a world-renowned scientist and a group of experts in the field believe it is possible, I think it is at least worth mentioning.

But what does all of this imply for us? Some will say “yay, new gadgets and technology!”, but others aren’t so excited. I’m one of them. I think that this technology, once realised, has the potential to severely damage our moral and ethical values if we don’t watch out. Beyond that, I believe it could create a new kind of chaos, a genuine and profound confusion about what it means to be human and to be alive. In much the same way that humanity had to change its view of the world after discovering that the Earth is not the centre of the universe, I think these new technologies risk making us lose our own identity. And history teaches us that such a loss is a very bad thing.

I believe a technological singularity could create a new kind of chaos, a genuine and profound confusion about what it means to be human and to be alive

The following thought experiment is rather absurd, but what if we imagine for a moment a person who has lost all touch with reality, and simply lives a digital life, immersed in a world fabricated by a machine learning algorithm powered by some kind of pseudo-random number generator? Does his life still make sense? Does his existence still have a point, or will we stop caring about such questions and deem life meaningless? Not entirely coincidentally, these questions boil down to debates philosophers have been having for decades, if not centuries: does technology really make the world a better place? What is reality? And what makes me really me? I go more deeply into these subjects in my previous post.

If it is true that, thanks to the technological singularity, everything becomes possible, where do we draw the line? Where do we want to go? What do we consider human and what not? These are not easy questions, yet I think it is very important to ask them and to try to answer them. If not now, when? Who is going to get access to all of this technology? Who is going to control it? Do we place all of the responsibility with the industry executives who will come to distribute it? Do we ask our governments to regulate it? To me, neither of these options sounds like a good plan, but I don’t have an alternative solution either.

Of course, none of this will happen if there aren’t people actively working towards it. But the truth is, people are. In fact, I believe a global capitalist system driven by competition will always gear towards it, because companies need to become ever more intelligent as they try to get a grip on the global market. This article in The New York Times illustrates the point I’m trying to make. And if the majority of good people don’t make it happen, some bad actors will. I’m not sure that’s what we want. So to me, the question is not whether we should avoid reaching a technological singularity, but how our thoughts, decisions and actions will shape it. What will the future look like? Nobody can tell. But I believe that by actively talking and writing about it, we can prevent disasters from happening.

Sam Vervaeck is a freelance writer living in Belgium, trying to find his way in life while exploring various philosophical questions. He loves programming, playing piano, and martial arts. He is in the process of writing a book about artificial intelligence and the future of society, which will be available on his website.
