SkyNet Journeys [1]: When will AI take over the world already?

Elisha Rosensweig
14 min read · Jul 15, 2024


Like many data scientists these days, I’ve been spending a lot of time thinking about AI, where it’s going and how to best address it. In the following posts, which will come out every few weeks, I’ll be trying to put my thoughts on the page, and share them with you.

Much of my thinking on the topic revolves around the question of intelligence: is AI intelligent? Can it “think”? Do the answers to these questions matter in practice? My intuition starting out was that they do, very much, and as the posts progress I plan to demonstrate how they shape what we research and build.

My approach is going to be to start from the basics and work my way up. I hope you enjoy it — and please feel free to comment on anything and everything you see here. AI is an evolving field and one that has been around for 70+ years, so I’m sure there is a ton that I’m missing.

That’s it — enjoy!

In 1971, Klaus Schwab, a Swiss-German businessman and engineer, founded what would become the World Economic Forum (WEF), an organization that brings together many of the world’s largest companies and investors. Once a year, the Forum’s members convene in Davos, Switzerland, to discuss global trends in business and technology. In 2017 Schwab, who headed the organization until very recently, interviewed Sergey Brin, one of the two founders of Google and at the time still president of that famous company. Their conversation lasted about half an hour, and revolved around questions of technology, society, and the future of humanity in general.

Schwab & Brin at the WEF, 2017

Now, it is important to understand that the WEF arouses discomfort in many people around the world, for all sorts of reasons. One of them is the feeling that it gathers together a lot of people with a lot of power, power that can even challenge many sovereign countries. The concern of some of the Forum’s critics is that behind closed doors, all manner of powerful economic planners are plotting how to run the world without having been elected by common citizens like you and me. Consequently, every so often reports surface about the Forum, probably a mix of credible facts and conspiracy theories, and for those not deeply immersed in the details it is difficult to separate the truly problematic elements from the inflated concerns of people who are simply not part of the in-crowd. So let us take the Schwab-Brin interview as an example. If you search the internet you will find it associated with titles such as “Klaus Schwab seeks to abolish democracy and let computers rule the world!”

Was this what was discussed there? Here is the passage from the disputed conversation that gave rise to the tempest:

Schwab: One fear which I have heard is that, technology now, digital technologies now have mainly an analytical power, and we go into predictive power, we have seen the first examples and your company is very much involved in it. But then the next step could be to go into prescriptive mode, which means you don’t even have to have elections anymore but you can already predict what— and afterwards you can say “why do we need elections?” because we know what the result will be. Can you imagine such a world?

Brin: Well, you might then further ask “why do we need to have elected leaders at all” because then you might as well have all the decisions made. I think that you’re venturing into profound questions. I think you can also ask — what will we actually want? We have a set of values and desires that are probably pretty different compared to before the industrial revolution and different still compared to before the agrarian revolution, and we might continue to evolve. Many of us today participate in the global economy… some of us choose to be Buddhist monks and seek enlightenment through spirituality. So people have different ways of evolving and finding meaning, and it could be that the way we look at it a hundred years from now will be so different than how we look at it today that it’s almost unrecognizable… we won’t even be able to translate.

Well — what do you think? What is actually being said here? Personally, I think we need to distinguish between the text and the subtext. On the textual level, no one is saying that democratic elections should be abolished. The words were spoken more in the manner of a flowing conversation, and Schwab explicitly presents it as a concern of some people and not as a view he endorses. Similarly, Brin does not really answer Schwab’s question. Read again and you will see — Brin does not give a definitive answer, and Schwab does not press him further.

But the subtext seems to trend in the opposite direction. For Sergey Brin, in his convoluted answer, suggests that human existence might evolve in such a way that perhaps people will no longer be interested in economics or politics at all. Perhaps they will be more interested in spirituality, in spiritual release. And the subtext of this, at least as I understand it, is that yes, in the future perhaps all the great things in the world, the “technicalities” of day-to-day life, will be run by computers, and we humans will be free to pursue our private little concerns, free to meditate all day long, because all the big things (politics, society, economics) will be taken care of by the super-smart computers we have built. And since Schwab does not challenge him on this point, does not ask him to sharpen his take on the subject, I take it that he feels comfortable with such a vision, even if he does not explicitly state it.

Now, you may agree with my interpretation, or you may interpret otherwise (it’s always fun to mess around with conspiracy theories, isn’t it?). But I want to focus on the question itself: will there really come a point in time when there is no need for democratic elections, because the computers we have built are so smart that there is nothing a human does that they cannot do as well, or even better?

Such an idea is, of course, not new, and science fiction literature is replete with stories that play with it. Some have seen in such a world a dystopia, and some a utopia. Among those who have suggested, through their writings, that such a future may be desirable, or at least unavoidable, was the famous author Isaac Asimov. In the final chapter of his 1950 collection “I, Robot”, entitled “The Evitable Conflict”, a world is described in which super-intelligent computer systems have gone global and humanity has handed them control of the entire world’s production in order to optimize the welfare of the human race. The story tells of two high-ranking characters, Stephen and Susan, who detect anomalies in the instructions that the computer systems are issuing, things that seem at first glance to be errors. Such sophisticated machines should never make mistakes, and so the story’s protagonists set out to investigate the matter, to get to the bottom of things.

In the end, they conclude that the machines were not in error — what appeared to be an error was in fact a premeditated deviation, designed to generate internal bureaucratic processes that would result in the firing or transfer of certain people at key points in the production process. The machines wanted to neutralize the power of these people, for they were part of a movement called the “Society for Humanity” which called for the curtailment of the power of machines in the world. In Asimov’s imaginary world, the machines are programmed to help humanity, and they had deduced that the good of humanity is that they should run the whole world — and so they orchestrated a few small domino-effects to neutralize the danger from those who oppose machine domination.

Having grasped that this is the way things are, the story ends with the following dialogue:

“Stephen, how do we know what the ultimate good of Humanity will entail? We haven’t at our disposal the infinite factors that the Machine has at its! Perhaps, to give you a not unfamiliar example, our entire technical civilization has created more unhappiness and misery than it has removed. Perhaps an agrarian or pastoral civilization, with less culture and less people would be better. If so, the Machines must move in that direction, preferably without telling us, since in our ignorant prejudices we only know that what we are used to, is good — and we would then fight change. Or perhaps a complete urbanization, or a completely caste-ridden society, or complete anarchy, is the answer. We don’t know. Only the Machines know, and they are going there and taking us with them.”

“But you are telling me, Susan, that the ‘Society for Humanity’ is right; and that Mankind has lost its own say in its future.”

“It never had any, really. It was always at the mercy of economic and sociological forces it did not understand — at the whims of climate, and the fortunes of war. Now the Machines understand them; and no one can stop them, since the Machines will deal with them as they are dealing with the Society, — having, as they do, the greatest of weapons at their disposal, the absolute control of our economy.”

“How horrible!”

“Perhaps how wonderful! Think, that for all time, all conflicts are finally evitable. Only the Machines, from now on, are inevitable!” And the fire behind the quartz went out and only a curl of smoke was left to indicate its place.

Now, I am curious — how do you feel about this vision? Does the idea of a world without (the need for) political systems, a world in which everything is automatically run from above with fairness and efficiency, appeal to you, given that it comes with the concomitant loss of our freedom to chart our own course? Or perhaps such a story keeps you awake at night? Let’s continue as you mull over this in the back of your mind.

In 2017, when the aforementioned conversation took place between Schwab and Brin, advanced computing systems were already ubiquitous, and many boasted that they were a form of “Artificial Intelligence”. (We will discuss during the series what “artificial intelligence” means in the current sense, but for now, you can simply imagine these as highly sophisticated systems with capabilities far beyond those of a regular computer program.) Cellular communication, automotive safety systems, even the Netflix algorithm for proposing what to watch next — these and others were already swirling around us then. But the man on the street could not imagine that Asimov’s future could be just around the corner, since all these systems were buried deep within highly technological systems. Like an engine that we all know is there but do not bother to open the hood to see it, the AI was there but only the professionals got to touch it directly.

And then, at the end of 2022, ChatGPT, built on GPT-3.5, emerged into our world. (Note: I was sure that everyone had already tried this tool by now, but just a few days ago I discovered that someone close to me never had. Of course I immediately sat her down in front of it and made sure she played with it a bit; I mean, how could one not? So if anyone reading this has never played with one of these new tools, like ChatGPT or Midjourney, pause, and come back after you’ve played with one for a bit. Even a few minutes using these tools, which are available today to anyone with a simple keystroke, will help you understand to a much greater degree what all the hype is about, and give you a better intuition for what these posts are about.)

In any event, this charming chatbot was a hit that stunned the world. Even people within the software industry were amazed by the impressive language capabilities that this system displayed. It was not that they did not know progress was being made in the field, for a very early version called GPT-1 had been released in 2018, for example, but most people thought it would take years more to reach the level that ChatGPT had attained. Overnight, all the big companies and high-tech entrepreneurs realized that something new had arrived on the scene, for it became clear that things we had not thought possible to realize today were right at hand. The rest, as is well known, is history: people and companies have since cloned and extended ChatGPT’s success in other areas as well, such as the generation of images, videos, and more, producing revolutionary products, some of which will change life around the world in the coming years.

What made ChatGPT so impressive, so compelling, was a combination of two elements. The first was, of course, its flexibility and versatility. Its ability both to compose a limerick for a nephew’s birthday and to answer questions on the history of Mesopotamia made it the go-to tool for anything you might want to find online. But the second ingredient was even more important: ChatGPT was the first technology available to the whole world that you could talk to like a human being, and that understood what you were trying to say in a normal, conversational way. In science fiction movies, one sees people conversing with robots and computers in ordinary language, and the robot or computer understands and acts accordingly. For the first time, we could all imagine that something like this is not just science fiction, but a goal to be achieved in our own lifetimes, even within this decade.

The combination of a vast store of knowledge — say, all the knowledge that is stored in the Internet — with a natural human interface, suddenly brought to the minds of many people the enormous potential that this technology holds, a potential that can be used for good and for evil. Under the “evil” category was not only the danger that “bad people” and “bad countries”, so to speak, would use these tools to harm us, say by flooding social media with fake news, but something beyond that — that the technology itself would see us as an enemy or at least as an inferior being to be controlled, with or without our knowledge. One who was deeply concerned about such a dystopia was, for example, the famous author Dr. Yuval Noah Harari, who in an article for the Israeli online paper YNet wrote thus:

The artificial intelligence tools that have appeared in recent years threaten the survival of human civilization from an unexpected direction. Tools such as ChatGPT have developed extraordinary abilities to understand and create language through words, sounds, or images. Artificial intelligence has cracked the code of the human operating system… What will happen when non-human intelligence is better than the average human at telling stories, composing music, painting pictures, and writing laws, and knows how to exploit human weaknesses and addictions better than any human?

So it seems that the future is already here; Asimov’s vision has not yet been realized, but we have here a thick hint that it could be, if we so desired. The currently available tools are probably not yet strong enough to be considered “superintelligence”, but the various tech companies are working intensively to strengthen them, and all manner of researchers in the field predict that unless we take precautions, the day is not far off when we will indeed create such a superintelligence. And on that day, what would prevent it from manipulating the world behind the scenes, in a way that none of us would even be aware of? And who knows what its motivations would be? Would it be an ally, or a foe?

Harari is one of those who is at the far end of the spectrum in terms of the level of concern he has about these developments. There are computer scientists, like Yann LeCun, who believe that Harari is overstating the case, and that we have the knowledge and the ability to control these sophisticated systems. But of course even more relaxed individuals like Yann agree that they are powerful tools, capable of doing much damage through such things as fake news and the like, and certainly we need to be careful about which systems we give them access to.

But what intrigued me when I first became interested in this discussion was the very name given to the whole area: Artificial Intelligence. Many of the researchers leading the AI revolution speak freely of it as an “intelligence”, a “mind”, and emphasize the technological progress that is enabling these programs to become, slowly but surely, smarter than humans. Humans now lose to such programs at chess and at the more complex game of Go, and these are only the more famous examples of how man is gradually losing his status as the smartest creature on earth. In a post I read online, it was claimed that when GPT-4 was administered an IQ test, it scored 155, which on the accepted scale would make it equivalent to an “extremely gifted” human. And what about GPT-5? Or GPT-15?

And yet, despite these impressive achievements, something bothered me about this. I wanted to know — are these machines really “intelligent”, or is it all just a label that is being used because it is cool?

In the upcoming posts we will explore this question in depth. For it turns out that whether or not it is intelligence has quite broad implications for how we approach using these tools and promoting them in society. For instance, Harari, in the same Ynet article, believes these tools are actually intelligent. “We have just encountered an alien intelligence, right here on Earth”, he declares emphatically. “We know little about this intelligence, except that it may destroy our civilization”. He seems to think of artificial intelligence as an extraterrestrial of supreme technological prowess that has landed here on our little blue globe, and therefore there is much to be concerned about. On the other hand, if it is not a matter of intelligence, but only of a tool we control that has gotten a power boost, then perhaps things are less alarming, and will not lead to the social and political upheaval that he fears.

This series is a work in progress. But as I see it now, I’d like to start by building the case of those who proclaim definitively that it is an intelligence of sorts. Having established their position (and they have very persuasive arguments, I must say), I will present the matter from the opposite angle, and leave you, the readers, to judge.

Perhaps the first person to ask the question “Can machines think?” in an academic setting was Alan Turing; and if he was not the first, he was certainly the most famous. The famous Turing Test is named after him, and it was published, surprisingly, in the same year as Asimov’s book: 1950. The test was designed to offer a way of settling the question we posed: can machines think? So it seems to me that the best place to start is there, in the fevered brain of that fascinating progenitor of computer science. Join me next time as we delve into the story of Turing, his test, and its implications, which still shape AI research to this day.
