ARTIFICIAL INTELLIGENCE

The AI Conundrum

God doesn’t play dice, but humans do

Nikos Papakonstantinou
Lampshade of ILLUMINATION

--

A hand tossing a large wooden die in the air
Photo by Ryunosuke Kikuno on Unsplash

First of all, a necessary disclaimer: I’m not an expert on artificial intelligence, as so many seem to be these days. The real experts are the people who work on creating or “optimizing” AI, and those people are, to the best of my knowledge, coders. So, unless someone really knows what is and isn’t feasible in programming code, it’s impossible to make safe assumptions about the future of AI or its capabilities. In fact, partly due to the way deep learning systems work, it might be difficult even for the real experts to predict the possible outcomes.

That’s exactly the concern of AI experts such as Eliezer Yudkowsky, who have worked on what is called AI alignment: the challenge of making truly intelligent AI align with our fundamental values, such as respect for the sanctity of life. But why say “truly intelligent” or, even better, true AI?

Because what we have right now is known as ANI (Artificial Narrow Intelligence). In other words, the kind of algorithm that can get very good at very specific, narrow tasks: pattern recognition, playing a board or video game, or driving a car.

Despite what many people think, ChatGPT is nothing more than an ANI based on deep learning, just like self-driving AIs. The main difference is that ChatGPT is built explicitly to predict which word should follow the ones before it, which is why it is called an LLM (Large Language Model). It’s a guessing algorithm that can pass the Turing test and, in fact, it’s the second one to do so.
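
To get a feel for what “predicting the next word” means, here is a minimal sketch in Python: a toy model that counts which word tends to follow which in a tiny text and always guesses the most frequent follower. This is not how ChatGPT works internally (real LLMs run neural networks over billions of parameters and operate on tokens, not whole words), but the underlying task is the same.

```python
# Toy next-word predictor: count which word follows which in a tiny corpus,
# then always pick the most frequent follower. Real LLMs do something far
# more sophisticated, but the core task (guess the next word) is the same.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat ate the fish .".split()

followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training, or None."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # -> 'cat' (seen twice after 'the')
print(predict_next("cat"))  # -> 'sat' (ties broken by first occurrence)
```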

It’s certainly much more entertaining than most ANIs. That’s because ANIs are tools, and tools are boring to anyone who doesn’t specifically need them. ChatGPT is different. It was developed with the intention of interacting with humans and engaging in interesting conversations.

What Yudkowsky and many others fear is the inscrutable way in which deep learning systems “learn”. The process itself is understood, of course, in broad terms. It’s just that said process goes on in what he calls “giant inscrutable matrices of floating point numbers on gradient descent” that “nobody understands at all”. Despite this, ChatGPT is no threat to anyone. In fact, it often gets things wrong. Most people wouldn’t expect a “machine” to get math problems wrong, but this seems to happen when restrictions are placed on it to make it avoid controversy and inappropriate answers.
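
Gradient descent itself, for what it’s worth, is not mysterious. Here is a minimal sketch that fits a single number by repeatedly nudging it downhill on its error. What makes real deep learning so hard to inspect is that the same procedure runs over billions of such floating-point numbers at once.

```python
# Minimal sketch of gradient descent: nudge one parameter w to fit y = 2x.
# Deep learning applies the same idea to billions of floating-point
# parameters simultaneously, which is why the result is so hard to inspect.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # pairs (x, y) with y = 2x

w = 0.0              # the single "weight" we are learning
learning_rate = 0.05

for step in range(200):
    # Gradient of the mean squared error (w*x - y)^2 with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad  # take a small step downhill

print(round(w, 4))  # converges to ~2.0
```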

The human brain has the ability to tackle “hard” science, such as mathematics, and also much more abstract concepts, such as ethics. It seems that if we try to teach an algorithm how to handle delicate social issues or ethical matters, it loses the ability to do simple arithmetic. Perhaps because, when discussing ethics, 1 might be more valuable than 2. Consider a version of the classic trolley problem: on one track we have a baby, and on the other, two octogenarians. The trolley will run over either the baby or the elderly couple, depending on our decision. An algorithm would reason that killing one person is better than killing two. A human might decide that a baby’s life is worth protecting more than the lives of two elderly people who might very well die tomorrow. The algorithm wouldn’t take that into consideration, because in absolute, mathematical terms 1 is always less than 2. In ethics, it’s not necessarily so. But an algorithm without true intelligence can’t make the distinction. Introduce uncertainty like that, and its math ability is apparently also compromised.
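
To make the contrast concrete, here is a purely illustrative sketch of two decision rules for that trolley dilemma. The numbers are invented for the example; the point is only that “fewest casualties” and “weighted by how much we value each life” can give opposite answers.

```python
# Purely illustrative: two ways a decision rule could pick which track to
# sacrifice. The "weights" are invented for this example; nothing about real
# ethics (or real AI systems) reduces to numbers this neatly.

tracks = {
    "A": ["baby"],
    "B": ["octogenarian", "octogenarian"],
}

def fewest_casualties(tracks):
    """Naive rule: sacrifice the track with the fewest people (1 < 2)."""
    return min(tracks, key=lambda t: len(tracks[t]))

def weighted_value(tracks, weights):
    """Alternative rule: sacrifice the track with the lowest total 'value'."""
    return min(tracks, key=lambda t: sum(weights[p] for p in tracks[t]))

print(fewest_casualties(tracks))                               # -> 'A' (the baby)
print(weighted_value(tracks, {"baby": 10, "octogenarian": 3})) # -> 'B' (the couple)
```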

Let’s suppose, then, that a state or organization that prizes efficiency more than moral considerations is looking to build the best possible AI, risks be damned. Are AI researchers going to play by the rules, or will they resort to any method available in order to boost the efficiency of their algorithms? And what happens when someone decides to develop a self-improving AI, without seriously examining the possible consequences?

In fact, that’s likely what DeepMind just did with their “RoboCat” project. Of course, the robot equipped with this new AI will use its self-improvement features to move and execute tasks better, so its scope is limited.

However, the co-founder of DeepMind, Mustafa Suleyman, has proposed a new type of AI test to replace the now outdated Turing test: AIs would be given $100,000 with the goal of turning it into a million. This means that the AI would have to design a product or service, set up a production/distribution chain and bring it to market. He said that he expects this test to be beaten within the next two years.

Clearly, such an AI would move considerably up the ladder toward the elusive AGI (Artificial General Intelligence) goal. The leap from AlphaZero, a system that could beat the strongest players, human or machine, at chess and Go, to a free-form strategy game such as StarCraft was already huge. In fixed-board strategy games there is a huge, but finite, number of possible moves. A sufficiently powerful system can search deep into that space and pick a near-optimal strategy (or, rather, a near-optimal sequence of moves), far more efficiently than a human can. In a (relatively) free-form strategy computer game such as StarCraft, where two (or more) players go head-to-head on a map and have to consider terrain advantage, access to resources, combat unit features, and the optimal defense/attack plan, the possible moves are practically infinite. AlphaStar was nevertheless able to soundly beat two human masters at StarCraft II, although it was afforded some mechanical advantages that human players can’t have.
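
The difference is easier to see with a toy example. The sketch below is not AlphaZero’s actual method (which combines deep neural networks with Monte Carlo tree search); it is a bare-bones, exhaustive game-tree search on a tiny Nim-style game I made up for illustration, just to show what “searching a finite space of moves and finding a winning sequence” looks like when the game is small enough.

```python
# Exhaustive game-tree search on a toy game: players alternately remove 1-3
# stones from a pile; whoever takes the last stone wins. Because the space of
# moves is finite, the program can check every continuation and play perfectly.
from functools import lru_cache

@lru_cache(maxsize=None)
def best_move(stones):
    """Return (move, True) if the player to move can force a win,
    otherwise (fallback_move, False)."""
    for take in (1, 2, 3):
        if take == stones:
            return take, True          # taking the last stone wins outright
        if take < stones:
            _, opponent_wins = best_move(stones - take)
            if not opponent_wins:
                return take, True      # leave the opponent a losing position
    return 1, False                    # every move leads to an opponent win

print(best_move(10))  # -> (2, True): leave 8 stones (a multiple of 4) and win
```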

Regardless, going from a computer game that still has some fixed variables, such as unit statistics and abilities, to a real-world “game” of manufacturing, distribution, and marketing, where the only considerations that affect “victory” are revenue and costs, is an even bigger leap. If an AI can indeed do that, whether in two, five, or ten years, that’s when we should start worrying.

The real question is: why should we try to develop an AGI in the first place? Unfortunately, for the time being, the main motives behind this research have a lot to do with military capabilities, then social control, followed by increasing profits, and only finally with the one thing that really matters: solving serious problems against which we have had only partial success so far (think of a cure for cancer), or critical issues with civilization-ending consequences (such as resource depletion and the climate crisis). This is not hypothetical: DeepMind has already contributed to solving protein folding.

This example of a decades-old scientific challenge, where an AI-driven effort managed to make an impact in just two years, is indicative of what AI could offer us. Although certainly not all applications are benign (recommendation algorithms on YouTube, for example, have been much maligned for contributing to the spread of fake news), AI helps significantly in many fields, from navigation to data security, astronomy, and healthcare. Most of these applications have to do with the processing and analysis of extreme volumes of data, which would be impractical, if not impossible, for humans to handle. And while discovering exoplanets that can sustain life is a very exciting, but not vital, application of AI, there are other fields where its assistance would be invaluable.

Simply put, many of the challenges we are facing, not the least of which is the climate crisis, have outgrown our capacity for solving them in a reasonable time frame. If an ANI can offer such spectacular results, simply by scaling up its capabilities and using large amounts of data for training, think about what an AGI could do. And a hypothetical ASI (Artificial Super Intelligence)? We could hardly even imagine. However, that’s not necessarily a good thing.

Which brings us right back to the problem of AI alignment. Dr. Geoffrey Hinton, called the godfather of AI by some, recently left Google after ten years of spearheading their research and contributing greatly to the development of current LLMs. According to what he said to the New York Times, he believed that Google had handled AI responsibly, until Microsoft started incorporating a chatbot into its Bing search engine, thus making Google feel threatened. This is not at all unlike what is happening on a state level. Any significant development from China in AI research could make the U.S. feel like it should take more risks to avoid being left behind.

Dr. Hinton called AI a very “different kind of intelligence” compared to ours. He described it somewhat in terms of a hive mind: an entity composed of many thousands of individual “brains” that can share new knowledge between them instantly. Could we hope to teach human values to such an alien entity?

Hinton wasn’t even the first prominent researcher to leave Google. In late 2020, Timnit Gebru revealed that she was fired because she refused to retract her signature on a paper detailing problems with the way Google was handling AI research. She went on to found Distributed Artificial Intelligence Research (DAIR). Another former high-ranking executive, Mo Gawdat, who was the chief business officer at Google X, had this to say about how he perceives the seriousness of the problem:

“It is beyond an emergency. It’s the biggest thing we need to do today. It’s bigger than climate change believe it or not.”

These incidents and warnings seem to support Elon Musk’s statement that he has fallen out with Larry Page, co-founder of Google, because Page was “not taking AI safety seriously enough”. Musk claimed that his former friend wants digital superintelligence to happen as soon as possible, and that he co-founded OpenAI because of this very disagreement. Unfortunately, OpenAI itself proved to be less open than it was originally intended to be. In less than four years, it turned from a non-profit into a capped-profit organization. That came just a year after Musk attempted to take over the company, and then left.

Whether he truly believes the criticism he has since leveled against it is uncertain, but the fact that OpenAI received a 1-billion-dollar investment (note: investment, not donation) from Microsoft shortly after changing its status to capped profit seems to support Musk’s claims. Whatever his intent may have been, OpenAI, under the wing of Microsoft, now seems to be in direct competition with Google not only in AI research but also in corporate irresponsibility: about two months after Microsoft fired its ethics and society team, its corporate VP and chief economist, Michael Schwarz, casually stated at the World Economic Forum (WEF) that “we shouldn’t regulate AI until we see some meaningful harm”.

OpenAI at least pretends to be more responsible than Microsoft: early last month it announced the Superalignment project and committed to devoting 20% of its computing capacity to support the effort.

When looking at the recent history of AI research, a recurring pattern appears. Competition seems to be turning academic research away from its intended goals and into the hands of businessmen, who seem much less concerned with the warnings of respected researchers against recklessly pursuing progress in the field and more focused on delivering results before the competition, no matter what. It is treated as just another race, like the space race or the nuclear arms race, ignoring the fact that this “weapon” has the ability to turn on its wielders on its own.

Even if we don’t believe the dire warnings of Yudkowsky, Gawdat, and Dr. Hinton, the open letter calling for at least a temporary pause on training LLMs more powerful than ChatGPT should give us food for thought. Signed by Musk, Yoshua Bengio, Steve Wozniak, John Hopfield, and many others, some of whom were pioneers in the field of AI research, the letter points to the fact that we don’t know what to expect from a true AI.

The dangers of an out-of-control AI race are not ignored by everyone. The EU is spearheading the effort to impose some control on AI research with the AI Act, the first legislative initiative of its kind in the world. The OECD has launched an AI Observatory initiative that has issued a list of Tools & Metrics for Trustworthy AI in order to provide guidance to companies. While this shifts the weight of responsibility onto companies themselves in a non-binding manner, some claim that law and governmental oversight are too slow to effectively monitor such a rapidly growing field. Others, such as respected AI researcher Stuart Russell, are adamant about the need for strict government control should corporations continue their wild AI race unchecked. OpenAI’s CEO Sam Altman has stated that his company would leave the EU should the AI Act prove difficult to comply with.

The sheer hubris of the idea that we can deal with the problem after it arises, being reactive instead of proactive, reminds me of the all-too-similar idea that we could control our climate whenever we needed to. Well, that time is now, but there is no technology yet that is ready to address the crisis. Similarly, our imagined self-importance has led us to believe that we could contain a hostile AGI or, worse yet, an ASI, when containing the latter especially would be comparable to ants attempting to imprison a human.

But many don’t see it that way. They think that if true AI turned out to be hostile, we could take steps to control it. They forget that we have never had to face a non-human entity that is as smart as we are, let alone one that is orders of magnitude smarter.

They want to roll the dice.
