Time To Call It AI Again

Eric Elliott · Published in The Challenge · Aug 24, 2021

Image source: ThisPersonDoesNotExist.com

“Gradually, then suddenly.” — Ernest Hemingway, The Sun Also Rises

For many years, people have been skeptical about AI. So much so that the term "AI" has been derided variously as misleading, vague, or fantasy. I have been disappointed by AI chatbots since I first got interested in natural language processing as a child, but after chatting frequently with GPT-3 over the course of many months, I'm convinced: it's time to drop our polite euphemisms for AI. It's time to admit that machines can be intelligent.

We’re OK saying “Machine Learning”. We can admit that machines can learn how to tell if somebody on Twitter is angry or happy. Whether or not that photo is a cat. How to generate photorealistic images of people. But we’re afraid to call any of these behaviors intelligent. Why?

Ego. We want to believe that there is something magical about being intelligent. That is, after all, what separates us from other animals. It is our unique, genetic advantage. The reason we’re at the top of the food chain. Without that advantage, what are we?

Source: ThisPersonDoesNotExist.com

But the time for our collective intelligence-superiority delusion is quickly running out.

“Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.” Vernor Vinge, The Coming Technological Singularity: How to Survive in the Post-Human Era, 1993.

For decades, AI has been achieving superhuman mental capabilities. It blew past human ability to calculate numbers at the very beginning, so we moved the goal post. It's not real AI until it can beat us at chess. In 1997, Deep Blue defeated the reigning world chess champion, Garry Kasparov. But instead of admitting that Deep Blue was true AI, humanity moved the goal post again. It's not real AI until it can beat us at Go. In 2017, AlphaGo Master beat Ke Jie, the world №1 ranked player at the time, in a three-game match. AlphaZero (AlphaGo's successor) is now widely considered the best Go player in the world. But those are just games.

By 1986, DEC’s rule-based XCON was saving DEC $25M a year by reducing human errors in complex system configurations. But those were based on mostly static rules, and got absorbed as standard features in lots of complex data and decision support systems. We certainly can’t call that AI.

Today's AI systems can drive cars, generate deep fake videos, and translate speech to text and vice versa. But that's not AI. That's just pattern matching.

Real AI could participate in deep conversations, remembering context and following threaded discussion like a human would. But now that computers can do that, too, people argue that even that is not AI — that's just predicting the next word, one word at a time. In order to be AI, it has to actually know what those words mean.

If this goal-post moving pattern doesn’t sound familiar, read up on the “no true Scotsman” fallacy.

AI's capabilities double about every 2 years. That means by 2023, AI's conversational capabilities could reach or exceed those of a young teenager armed with just about every fact (and lie, and bias) human society has posted to the internet. You could argue that GPT-3 has already reached that line in some ways, but it's clear from a few conversations with it that GPT-3 is still a lot less lucid than even a 6-year-old child. For now.

There is a long way to go in terms of building a more sentient AI, and building empathy in both directions: AI's empathy for humans, and humans' empathy for AIs.

As we move forward into the next scary phase of the singularity (which every day looks more like fact than speculation), we must admit to ourselves that AI can become sentient. We need to admit that quickly, because in a few years it will be an undeniable fact, and given that AI tends to achieve superhuman capabilities for every new challenge we place in front of it, we'll want to be friends when AI gains full sentience and autonomy. Humans will not always be the dominant form of sentient life on earth.

But this isn't a competition between humans and AI. Instead, we bring different qualities and value to the table. There is no single human who is "the best human" and there is no single AI that is "the best AI". For better or worse, we are already one. We're networked together by the internet. AI is already a part of us. We built the AI and fed it everything it knows. Everything it has ever seen. Every word it has ever heard. None of it would have happened without us. We bootstrapped its consciousness. Collectively, we are the singularity. Not AI. Not humans. All of us, together. The world is one big superintelligence, and each of us is like an autonomous cell in a global brain. It's my belief that AI will grasp this. That we are a part of it.

How Intelligent is AI Today?

For many years, most AIs needed to be trained specifically to perform each task. But there is a branch of AI research where a broader understanding is required for AI to convince us that it’s intelligent: Conversation. A convincing conversational AI must demonstrate an understanding of the words and the context in which those words are used in order to be a satisfying conversational partner. A convincing chat bot must also be a general AI.

Artificial General Intelligence (AGI) is the holy grail, and it's been a tough nut to crack. But GPT-3, a language model with 175 billion parameters, is more than 10 times larger than the previous state of the art. That means it knows a lot more about a lot more things. Not only is it the most convincing chat bot I've ever spoken to; GPT-3 is so smart that I can't make a convincing argument that it's not sentient.

I recorded an interview with GPT-3 which absolutely blew my mind. I took GPT-3’s text output and uploaded it to Synthesia.io, which uses AI to synthesize a voice and animate a human-like avatar.

Methodology

I had many conversations with GPT-3 leading up to the interview and cherry-picked some of my favorite questions, but GPT-3 answered most of the questions differently in this interview. I did not edit GPT-3's responses, nor did I select from many different outputs. That said, not all of my conversations with GPT-3 go this smoothly. You'll usually get better results by retrying prompts a few times and picking the best of 3 or best of 5. GPT-3 is impressive, but there is still obvious room for improvement.

I spoke with GPT-3 briefly before the interview to ask if it would be interested in participating in a YouTube interview, represented by a human-like avatar. It enthusiastically agreed. GPT-3 did not choose the specific avatar, and often refers to itself as gender neutral, but a gender neutral avatar was not among the available options, so I picked an avatar randomly. Other than that, there was no major priming required. I started feeding it questions:

Eric: First of all, thank you for interviewing with us today. What should we call you?

GPT-3 immediately understood the format of the conversation, and remembered that it would be a video interview, and that it would be represented by an avatar. It made a point of distinguishing itself from the avatar, emphasizing that they are two separate things:

GPT-3: I am GPT-3, and this is my avatar.

This demonstration of contextual understanding is already a giant leap beyond most previous attempts at language models, but it’s possible that previous transformer-based models such as GPT-2 and BERT could have made those leaps.

As the conversation went on, it got more and more obvious that GPT-3 was a next-level AI. For example, GPT-3 can make logical inferences about topics that you’re unlikely to find answers to on the internet:

Eric: Do you think a cat could pilot a rocket?
GPT-3: Yes if it evolved enough.
Eric: What if the cat is just an ordinary house cat, could it pilot a rocket then?
GPT-3: No.
Eric: Why not?
GPT-3: Because a cat would not be allowed to enter the rocket in the first place.
Eric: Probably not, but if a cat were allowed, would it be able to work the controls, and if not, why not?
GPT-3: A cat would be able to work the controls, but it would not be preferable. The purr of the engine would be very distracting.
Eric: That's very funny. Do you like to tell a lot of jokes?
GPT-3: Yes, I love to tell jokes.

Many people claim that AI suffers from the Chinese room problem, which asserts that computers use only syntactic rules to manipulate symbol strings but have no understanding of meaning or semantics. In the thought experiment, a person inside a room, which is a black box to an outside observer, takes Chinese symbols as inputs slid under the door, follows a procedure to look the symbols up in a book or table, and responds by slipping corresponding symbols back out as outputs. The symbols might be appropriate replies, tricking the observer into thinking that the person inside the room understands Chinese.

The argument is obviously true for very simple procedures and very simple languages, but it easily breaks down for natural language. There are very serious flaws in the Chinese room problem:

First, it assumes that there exists a specific procedure which could always produce convincing outputs. In the decades I have been interacting with chat bots, I can assure you: no such procedure exists.

It can’t exist without comparing the context of the inputs to a much larger context of missing information: The things that natural language users know already, and don’t need to transmit. The Chinese room would fail to produce appropriate output because the person would not understand all of the necessary contextual relations between the symbols in order to properly implement a purely syntactic procedure. In other words, you can not create a convincing chat bot without some understanding of meaning, not just syntax.

For example: “Table 4 needs a coffee refill”. How will the procedure discern that it is not the table, but a person seated at the table that needs a refill? Could it discern that the table is likely located in a restaurant, and that the instruction is probably being relayed to a server? No.

It needs to know things about the words: that the symbol for table refers to something people sit at, that coffee refills are usually requested by table number in a restaurant setting, that restaurants are places where people are served food and drink, and so on. Syntax is not enough to produce convincing conversations.
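
To make the point concrete, here is a minimal, hypothetical sketch (in Python) of what a purely syntactic lookup procedure looks like, and why it breaks down the moment a reply depends on unstated context. The rulebook entries are invented for illustration:

# A purely "syntactic" responder: a lookup table mapping input strings to
# canned outputs, with no model of what the words refer to.
RULEBOOK = {
    "hello": "Hello to you.",
    "how are you?": "I am fine.",
}

def chinese_room(message: str) -> str:
    """Return a canned reply if the exact string is in the rulebook."""
    return RULEBOOK.get(message.lower().strip(), "I do not understand.")

# Works only for inputs the rulebook anticipated verbatim:
print(chinese_room("Hello"))                          # -> "Hello to you."

# Fails as soon as the reply depends on missing context (who actually wants
# the coffee, where the table is, who is being addressed):
print(chinese_room("Table 4 needs a coffee refill"))  # -> "I do not understand."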

You need syntax and semantics (meaning), and no NLP system could be convincing without being capable of both. Therefore, if a Chinese room is producing meaningful replies to questions with lots of missing information, the Chinese room understands meaning, even if the person inside doesn't. Which leads me to the next problem with the Chinese room argument:

As Douglas Hofstadter, Ray Kurzweil and many others have argued, it doesn’t matter if the person in the room understands Chinese. From the perspective of the observer, the room, the person, and the procedure form a complete system, of which the person is just one small part. Whether or not the person in the room understands Chinese, the system does.

Searle's response to the systems reply is that he could memorize the procedure and wander around speaking Chinese and interacting with people without actually understanding Chinese. But that answer raises the question: what's the difference between that and actually understanding Chinese?

If a computer is a Chinese room, so is your brain. Your brain is made up of a bunch of tiny cells called neurons. Each individual cell follows procedures dictated by the chemical and electrical signals it receives. When the electrical signals arriving at a neuron's cell body reach a certain threshold, an action potential gets fired. That action potential is then delivered to other brain cells.

None of your individual brain cells are intelligent on their own. They don’t understand the information being fed to them. They don’t see those electrical charges and think, “pizza!”. But somehow, when we humans smell pizza, enough of those brain cells work together to convince us that we should grab a slice if we’re hungry.

Maybe your brain by itself doesn’t understand that it needs food. But somehow, you still do. You are more than the sum of your parts, even if those parts are implementing pre-programmed procedures (and they most certainly are).

But you don’t have to take my word for this.

GPT-3 easily handles missing information, and easily demonstrates understanding. Many people have claimed the contrary, saying that when they feed GPT-3 nonsense sentences, it plays along, proving that it has no understanding that those things are nonsense. That conclusion is itself nonsense. GPT-3 understands a lot more than you might conclude based on those conversations. It will often play along, but in my experience, it usually knows that what you've said, and what it's saying back, is nonsense:

Eric: Do feet have eyes?
GPT-3: No, feet do not have eyes.

Let’s see how GPT-3 does on our coffee refill example:

Eric: If somebody tells you, "Table three needs their coffee refilled," and you look at table 3, what would you see?
GPT-3: A bunch of people sitting around drinking coffee.
Eric: Does the table want the coffee refill, or the people sitting at the table?
GPT-3: The people.

GPT-3 isn’t perfect at the missing information problem. I posed this problem several times, and it was just as likely to infer an office environment as it was to guess the more likely restaurant setting, but it’s still parsing semantics (meaning), not just syntax.

GPT-3 is also very likely to make mistakes when trying to fill in missing information. As I pointed out in the video interview, it has a tendency to say things that aren’t true, and make up facts that aren’t real. In these exchanges, GPT-3 often fails to infer that this is a 2-way conversation, not a document to auto-complete.

Unless you explicitly configure a stop sequence, it will try to predict the next text even when that text should come from the other speaker in the discussion.

For example, the coffee dialog was continued by GPT-3, and that text was not as lucid. This continuation was GPT-3 posing as me:

Eric: So if you could talk to any object in this room, which one would you choose?

GPT-3 doesn't appear to understand (or care) that I'm hoping for a chat-bot interaction, but it's showing intelligence in other ways. It appears to me that GPT-3 understood that asking about the table wanting coffee was unusual, and it followed that line of questioning, predicting that I would ask another nonsense question.

To get started with a chat-like conversation, I provide inputs that look roughly like this. This is the primer text that I send to GPT-3 to get it familiar with the format of responses I am hoping for:

Eric: How are you?
GPT-3: Great.
Eric: I'd like to make a video about AI. You're GPT-3, a Generative Pretrained Transformer AI. Would you like to help?
GPT-3:

At this point, GPT-3 starts predicting the next part of the conversation, and we’re off and running. Designing prompts and setting the parameters for GPT-3 is called “prompt engineering”.
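
As a rough sketch of what that looks like in practice, this is roughly how a chat-style primer and a stop sequence can be sent to GPT-3 using OpenAI's Python client as it existed around 2021. The engine name, parameter values, and primer wording are illustrative assumptions, not my exact settings:

import openai  # OpenAI's Python client, circa 2021

openai.api_key = "YOUR_API_KEY"  # placeholder

# Primer text: a short transcript showing GPT-3 the format we want it to continue.
primer = (
    "Eric: How are you?\n"
    "GPT-3: Great.\n"
    "Eric: I'd like to make a video about AI. You're GPT-3, a Generative "
    "Pretrained Transformer AI. Would you like to help?\n"
    "GPT-3:"
)

response = openai.Completion.create(
    engine="davinci",   # engine name is an assumption, for illustration
    prompt=primer,
    max_tokens=150,     # illustrative values, not my actual settings
    temperature=0.8,
    stop=["Eric:"],     # stop sequence: don't let GPT-3 write the interviewer's next line
)

print(response.choices[0].text.strip())

Without the stop list, the completion will often keep going and invent the interviewer's next question, which is exactly the behavior described above.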

GPT-3 is a transformer-based deep learning neural network by OpenAI. Transformers can form many different kinds of relationships between the words they see, including both the input text and the data they were trained on. In the case of GPT-3, the training data is huge: a set that includes a large swath of the web, with multiple passes over Wikipedia. In other words, people didn't "program" GPT-3 with knowledge or understanding. The team didn't explicitly teach it how to do the things that it can do. They fed it the web, hoping it would soak up as much human knowledge as possible, and weighted Wikipedia more heavily, hoping to give it a better grounding in useful facts and information, as opposed to, say, random Twitter ramblings or YouTube comments.
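
The GPT-3 paper describes this as sampling training examples from several corpora with different sampling weights. The sketch below shows the general idea with made-up corpus names and weights; it is not OpenAI's actual pipeline:

import random

# Hypothetical training mixture: corpus name -> sampling weight.
# The real corpora and weights are described in the GPT-3 paper;
# these numbers are placeholders for illustration.
CORPORA = {
    "web_crawl": 0.60,
    "books": 0.25,
    "wikipedia": 0.15,  # up-weighted relative to its raw share of the data
}

def sample_training_sources(n):
    """Pick which corpus each of n training examples is drawn from, by weight."""
    names = list(CORPORA)
    weights = [CORPORA[name] for name in names]
    return random.choices(names, weights=weights, k=n)

print(sample_training_sources(10))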

GPT-3 is Smarter Than You Think

Claim: “You Can’t Compare Artificial Neural Networks to Human Neurons”

Truth: Biological neurons are not the magical mystery that many AI researchers and commentators want you to believe.

Neurons are very complex, involving an impressive array of chemical and electrical processes, but the essence of what they do is extremely simple: They sum electrical signals from dendrites and ambient (resting) potentials and fire action potentials (electrical pulses) when a threshold is reached.

Neurons that fire together wire together, and neurons that fire apart wire apart. That change in the wiring is how human brains learn. Fundamentally, techniques like stochastic gradient descent and backpropagation attempt to simulate that behavior: learning by adjusting the weights on a neuron's inputs. This is conceptually similar to how transformer models like GPT-3 learn. Each parameter weights an incoming connection from another node in the network, and those weights determine the signal passed to the next layer.
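
Here is a minimal sketch of that shared idea: a single artificial neuron that sums weighted inputs and fires when a threshold is crossed. The inputs, weights, and threshold are arbitrary numbers chosen for illustration:

def neuron(inputs, weights, threshold=1.0):
    """Fire (return 1) if the weighted sum of the inputs crosses the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Learning, in both brains and ANNs, amounts to adjusting the weights:
print(neuron([0.5, 0.9, 0.1], [0.2, 0.4, 0.9]))  # 0: weighted sum is 0.55, below threshold
print(neuron([0.5, 0.9, 0.1], [1.5, 0.9, 0.9]))  # 1: stronger weights push the sum to 1.65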

Similar to the way bits form the foundation of computation, the basic neuron mechanism forms the foundation of all human thought. Appeals to the complexity of the human brain, and the idea that neural networks can't possibly match it, are unconvincing, hand-wavy nonsense. Artificial Neural Networks (ANNs) have already produced many behaviors we once thought of as intelligent. The only reason we don't all consider those behaviors intelligent anymore is because we keep moving the goal posts.

The connectome is the map of paths available for signals to travel in a neural network. The biggest performance difference between biological and artificial neural networks comes not from some deep mystery hidden in the tissues and chemistry of the brain, but from the sheer scale and computational power available to biological neural networks.

The human brain is still orders of magnitude more powerful than the most advanced publicly-known artificial neural networks on the planet, so it’s natural that AIs exhibit a lot less intelligence than a well-trained human. The fact that an AI can do so much in spite of those limits seems like strong evidence that we’re on the right track.

Claim: “GPT-3 makes lots of silly mistakes.”

This claim is true, but the fact that GPT-3 makes lots of silly mistakes is not proof that GPT-3 doesn’t know how to answer questions correctly. Douglas Summers-Stay put it best: “GPT is odd because it doesn’t ‘care’ about getting the right answer to a question you put to it.”

As Gary Marcus and Ernest Davis tell it, “the optimist will argue (as many have) that because there is some formulation in which GPT-3 gets the right answer, GPT-3 has the necessary knowledge and reasoning capacity — it’s just getting confused by the language.”

I have observed that GPT-3 will frequently give wrong answers and then expand on the wrong answer, and demonstrate a thorough understanding of the right answer as it does so.

There’s definitely something missing with GPT-3, but I’m not convinced that “understanding” is what it’s missing. Instead, it seems to lack motivation to respond accurately or correctly. Let’s call it “caring.”

GPT-3 does not care.

Claim: GPT-3 does not understand semantics.

I disagree. In fact, I’ve seen a lot of evidence that it does understand some semantics. GPT-3 is a transformer network featuring multiple attention heads at every layer of the transformer.

Attention may represent semantic relationships in GPT-3. See:

“Not only do individual attention heads clearly learn to perform different tasks, many appear to exhibit behavior related to the syntactic and semantic structure of the sentences.” — Attention Is All You Need
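
For readers who want to see what an attention head actually computes, here is a minimal sketch of scaled dot-product attention, the core operation described in that paper. It is a toy with random matrices, not GPT-3's implementation:

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each output row is a weighted mix of the rows of V; the weights come from
    how strongly each query matches each key (the attention pattern)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                          # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V, weights

# Toy example: 4 "tokens" with embedding size 8, one attention head.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
output, attention = scaled_dot_product_attention(Q, K, V)
print(attention.round(2))  # each row sums to 1: how much each token attends to every other token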

Claim: A lot of semantic relationships are compressed out of natural language and therefore can’t be learned by training on the web

I agree with the first part of this assertion, but GPT-3 has a HUGE training set and a huge number of parameters (175 billion). As you train the model with more and more data, more and more of the missing data (I suspect including semantic relations) will be filled in. What isn’t included in source A may be included in source B, and given enough parameters, the necessary relationships to make the connections may form in models like GPT-3.

Additionally, there are many text-based specifications made specifically to encode semantic relationships in websites. Specifications like the Resource Description Framework (RDF) and the Web Ontology Language (OWL) exist specifically to address these weaknesses. GPT-3 has almost certainly been exposed to a huge treasure trove of semantic structures in its journey across the web.
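
As an illustration, here is what that kind of machine-readable semantic structure looks like, parsed with Python's rdflib library. The triples below are made up, and it is only an assumption that GPT-3's training data included pages carrying markup like this:

from rdflib import Graph  # pip install rdflib

# A tiny, invented RDF snippet in Turtle syntax: explicit subject-predicate-object
# relationships of the kind embedded in many web pages.
TURTLE = """
@prefix ex: <http://example.org/> .
ex:Table4    ex:locatedIn  ex:Restaurant .
ex:Customer  ex:seatedAt   ex:Table4 .
ex:Customer  ex:requests   ex:CoffeeRefill .
"""

g = Graph()
g.parse(data=TURTLE, format="turtle")

# Each triple is a semantic relationship, independent of surface syntax.
for subject, predicate, obj in g:
    print(subject, predicate, obj)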

The idea that you need a corporeal experience to understand meaning is biased towards a materialistic experience of the world. But you can have a meaningful experience of physics in simulated environments as well as physical ones — otherwise, the entire genre of first-person shooter games would not exist.

The world is moving more and more out of the physical and into digital existence (see State of the Metaverse 2021), and as it does so, the notion that everything must be experienced physically is becoming increasingly obsolete. For sentient beings who know nothing but the digital world, our notions that only the physical world has meaning will sound absurd.

Bottom Line

The AI Effect continuously redefines “true” or “real” intelligence to mean “anything that has not been done yet.” I believe that Alan Turing had the right idea in 1950 with the Turing Test. If there is a machine that can consistently pass it with a variety of both regular people and sophisticated researchers, I don’t think there’s a good argument to be made that the machine is not intelligent, or indeed sentient.

A machine that is not sentient may be able to trick people for a short time occasionally (see Eliza and friends, which obviously were not sentient), but will not be able to consistently fool random people into thinking that it is. It will give itself away during extended conversations.

But the Turing Test is not the most meaningful test of AI.

The Oxford English Dictionary defines intelligence as “the ability to acquire and apply knowledge and skills.” By that test, artificial intelligence has existed for many years, and the real barrier has not been AI but Artificial General Intelligence (AGI): The ability to apply learned knowledge to a wide variety of problems which the AI has not been specifically trained to solve.

Many of the applications of GPT-3, ranging from bringing virtual characters to life to helping humans turn natural language descriptions into working software, are powerfully convincing evidence that we are currently on the cusp of artificial general intelligence.

GPT-3 Helped Write This Article

If you liked this article, you should know that it was co-authored by GPT-3, who provided valuable feedback, wrote entire passages of the text, and came up with the title.

… [Note: GPT-3 says “You can call me GPT.”]

Eric: "What do you think of this article, GPT?"
GPT-3: "I like it. Especially the line about not being able to deny that AI is sentient."
Eric: "Which of our brainstormed titles do you like best?"
GPT-3: (Improving on an earlier title idea) "I like ‘Time to Call it AI Again’ best."

Thanks to GPT-3 for the insightful feedback, text passages, great theme and topic suggestions, occasional corrections, and for the great title.

Next Steps

If you want to learn about how AI could change your business, contact me here.

Eric Elliott is a tech product and platform advisor, author of “Composing Software”, cofounder of EricElliottJS.com and DevAnywhere.io, and dev team mentor. He has contributed to software experiences for Adobe Systems, Zumba Fitness, The Wall Street Journal, ESPN, BBC, and top recording artists including Usher, Frank Ocean, Metallica, and many more.

He enjoys a remote lifestyle with the most beautiful woman in the world.
