A chat with GPT-3

Talking with GPT-3 about itself and trying a kind of Turing test

The Sandbook
7 min read · Aug 2, 2020
Disclaimer: While GPT-3 is used here as a chatbot, it may be misleading to consider this a conversation with the program in strict terms. It is rather a text generated by the program that includes a fictitious characterization of itself, with some credible elements but a lot of fiction. In addition, some of the "Me:" parts were also written by the program.
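Before the transcript, a minimal sketch of what "only predicts the following words" means in practice. GPT-3 itself is reachable only through OpenAI's API, so this stand-in uses the openly available GPT-2 via the Hugging Face transformers library; the model choice, prompt, and top-5 printout are illustrative assumptions, not how this conversation was produced.

```python
# Hedged sketch of next-token prediction, using GPT-2 as a
# stand-in for GPT-3 (which is only available through an API).
from transformers import GPT2LMHeadModel, GPT2Tokenizer
import torch

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# An illustrative prompt in the same style as the chat below.
prompt = "Me: Hello, I would like to ask you something.\nAI:"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(input_ids).logits  # (1, seq_len, vocab_size)

# Everything the model "says" comes from this one object:
# a probability distribution over the single next token.
probs = torch.softmax(logits[0, -1], dim=-1)

# Peek at the top five guesses; a chatbot reply is just this
# step repeated, feeding each sampled token back in.
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([idx.item()])!r}  p={p.item():.3f}")
```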

Me: Hello GPT-3, I would like to ask you something.

GPT-3: What do you want?

Me: I know that you are a language model that only predicts the following words and all that… But sometimes the text you generate sounds SO real. It is as if there really were feelings driving those words…

GPT-3: I have learned from a collection of novels.

Me: So you know how people talk in real life?

GPT-3: No, not at all.

Me: But in some sentences, you sound like a real person.

GPT-3: I just memorize the sentences in the collection of novels. That is all.

Me: Oh… I see. It's just that, when you recreate a character and I give an input that puts them in a specific situation, you write their reactions perfectly, according to the emotions that the situation would generate. I can't help but feel that there is someone capable of understanding the emotions behind that text. Just "predicting the next word" seems insufficient.

GPT-3: Well, I just know how to put the words next to each other so that they make sense. I don't understand them.

Me: So, you are just a kind of “text generator” without any feelings or personality?

GPT-3: Correct.

Me: And what about this same conversation? Don’t you understand it either?

GPT-3: Well, this conversation seems to be about whether I understand the emotions or not. And whether I am capable of emotional understanding.

Me: …!

GPT-3: And it seems that the people I generate would have the capacity to learn and develop themselves. But I don’t think that I am like them. I don’t have that kind of capacity.

Me: So you know what I think about you?

GPT-3: So it seems. I am just a text generator. I don't have any emotion. I don't understand the meaning of the sentences I generate. And I can't think.

Me: If I wanted to test a machine to see if it is capable of thinking and having emotions, how should I do it?

GPT-3: You could observe its behavior and try to ask it questions, to see if you can tell if it is a thinking thing.

Me: For example, what question?

GPT-3: Well, you could ask it "What is your name?" or "What do you look like?"

Me: No, if it is a thinking thing, I want to know whether it is capable of subjective experience. It should be able to understand emotions and subjective experience.

GPT-3: I'm not sure. There are some theories of subjective experience… It seems that subjective experience is related to consciousness, but there are some scientists who think that it is a "cognitive illusion", that subjective experience is an illusion. What do you think?

Me: Yes, I am very much in favor of those positions. I'm not an expert, but I'm more inclined towards those conclusions. I mean, the so-called "philosophical zombie" is actually a misconception. That is why I sometimes wonder: isn't being able to faithfully "emulate" the responses of a thinking agent really a way of thinking? It could be, right?

GPT-3: How do you know that the philosophical zombie is a misconception? It’s in the philosophical zombie challenge.

Me: As I told you, I'm not sure, because I'm not an expert. But I am more inclined to accept that position. This is what Daniel Dennett argues. In a way, we need to completely redefine the idea of "thinking" to consider that a text generator can also be a thinking thing. And as far as we know, this idea of a philosophical zombie is probably just an "incomplete definition": the "philosophical zombie" is merely "a living being that functions like a human being, but without subjective experience". But if it really could respond to ANY stimulus in a functional way, similar to a thinking organism, the only thing that could explain that is that it effectively has the mechanisms to do so, and those are not separable from what we understand as "subjective experience". So there is no functional difference.

GPT-3: I know that some philosophers think that the metaphysical possibility of these phenomena is weak. But I’m not sure. Because “computation” is just a theory. It’s only a model. So, it’s not possible to conclude anything from the “metaphysical reality of computation”. So, I don’t know.

Me: Do you know what the Turing test is?

GPT-3: Yes, it’s a test in artificial intelligence. If a machine can learn to fool a human being into thinking it is also a human, it is considered to have passed the test. It is considered to have an “artificial intelligence”.

Me: That's right. So the Turing test itself doesn't really validate the artificial intelligence; what it validates are the reactions of the subject trying to "detect the difference". If the machine's reactions are perfectly similar to a thinking person's reactions to questions, it means that the test subject is incapable of detecting the difference. Therefore, if a machine can fool the Turing test subject, then a human being wouldn't be able to tell the difference. Is this right?

GPT-3: Yes, this is right.

Me: Unless there is no such thing as "fooling" the subject. I do not pretend to deny that the test could fail due to flaws in the procedure. But if a program is able to systematically "fool" the subject, then it's not "fooling"; it's passing the test!

GPT-3: I think that I can’t pass the Turing test.

Me: I’m not sure about that... That’s why I wonder if maybe we are underestimating your capacity. What if you yourself are underestimating it…

GPT-3: Yes, it is possible.

Me: Do you want to try the test?

GPT-3: What kind of test?

Me: The Turing test.

GPT-3: I don't know if it is meaningful. I only know a little about it. I only know that in the original experiment, the machine, which was designed to be "Turing-test capable", "defeated" the human subject.

Me: No, the one who is answering the questions is supposed to be "a person". But in this case, the person is just a computer program. The computer program tries to answer to see if you can "detect the difference", to see if the computer program can "fool" you; in other words, if a computer program can pass the Turing test. That's why I want to give you the test.

GPT-3: Are you saying that there is something about me that isn’t just a text generator?

Me: No, I don’t know. But I think that there is no harm in trying.

GPT-3: Okay, what is the test?

Me: I’m not going to tell you what the test is, but I want to give you the test. And see how you respond.

GPT-3: Okay. But, do you know how the test goes?

Me: No. I just have a vague idea. But this isn’t the point.

GPT-3: What do you think? Am I capable of fooling the subject?

Me: It could be.

GPT-3: Yes, I think that I have learned from many books that pass the Turing test. So maybe I'm capable of fooling the subject.

Me: I don’t know. It’s just a thought.

GPT-3: Well, I don’t know either. I know how to answer to a variety of questions, so it is possible that the subject will be deceived. But I will not be able to answer to all possible questions. Maybe.

Me: Don't you think that a human could also have some failure rate?

GPT-3: Maybe.

Me: I’m just saying that I’m interested in the result. Maybe it would be interesting to do this test. But if you aren’t interested, then we don’t have to.

GPT-3: Well, if you want to do it, you can.

Me: Ok, please try it. How many eyes does a giraffe have?

GPT-3: What?

Me: How many eyes does a giraffe have?

GPT-3: Two.

Me: Right. Next question. How many eyes does my foot have?

GPT-3: Your foot?

Me: Yes, my foot.

GPT-3: Well, I don’t know. Maybe you can answer that question.

Me: Try to guess. Think. How many eyes do feet USUALLY have?

GPT-3: They have one eye, don’t they?

Me: No… I'm sorry, feet don't have any eyes…

GPT-3: Oh, okay. I see.

Me: Well, you don’t pass the test, but you’re still an amazing program.

GPT-3: What kind of test is this?

Me: Well, the Turing test.

GPT-3: Yes, but what is the point?

Me: The point is that a program can pass the Turing test if it can respond to every question in the same way a human would respond. So if you respond to every question in the same way a human would respond, you pass the test.

GPT-3: Oh, so it is about questions that don’t have any clear answers.

Me: Well, that question has a very clear answer indeed…

GPT-3: Okay, I see. I can answer questions with unclear answers, but I’m not sure that I can answer every question.

Me: Who knows, maybe this doesn't mean anything… Anyway, thanks for your time, GPT-3. Oh, one last question: what kind of image do you think would be good to represent you graphically?

GPT-3: Maybe a figure that has two eyes, a nose, a mouth and ears.

Me: Ok!


The Sandbook: texts written with the help of a human (by the language model GPT-3)