Why Do We Think AI Knows Stuff?

It is because of our knowledge of language

Martina Wiltschko
Brain Labs

--

AI hallucinates. It makes stuff up. This is widely known; you can look it up on Wikipedia. Yet we ask AI all sorts of things, unimportant and important alike. And we carry on as if there were no problem. As if AI knows stuff.

But why?

I think I know why. I think there is a linguistic reason.

Let me start with a little detour about the expression of knowledge.

How do we express certainty?

Let us abstract away from all sorts of philosophical problems that come up when we try to define knowledge. And let us assume this:

We think we know something when we are certain about it. (Never mind those who disagree with us.)

For example, when I am outside and I see and feel rain pouring on me, I am certain that it is raining. And I would say:

“It’s raining.”

Image created on Canva by Martina Wiltschko

I express my certainty with a basic declarative sentence.

When I am not certain about something, then I change the way I talk.

Suppose I am lying in bed inside my house, and the curtains are closed. Suppose I vaguely hear rain pouring on my roof. Then I would add something to this basic declarative sentence.

“I believe it’s raining.”


Or I could add a modal verb (like must) or an adverb (like apparently).

“It must be raining.”

“Apparently, it’s raining.”

I would never add these forms when I am absolutely certain. Just imagine someone standing in the rain, soaking wet, and asserting: I believe it’s raining. (Maybe they are being funny, ironic, or making some kind of clever point, but they are not being sincere.)


When we are certain, we use the most basic, simple declarative sentence: “It’s raining.”

I can, of course, express certainty a little more elaborately by explicitly adding that I know what I’m talking about.

“I know it’s raining.”


But it’s a little weird to say this when it’s obvious that I know.

Hmm… why would that be weird? Because I would only say it this way when I want to contrast my certainty with someone else’s (and when I know they are wrong).


Otherwise, I just use the simple declarative sentence: “It’s raining.” That’s the quintessential expression of knowledge. (And from what I can tell, this is true in all languages.)

So now we are almost ready to get back to our initial question: Why do we think AI knows stuff?

First, we have to consider another question.

How does AI tell us stuff?

Next time you talk to ChatGPT, pay attention to how it tells you stuff. Here is an example I generated as I was writing this post. I asked ChatGPT4 what the weather was like, and it told me it needed my current location. So I told it that I was in Barcelona. And then it told me (quite elaborately) that it was raining.

“Currently, in Barcelona, it is raining with a temperature of 55°F (13°C). The weather includes passing clouds alongside the rain.”

Generated with ChatGPT4 by Martina Wiltschko on February 17th 2024

But what was actually true was that it was sunny and 18°C. I know because, as I was talking to ChatGPT, I was sitting in a café in Barcelona, looking out the window, and I could see that the sun was shining. This was further confirmed by the weather app on my iPhone, which also showed the actual current temperature.

Pictures taken by Martina Wiltschko on February 17th 2024

So I know it was not raining! And I let ChatGPT know that it was wrong, and so it apologized:

“I apologize for any confusion or incorrect information. The weather can change rapidly, and my response may not accurately reflect real-time conditions. For the most current weather updates, I recommend checking a reliable local weather service or a weather app for real-time information. If there’s anything else I can assist you with, please let me know!”

Generated with ChatGPT4 by Martina Wiltschko on February 17th 2024

Note that its developers know that “ChatGPT can make mistakes” and that we should “consider checking important information.” This is stated right underneath the chat window!

So, now I ask myself this: if ChatGPT “knows” that it does not know with certainty that it is raining in Barcelona, then why does it use the basic declarative sentence? Why does it not tell me that it believes that it is raining?

Consider what ChatGPT tells me when I ask if it believes what it tells me. It tells me that it doesn’t have beliefs or consciousness! And that

“it’s always a good idea to consult multiple sources or a human expert”

Generated with ChatGPT4 by Martina Wiltschko on February 17th 2024

AI doesn’t have beliefs because its responses are based on a vast amount of information it’s been trained on. AI doesn’t think the way humans do!

Generated with ChatGPT4 by Martina Wiltschko on February 17th 2024

So, ChatGPT does not believe things, but it also doesn’t know things. Experts call AI’s mistakes “hallucinations,” and they are well documented.

But it still tells us stuff as if it knew. Well, at least, it tells us stuff the way humans tell us stuff when they know. AI uses simple declarative sentences. And because AI is not capable of “believing,” it will not use the kind of language we humans use when we are not certain.

And now we are finally ready to answer our question.

Why do we think AI knows stuff?

We think AI knows stuff because of the way human language works.

It is part of our knowledge of language that we express certainty by using basic declarative sentences.

It does not matter what our conscious knowledge tells us or what the developers of AI tell us. We cannot help but interpret language in this way. Because AI talks as if it knows stuff, we think that it does in fact know stuff. (Even when we know that it doesn’t.)

That’s the power of our unconscious knowledge of language!

So it is up to our critical thinking abilities to use AI with caution. Precisely because it sounds as if it knows stuff, it is crucial to remember that it does not!

--


Martina Wiltschko
Brain Labs

I am a linguist. I study the knowledge that underlies language and how it allows and constrains social interaction. And I really like communicating my findings.