THOUGHTS ON AI

What is ChatGPT, Really?

Probably not what you think

Kiall Hildred
The AI Academic

--

Photo by Laura Chouette on Unsplash (cropped)

Yes, another #!€%ing article about ChatGPT.

Although this one is not about it coming to take our jobs or putting Google in the ground, and it’s definitely not a writer’s battle cry that it can have my words when it takes them from my “cold, dead hands”.

Well, kind of. But it’s more an analysis of what the thing actually is and what it’s not.

Any AI has to be evaluated against what it was actually built to do.

If I try to use a bus as a submarine, it’s not going to meet my expectations, but that’s not the bus’s fault. That said, this is not, in essence, an evaluation of ChatGPT.

For one very important reason: it will change.

Any cross-sectional evaluation of an AI is equivalent to sampling a batch of brownies before they’re cooked: some people think they’re good before they’re even in the oven, others will deny they’re cooked even though they’re charring, others are in the corner eating raw flour in fear or off trying to sell the brownies as a weight-loss solution.

ChatGPT will get better. So any evaluation made now will be horribly outdated in a week, a month, or a year.


What’s The Point?

The main point of this article is to think about what it is currently good at and what it will get better at.

For that, I asked ChatGPT to tell me about itself. Specifically, I asked it “How do you work?”. This was its response:

“I am a language model trained by OpenAI. I was designed to process and generate human-like text. I do this by analyzing and understanding patterns in large amounts of text data, and then using that understanding to generate responses to new prompts.

To generate text, I use a process called “prediction,” in which I consider the words that have come before a given point and use that context to try to predict what word or words might come next. I do this using a statistical model that has been trained on a large dataset of texts, and the quality of my responses is determined in part by the size and diversity of the dataset that I was trained on.

I am able to provide responses to a wide range of prompts and can carry on conversations on a variety of topics. However, my responses are generated based on patterns in the data that I was trained on, and I do not have personal experiences or access to the internet, so my responses are limited to the information that I have been trained on. Is there anything else you would like to know about how I work?”

Screenshot. ChatGPT’s response to the question “How do you work?”

That was a few weeks ago. I was going to ask for further comment as part of this article, but it’s currently at capacity.

Its response tells us a few very important things about what it is and what it can and can’t do:

  1. It doesn’t have access to the internet, yet.
  2. It only knows the data it was trained on.
  3. It’s not a writer, it’s a predictor.
  4. It does not think. (I’m guessing.)
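
Point 3 is the heart of it. To make “prediction” a little more concrete, here is a minimal, purely illustrative sketch in Python: a toy word-level model with made-up probabilities (nothing like OpenAI’s actual architecture), looping through “look at the last word, score the candidate next words, pick one”.

# A toy illustration of next-word prediction. This is NOT how ChatGPT is
# implemented; the real thing is a large neural network over tokens, not a
# hand-written table. The loop is the same basic idea, though: given the
# words so far, score the candidates for the next word and pick one.
import random

# Made-up probabilities of the next word given the previous word.
next_word_probs = {
    "the":   {"cat": 0.5, "dog": 0.3, "speed": 0.2},
    "cat":   {"sat": 0.6, "slept": 0.4},
    "dog":   {"barked": 0.7, "slept": 0.3},
    "speed": {"of": 1.0},
    "of":    {"sound": 1.0},
}

def generate(first_word, max_words=6):
    words = [first_word]
    for _ in range(max_words):
        options = next_word_probs.get(words[-1])
        if not options:                      # no known continuation: stop
            break
        candidates = list(options.keys())
        weights = list(options.values())
        words.append(random.choices(candidates, weights=weights)[0])
    return " ".join(words)

print(generate("the"))   # e.g. "the cat sat" or "the speed of sound"

Scale that lookup table up to billions of learned weights, and a context of thousands of words instead of one, and you get something in the neighbourhood of what the quote above describes: no ideas, no intentions, just a very good guess at what comes next.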

It’s not a search engine.

In the future it will most likely have access to the internet. That’s probably what has Google shaking in its boots, because that version of ChatGPT could genuinely hurt Google.

But the threat it poses to Google depends on how people use Google.

If I ask Google for a specific piece of information, like the speed of sound in aluminium, I will most likely get a single answer at the top, with a list of other sources related to my question underneath it, as below.

Screenshot. Google search results for “what’s the speed of sound in aluminium”.

Of course, Google doesn’t know and can’t tell me if this information is correct. If I want to verify that information, I would need to go to the website the information came from and determine whether it was a reliable source.

(If I really wanted to verify it, I would have to get a piece of aluminium and do the measurements myself.)

You can currently ask ChatGPT for that first kind of information, and it will give you a response. But the veracity of that information is on no more (and probably far less) stable ground than what Google dishes up.

If ChatGPT hasn’t been trained on the data you’re asking for, then how would it produce it? And if it has been trained on that data, how does it know it’s been given the correct data? How do you know it’s been given the correct data?

If it’s producing “creative” sentences (in the sense of producing semi-unique strings of text every time you ask it something), which would be the hope for an AI, then how do you verify that information?

You’re probably going to Google it anyway just to check.

Of course, if ChatGPT has access to the internet, and you ask it for specific information, well then it can give you the information Google would give you, but it can’t really do a better job than Google, at least from the perspective of a discerning fact-finder.

It might be able to present the information in a more user-friendly way, but how can it verify the information? It can’t. You have to do that — Google, ChatGPT, Ask Jeeves or otherwise.

But this isn’t the only kind of information I or anyone else might use Google for. If, instead, I wanted to find restaurants in my area, getting a single answer, or even just a list of restaurants from a single website, isn’t really what I want.

This is even truer if I look up “funny cat videos”. I want to peruse the selection of websites offering cat videos. I don’t just want a single option, or even a list of links. I want to have the opportunity to find something incredible I wasn’t expecting or didn’t know existed.

Often, I use Google like I use YouTube. I rarely search for an exact thing; I just want to see what’s on TV.

Again, buses and submarines.

I’m not calling out ChatGPT for being a terrible replacement for Google — ChatGPT isn’t Google; that’s not what it was built to be.

It’s a highly sophisticated chatbot, not a search engine.


It’s also not a writer.

I know I said this wasn’t a writer’s call-to-arms. It’s not. I don’t mean that it can’t sound like a writer. It’s really good at responding to prompts in a way that is almost indistinguishable from a human. Almost.

But prediction, the method it uses to produce text, is not how humans write. That’s why it’s called ChatGPT, and not WriteGPT.

It’s good at having conversations, because conversations are fairly predictable. And it’s good at writing things like cover letters, articles and code, because these forms of writing are wildly predictable — cover letters are all the same.

The limits of ChatGPT as a writer become apparent when you ask it to write a poem.

Writing a poem by predicting what’s likely to come next in a poem is a horrible way to write a poem. It’s a horrible way to write most creative forms of writing, but for poems it’s especially bad.

When you tell it “Write me a poem” it doesn’t look out the window, think about its strict Victorian upbringing and the long, cold winters and the many friends it has lost to disease and delinquency, and then return you a sonnet.

It just sees the word “poem” and goes “I have all these instances of poems I’ve been shown. Here’s something that looks like one.”

And from that it mostly writes a list of words that from a distance looks like a poem, but up close is just a paragraph cut into little sections that occasionally rhyme with previous ones.

Screenshot. ChatGPT’s “at capacity” screen: a poem about it currently being at capacity.

Even if you give it access to the internet so that it can draw from more, better and constantly updated sources, if it’s still just doing predictive writing, it’s hard to tell whether it will actually get better at writing proper.

That’s because writing isn’t just a process of producing grammatical sentences; it’s a way of communicating ideas.

So you have to have ideas in order to communicate them.

ChatGPT doesn’t have ideas, because it’s not conscious, and it doesn’t think (I’m guessing).

It’s not a general intelligence.

It’s really good at seeming intelligent, because being able to reliably produce natural-sounding text is a good marker of intelligence.

But if you feed it a riddle it won’t figure it out, because figuring things out isn’t what it does. Of course, if you tell it the answer, it will learn and be able to solve that riddle and similar ones in the future — that’s what it does.

So it may get to a level of sophistication or gain some new technique of producing written language that makes it difficult to tell that it’s not thinking.

But there’s a difference between seeming intelligent and actually being intelligent. At least in the sense that we think of when we think of our own intelligence.


Turing Test

If you’re familiar with computers, computation and intelligence, you might be waving your finger saying “Umm, what about the Turing Test? It clearly passes that.”

Sure, but the Turing Test has already been well surpassed by a lot of chatbots.

Regardless, the test isn’t a perfect measure of intelligence, and the outcome depends on the level of intelligence of the evaluator and their level of scepticism about intelligent systems.

In Turing’s time, it would have seemed like a good measure of an intelligent system because he wasn’t living in a world plastered with “smart” devices and AI.

We’ve managed to make a lot of things that can replicate behaviours we’d previously thought would require general intelligence. But I would guess most of us still aren’t convinced that any current iteration of an AI is intelligent in the general sense.

Of course, you can never know. But I mean, how do I know that my coffee cup isn’t also thinking, or the unfortunate chair I’m sitting on? Or alternatively, how do I know that anyone else around me is thinking?

I’ll leave that one with you.

I’m generally optimistic about AI. And I generally think ChatGPT is incredible, and it’s only going to get better. But I don’t see it as a threat to writing.

No matter how good the writing gets, no matter how smart it seems, no matter how much material it can draw from, no matter how good it gets at sounding like a particular writer, no matter how good it gets at sounding like me, no matter how much of a better writer it is than me, even if it can somehow predict the precise words that will come out of my mouth or my fingers, if you want to read my writing, if you want to know what I’m thinking, you still have to get it from me.


I’ve decided to keep all my writing on Medium free of paywalls. So if you like my writing and want to help me write more, you can buy me a coffee, subscribe, or hire me via Upwork or email: kiall.hildred@gmail.com

