
Google is done. Here’s why OpenAI’s ChatGPT Will Be a Game Changer

Only a day after its release, the internet is already flooded with content about OpenAI's latest tool. Let's dive into why it's going to be a game changer, shall we?

Luca Petriconi
4 min read · Dec 1, 2022


Yesterday, on November 30, Sam Altman announced the launch of ChatGPT. Since then, it has taken the internet by storm, and not without reason. As you will see, this new AI could be a complete game changer.

The model is a fine-tuned version of the GPT-3.5 architecture and interacts in a conversational way, just like a chatbot. It was trained using Reinforcement Learning from Human Feedback (RLHF). In addition, the team at OpenAI used supervised fine-tuning to improve the model. There's a big difference, though:

Not only does it emulate different styles (see below), but it also remembers what you, as a user, have told it before and can bring it back up later in the conversation.
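
To make the RLHF part a bit more concrete: the rough idea is that humans compare model outputs, a reward model is fitted to those preferences, and the model is then nudged toward outputs the reward model scores highly. Below is a deliberately tiny toy sketch of that loop in Python. It is my own simplification over three canned responses, not OpenAI's pipeline; the real thing fine-tunes a large language model (with PPO, per OpenAI's write-up) and uses a neural reward model.

    # Toy sketch of the RLHF loop (a simplification for intuition, not OpenAI's pipeline):
    #   1) a "policy" assigns probabilities to candidate responses,
    #   2) a stand-in "human" compares pairs of sampled responses,
    #   3) a reward model is fitted to those comparisons (Bradley-Terry style),
    #   4) the policy is nudged toward responses the reward model scores highly.
    import math
    import random

    CANDIDATES = ["curt answer", "helpful answer", "rambling answer"]   # invented examples
    TRUE_QUALITY = {"curt answer": 0.2, "helpful answer": 1.0, "rambling answer": 0.5}

    policy_logits = {c: 0.0 for c in CANDIDATES}   # start with a uniform policy
    reward_scores = {c: 0.0 for c in CANDIDATES}   # learned reward model

    def softmax(logits):
        z = max(logits.values())
        exps = {k: math.exp(v - z) for k, v in logits.items()}
        total = sum(exps.values())
        return {k: v / total for k, v in exps.items()}

    def sample(probs):
        r, acc = random.random(), 0.0
        for k, p in probs.items():
            acc += p
            if r <= acc:
                return k
        return k  # numerical fallback

    def human_prefers(a, b):
        # stand-in for a human labeler: prefers the intrinsically better response
        return a if TRUE_QUALITY[a] >= TRUE_QUALITY[b] else b

    for step in range(3000):
        probs = softmax(policy_logits)

        # 1) collect a human comparison between two sampled responses
        a, b = sample(probs), sample(probs)
        if a == b:
            continue
        winner = human_prefers(a, b)
        loser = b if winner == a else a

        # 2) reward-model update: one logistic (Bradley-Terry) step on the comparison
        p_winner = 1.0 / (1.0 + math.exp(reward_scores[loser] - reward_scores[winner]))
        reward_scores[winner] += 0.1 * (1.0 - p_winner)
        reward_scores[loser] -= 0.1 * (1.0 - p_winner)

        # 3) policy update (REINFORCE-style): push the policy toward responses
        #    the learned reward model scores above the current average
        sampled = sample(probs)
        baseline = sum(reward_scores[c] * probs[c] for c in CANDIDATES)
        advantage = reward_scores[sampled] - baseline
        for c in CANDIDATES:
            grad = (1.0 if c == sampled else 0.0) - probs[c]
            policy_logits[c] += 0.05 * advantage * grad

    print("learned policy:", softmax(policy_logits))  # should favor "helpful answer"

The point is only to show the moving parts: preference data goes in, a reward model gets fitted, and the policy is updated toward what humans liked.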

Why This Could Mean the End Of Google

… or Stack Overflow.

We live in what feels like rapidly evolving times (even though Peter Thiel might disagree). New AI tools are coming out on a fairly regular basis now. With the rise of generative adversarial networks, or GANs, and generative AI tools like DALL-E or Stable Diffusion, we've seen some mind-blowing things. Being able to create an image from a mere text input almost feels normal now because we've grown increasingly used to these kinds of things.

Here’s why ChatGPT is different, though.

The way you interact with ChatGPT feels different because you speak to it directly, as if it were another person. There's an exchange going on, a back-and-forth. In fact, it makes you second-guess whether you're talking to a person or an AI.

There’s one more aspect that makes it feel different: It remembers what you've previously said. This means you can iteratively work towards the desired result. In other words, you’re having a conversation.
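
ChatGPT itself is only usable through the browser for now, so there's no official API for it to show. But to illustrate what "remembering the conversation" can amount to mechanically, here is a small Python sketch that keeps a running transcript and resends it on every turn, using OpenAI's existing Completion endpoint with the text-davinci-003 model; the prompt format and model choice are my assumptions for illustration, not how ChatGPT works internally.

    # Sketch of a conversational loop with memory. Assumes the `openai` Python
    # package and the existing Completion endpoint (text-davinci-003); ChatGPT
    # itself has no public API at the time of writing, so this is illustrative.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    transcript = []  # the "memory": every turn of the conversation so far

    def chat(user_message: str) -> str:
        transcript.append(f"User: {user_message}")
        # Nothing magical about memory here: the full transcript is simply
        # prepended to every new request so the model can refer back to it.
        prompt = "\n".join(transcript) + "\nAssistant:"
        response = openai.Completion.create(
            model="text-davinci-003",
            prompt=prompt,
            max_tokens=300,
            temperature=0.7,
            stop=["User:"],  # stop before the model invents the next user turn
        )
        answer = response["choices"][0]["text"].strip()
        transcript.append(f"Assistant: {answer}")
        return answer

    print(chat("Explain what a game changer is in two sentences."))
    print(chat("Now rewrite that so a ten-year-old gets it."))  # relies on the first turn

Whether ChatGPT handles context exactly like this under the hood isn't public. The point is simply that a back-and-forth with memory is what lets you iterate toward the result you want.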

Sam (@sama), OpenAI’s CEO, puts it like this in one of his tweets:

“language interfaces are going to be a big deal, i think. talk to the computer (voice or text) and get what you want, for increasingly complex definitions of “want”! this is an early demo of what’s possible (still a lot of limitations — it’s very much a research release).”

He adds that this interface seems plausible not only for this type of application but also for future, more complex applications. And if you think about it, it makes sense: conversing is the natural way for humans to interact with one another and convey their thoughts. Or is it thinking? (I'm writing this at a moment when Elon Musk claims Neuralink is "six months" away from its first human trial.)

Let's stick with conversing for now. Say you want to write an article on a given topic. You could simply ask ChatGPT to write it for you and, depending on the output, refine and modify it from there. That's exactly what Twitter user @goodside did.

Yes, It Writes Code, too

There's been a lot of recent hype (or worry?) around AI-assisted coding tools like GitHub Copilot. But given the amount of detail in ChatGPT's output, the fact that it writes code, too, almost seems like a side effect.

It’s impressive to see that it combines text and code — with formatting and commenting and all — in its output.

What amazes me most is one fact: it doesn't just output the code, it also explains it. For example, imagine you're debugging: you give it code as input and ask it to find the bug.
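
The output could look something like this (a hypothetical illustration of the kind of answer it gives; the buggy average function is my own example, not a real transcript):

    # Hypothetical input you might paste into ChatGPT, containing a subtle bug:
    def average(numbers):
        total = 0
        for i in range(1, len(numbers)):   # bug: skips the first element
            total += numbers[i]
        return total / len(numbers)

    # The kind of explanation ChatGPT would give back (my paraphrase): "The loop
    # starts at index 1, so numbers[0] is never added to total. Use
    # range(len(numbers)), or simply sum(numbers), and the function returns the
    # correct average."
    def average_fixed(numbers):
        return sum(numbers) / len(numbers)

    print(average([2, 4, 6]))        # 3.33... (wrong)
    print(average_fixed([2, 4, 6]))  # 4.0 (correct)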

Seeing this, you can easily think: "Why still search Stack Overflow?" And you have a point. How many times have you searched Stack Overflow for a very specific question and not found an answer? And, most importantly, how much time did you spend looking (if you found one at all)? There are a few things to keep in mind for now, though.

Not so Fast…

While I don’t question that this is one of the most impressive things I’ve recently seen, it won’t mean the end of Google (just yet).

Of course, there are a few limitations to ChatGPT.

  • For example, it won't tell you about anything from the recent past (i.e., in 2022), since its training data only goes up to Q4 of 2021.
  • It sometimes outputs plausible-sounding but incorrect answers, which is a challenging issue to fix according to OpenAI.
  • It can be excessively verbose and overuse certain phrases.
  • As far as content moderation goes, the model was trained to refuse to output inappropriate content. However, there might be some false positives and negatives.

While the model will be improved upon iteratively, it is already quite impressive. What I find most impressive is how easy it is to interact with and how accurate the outputs seem.

The model is open for anyone to play around with, here. Let me know in the comments below what you asked it and what your thoughts are.

For more on data-related topics and fun tech stuff, you can find me on Twitter and LinkedIn.

