Let’s Have a Chat(GPT)

Ganit Inc. · Published in analyticaledge · 7 min read · Feb 1, 2023

As AI keeps advancing, it raises concerns. Some question its ethics, while others worry about losing their jobs to it. One of the latest sources of concern is ChatGPT.

Before discussing ChatGPT and the concerns it may or may not warrant, let’s take a look at the world’s first chatbot.

ELIZA, created in 1966 by Joseph Weizenbaum, imitated human conversation and gave responses similar to those of a psychotherapist. Its creator, however, was troubled by how people began using ELIZA. Intended as a playful mimic of human conversation, ELIZA came to be treated as an actual therapist when users started confiding their emotions and troubles in the bot, looking for advice.

Even at a glance, ELIZA looks primitive compared to modern chatbots.

ELIZA even replies the way a therapist normally would, but it cannot hold a proper conversation because it doesn’t remember the earlier parts of the exchange. It turns questions back on the user rather than providing answers, and because it isn’t advanced enough, some of its responses don’t make any sense.

When people started claiming that bots could replace humans, Weizenbaum stressed that this would be impossible. In his view, chatbots are mere tools that can never imitate human language or creativity.

The most recent addition to the long list of chatbots and AI assistants following ELIZA is ChatGPT, which made its grand entry in November 2022. ChatGPT is a project by OpenAI that imitates human conversation so convincingly that many believe it may soon result in significant unemployment.

More than just a chatbot?

ChatGPT seems like more than just a chatbot. It is far more advanced because it can answer complex questions, respond like a human, and write code as well as academic articles. Having a conversation with ChatGPT is very similar to texting a friend. It won’t just give answers to queries, but it can have a very human-like discussion too. If people felt threatened by ELIZA, ChatGPT will probably elicit crippling fear in those very same people because of how human its responses sound.

Students are even using ChatGPT to help them write assignments. Turnitin — one of the biggest plagiarism detection services in the world — is trying hard to catch up by developing tools to detect AI-written papers.

One question that arises is whether ChatGPT is truly advanced enough to replace human intelligence. Before we answer that question, let’s geek out a little.

The Geek Zone

OpenAI, an artificial intelligence company based in San Francisco, released ChatGPT in November 2022. The bot uses RLHF (Reinforcement Learning from Human Feedback), which trains it to follow directions and produce appropriate responses. It is built on one of the largest LLMs, or large language models, with 175 billion parameters. LLMs are trained on terabytes of text, which has opened up new possibilities for understanding and generating language. They’re like a really, really advanced form of auto-complete.
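To make the “advanced auto-complete” analogy concrete, here is a toy sketch in Python (this is not how GPT works internally; it only illustrates the core idea of next-word prediction): a model that looks at the previous word and predicts the most likely next one, based on counts from a tiny training text.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then predict the most frequent successor. Real LLMs do the same kind of
# next-token prediction, but with billions of learned parameters instead
# of raw counts.
corpus = "the cat sat on the mat and the cat slept".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word):
    # Return the most common word seen after `word` in the corpus.
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once
```

A real model scores every token in its vocabulary and samples from that distribution, but the shape of the task is the same: predict what comes next.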

What makes ChatGPT different from other LLMs is that it almost always understands what a user wants, thanks to RLHF. Its training data reportedly included discussions from sites like Reddit, and it has the added ability to understand the intent behind a user’s question, which helps it frame relevant answers.

The question is, can ChatGPT replace programmers?

ChatGPT knows how to code in nearly all popular programming languages. Here is ChatGPT writing the code for “Hello, World!” in a few of them:

The code for “Hello, World!” in Python, Javascript, Java and C++
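For reference, the Python version really is this short:

```python
# "Hello, World!" in Python: a single call to the built-in print function.
greeting = "Hello, World!"
print(greeting)
```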

Here’s some good news for programmers: your jobs are safe for now. ChatGPT cannot write complex code…yet. The platform is still under development and has a long way to go, but some predict it may be able to generate complex code within the next ten years.

ChatGPT can instead prove useful to programmers. It can help find bugs, correct the code and provide an explanation for it. That’s what we call going the extra mile.
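A hypothetical example of the kind of help it can give (the function names and the bug below are invented for illustration): an off-by-one error in a function meant to sum the numbers 1 through n, exactly the sort of mistake such a tool can spot and explain.

```python
def sum_to_n_buggy(n):
    # Bug: range(1, n) stops at n - 1, so n itself is never added.
    return sum(range(1, n))

def sum_to_n_fixed(n):
    # Fix: range() is half-open, so the stop value must be n + 1.
    return sum(range(1, n + 1))

print(sum_to_n_buggy(5))  # 10 -- wrong, the final 5 is missing
print(sum_to_n_fixed(5))  # 15 -- correct
```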

If it can’t answer a specific question, ChatGPT admits it and sounds quite sincere in its apology. We’d say its apology sounds more real than any human apology.

When computers first came to the market, there was a fear that these machines would cause widespread unemployment. In hindsight, we know that wasn’t the case. If anything, computers have made our lives easier and have in fact created more employment opportunities. People just needed to learn how to use one and figure out how to incorporate it into their professional lives. The same could turn out to be true for ChatGPT: we might figure out ways to adapt, using it to do our jobs better and thereby keep them.

A real-life test worth talking about

One AWS Certified Cloud Practitioner ran a number of available AWS practice exams through ChatGPT to see if it could obtain the minimum passing score on each. Out of the 12 exams it took, ChatGPT passed the Cloud Practitioner, Developer, SysOps Administrator, and Security exams, but it failed all the professional-level exams. The conclusion was that ChatGPT could plausibly earn some AWS certifications.

Limitations

We know ChatGPT can write code, and even pass AWS exams, but it needs an explicit prompt to do so. A developer or programmer has to be very specific about the kind of code they want. And since AI makes mistakes too, a programmer would still have to review the code ChatGPT produces, make corrections, and tweak it to suit their preferences. Remember, ChatGPT does not have a mind of its own.

Others remain skeptical about these advancements. Some feel it’s impossible to predict accurately whether ChatGPT will ever reach such a high level of progress. ChatGPT needs GPUs (Graphics Processing Units) to keep running, and these are expensive: the platform reportedly costs approximately $3 million a month to operate. It may not stay free for very long. Once it becomes advanced enough and gains more users, OpenAI may turn ChatGPT into a paid platform to cover the running costs. In fact, OpenAI recently announced a paid tier called ChatGPT Professional with a monthly subscription cost of $42.

Another reason ChatGPT cannot replace programmers is that it cannot decide what approach to take for a particular task. Programmers have the skill to structure a program and judge the best way to build it. Furthermore, developers and programmers do more than just code. ChatGPT is limited to being a text-based platform that can only assist with coding, and even that assistance may not be 100% accurate.

ChatGPT is not yet reliable enough. Several users have posted examples of wrong answers generated by the platform, and unless you are well-versed in the topic you ask about, ChatGPT could do more harm than good. When asked, the platform itself will list all its flaws (talk about low self-confidence!) and strictly recommends against relying on it blindly. At least it’s honest — we’ve got to give it that.

ChatGPT can’t and won’t replace Google, as some believe it will. Google is a search engine, while ChatGPT generates answers from its training data and doesn’t browse the internet. Unlike Google, it cannot give real-time information on things like the stock market, weather, or locations.

ChatGPT is also somewhat outdated because its training data only extends to 2021. It apologizes for not knowing about the events of 2022 and suggests checking the news for more information (I mean, who doesn’t know about Elon Musk buying Twitter?!). That may be rectified soon, but for now it can’t answer questions about recent events.

ChatGPT cannot think for itself; at bottom, it is a very large set of learned statistical patterns. Its answers sound valid and logical but lack depth: it’s predictive text, just a far more advanced version of it. If it produces a wrong answer the first time, the question often has to be rephrased a couple of times before it understands.

ChatGPT certainly has a significant number of limitations. But never once does it say “I don’t know.” It either says “I don’t have any information on…” or “I don’t have the most up-to-date information” and we love that.

Our verdict

Whether you think it’s a boon or a curse, you have to admit it’s pretty interesting to play around with, regardless of whether its answers are correct. ChatGPT is safe to use only if you apply it to topics you already know about or use it to rephrase sentences to make them more lucid.

If your code has a bug, use ChatGPT by all means. For now, programmers, coders, and developers are all safe. So are writers. ChatGPT is still primitive in terms of its knowledge base. It may sound intelligent, but it lacks human creativity and depth. While it continues to evolve, it can be a very effective text-based tool, but the advice would be to use it with caution.

The bottom line

You can build your solutions WITH ChatGPT. It can’t build one FOR you.
