GPT-4 has been severely nerfed: it slacks off and refuses to write a single extra line of code, and OpenAI has stepped in to investigate.

Piyush C. Lamsoge
7 min read · Nov 30, 2023


GPT-4 is once again being piled on by users, this time for being “ridiculously lazy”!

One user wanted to build an Android app that could talk to the OpenAI API in real time.

So he sent GPT-4 a link to an example of the method and asked it to write the code in Kotlin:

Unexpectedly, even after a long back-and-forth, GPT-4 never produced a complete piece of code that would actually run.

Instead, it just kept explaining “what should be done.”

This really annoyed the user, who tweeted: “It could write code two weeks ago; now it can’t.”

The post promptly blew up, with more users piling in:

Finally someone is looking into this.

One after another, people said they had run into the same problem:

According to users, the issue seems to have started with the major GPT-4 update on November 6.

An OpenAI employee has since responded, saying the problem has been passed along to the team.

Just the code, the complete code!

It’s no wonder users were pushed over the edge. The user above had sent GPT-4 a link to the method example and asked it to write the code in Kotlin.

GPT-4’s reply listed seven steps, every one of them explaining “what should be done”:

Code only appeared at the very end, and even then it was just a bare-bones “template”:

The user was patient at first and told it: “No explanation needed, just give me the code, the complete code, code that runs 100% correctly”:

GPT-4 nonetheless launched into more explanation and more examples:

Exasperated, the user cut it off and stressed once more: “Don’t explain. Give me the code”:

This time GPT-4 finally seemed to get it, yet it only lightly tweaked the earlier template and sent that:
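
For reference, the kind of answer the user wanted, complete and runnable, needn’t be long. Here is a minimal Kotlin sketch of a blocking call to OpenAI’s chat completions endpoint (the URL and JSON field names follow OpenAI’s public REST API; the function name and everything else are illustrative, and a real Android app would run this off the main thread and use a proper JSON library):

```kotlin
import java.net.HttpURLConnection
import java.net.URL

// Sends one prompt to the chat completions endpoint and returns the raw JSON reply.
fun askGpt4(apiKey: String, prompt: String): String {
    // Naive escaping for the sketch; use a JSON library in production.
    val escaped = prompt.replace("\\", "\\\\").replace("\"", "\\\"")
    val body = """{"model": "gpt-4", "messages": [{"role": "user", "content": "$escaped"}]}"""

    val conn = URL("https://api.openai.com/v1/chat/completions")
        .openConnection() as HttpURLConnection
    conn.requestMethod = "POST"
    conn.setRequestProperty("Authorization", "Bearer $apiKey")
    conn.setRequestProperty("Content-Type", "application/json")
    conn.doOutput = true
    conn.outputStream.use { it.write(body.toByteArray()) }

    return conn.inputStream.bufferedReader().use { it.readText() }
}

fun main() {
    // Hypothetical usage: the key comes from an environment variable.
    val key = System.getenv("OPENAI_API_KEY") ?: error("set OPENAI_API_KEY")
    println(askGpt4(key, "Say hello in one short sentence."))
}
```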

That was how the whole episode started, and the user had little choice but to post and complain.

Seeing GPT-4’s replies, users howled: “What have they done to you? I’m sorry they nerfed you.”

GPT-4, for its part, looks pretty innocent here 🥺.

Among the stream of complaints, some users even said they had stopped using ChatGPT altogether.

The CEO of AI image editor dingboard, @kache (yacine), had also posted a complaint the day before, one that drew 157,000+ views:

For the past week and a half I’ve been writing “naive” code because GPT-4 doesn’t follow instructions that well.

What a coincidence: counting back the “week and a half” he mentions lands right on the OpenAI boardroom drama around Sam Altman.

kache (yacine) also posted an emotional plea, “Please give me back the old GPT-4”:

Another user replied, “I feel you”:

It used to make good guesses; now it gives me ten reasons why it can’t make good guesses. Last week I yelled “f*ing do it!!” into the chat box a record number of times.

For a while, GPT-4’s “laziness” became the target of a full-blown user crusade.

Wharton professor Ethan Mollick couldn’t take it anymore, tested it himself, and the results seemed to bear the complaints out.

He repeated a series of analyses he had previously done with Code Interpreter.

GPT-4 knew what needed to be done but kept saying, in effect, “go do the work yourself.” What used to take a single step turned into many steps, some of them rather odd.

Even Mollick was left speechless.

So what happened to GPT-4? The underlying cause is still unknown, and users can only speculate.

OpenAI staff: feedback has been passed to the team

Mollick himself is more careful: he doesn’t think even this proves GPT-4 is getting dumber, and he speculates it may be a temporary problem caused by excessive system load.

If you run into this problem on a mobile device, it may be because the mobile version uses a system prompt that tells ChatGPT to generate shorter, more concise answers. My testing was done on the web version.

The issue was also discussed on Reddit, where one post argued that “it’s not that the new GPT-4 is lazy, it’s that we’re using it wrong”:

The post argues that after the major update on November 6, the base version comes with no custom prompt, which leaves GPT-4 without a predefined “path” to guide its behavior.

That makes it very versatile, but by default its output is also somewhat “directionless.”

One suggested fix is to use the custom GPT feature (GPTs) introduced in that update and set up a dedicated GPT for each job, along the lines of the sketch below.
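
For instance, the instructions field of a coding-focused custom GPT might read something like this (the wording is purely illustrative, not taken from the Reddit post):

```
You are a senior Kotlin developer. When asked for code, always return
complete, runnable code with all necessary imports. Never replace logic
with placeholders like "// your code here" or "// rest of the code".
Keep any explanation to a short note after the code.
```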

Other users chimed in with practical tips:

The game-changing aspect of the new GPT-4 is the amount of code it can interpret at once. It can help to explicitly say something like “please write this test completely.” It also helps to state clearly “don’t rewrite code that has already been written,” which saves tokens and lets the model focus on producing new output. I also find that a “think step by step” prompt makes it emit some planning text up front, which gives the output that follows better context.
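
Folded into an API call, those tips would amount to a system message along these lines (a sketch only: the role/content field names follow the chat completions API, while the prompt wording and the sample user turn are hypothetical):

```kotlin
// Illustrative message list that bakes the tips above into a system prompt.
val messages = listOf(
    mapOf(
        "role" to "system",
        "content" to "Write the requested code completely, with all imports. " +
            "Do not rewrite code that has already been written. " +
            "Think step by step before giving the final code."
    ),
    // Hypothetical user turn echoing the "write this test completely" tip.
    mapOf("role" to "user", "content" to "Please write this test completely.")
)
```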

Even so, some users said that no matter how they prompt it, GPT-4 still leaves “to-do” placeholders behind:

One user went so far as to say that GPT-4 now seems to have Alzheimer’s:

OpenAI implies that the new GPT-4 is very good at following instructions, but that’s simply not the case. I’ve used GPT-3, then 3.5, then 4 from the beginning, and I’ve never seen this level of Alzheimer’s.

The storm of complaints eventually drew a response from OpenAI employee will depue.

At first, he asked users to provide concrete examples, saying the team would look into them and might be able to fix the problems in the next iteration of the model.

As soon as he said that, even more users came forward to report problems.

will depue then responded again:

Thanks for the feedback; every example here will help us solve this problem faster. I’ve just forwarded it to the team and will keep you posted.

It looks like the official follow-up will take a while yet. Have you run into anything similar recently, folks?
