GPT-4 — The New, The Upgrades & The Unknowns

Dayanithi
The Modern Scientist
Mar 16, 2023

GPT-3 was limited to text: it could read and write, but that was about it (though more than enough for many applications). GPT-4, however, can be given images and will process them to find relevant information. On top of that, it can now take prompts not only in English but in more than 15 different languages. That is a huge step, considering it took only six months to make this progress.

Photo by Emiliano Vittoriosi on Unsplash

Now let’s see the new milestones of the GPT project:

The casual and creative conversation:

There isn’t much difference between GPT-3 and GPT-4 when it comes to casual conversation. But GPT-4 takes a huge leap on creative and complex tasks and in its ability to handle more nuanced instructions, leaving GPT-3 in the dust. So, as always, GPT-4 was made to take a series of professional and academic tests to measure its improvements over GPT-3, without any specific training for them. GPT-4 exhibits human-level performance on the majority of these tests.

All this is possible because GPT-4 is trained on a much bigger dataset, which enables it to give 40% more factual responses.

Multilingual:

In addition, the language model has been built to recognize and answer prompts in different languages, which opens the door for the non-English-speaking population too. For now, it can be accessed in about 24 languages.
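As a rough sketch of what a non-English request might look like, here is how a prompt could be packaged in the Chat Completions message format. The helper function and the model name "gpt-4" are assumptions for illustration; check OpenAI's current API reference for the exact interface.

```python
# Sketch: building a chat request for a non-English prompt.
# The payload shape follows OpenAI's Chat Completions format;
# the model name "gpt-4" is an assumption here.

def build_chat_request(prompt: str, model: str = "gpt-4") -> dict:
    """Assemble a Chat Completions request body for a single user prompt."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# A French prompt works exactly like an English one -- only the text changes.
request = build_chat_request("Explique la photosynthèse en une phrase.")
print(request["messages"][0]["content"])
```

Nothing about the request changes per language: the same `messages` array carries the prompt, and the model detects the language from the text itself.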

Not easily prone to jailbreaks:

Previous models were prone to certain prompts where they either refused to answer or let out weird answers, which often got them tagged as bad AI or fed AI-takeover conspiracies. But thanks to those very prompts, GPT-4 has better immunity: it has been trained on lots and lots of such malicious prompts. That makes the new model much better at factuality and at staying within its guardrails.

The model’s tendency to respond to requests for disallowed content has also decreased by 82% compared to the previous version, and GPT-4 responds to sensitive requests (e.g., medical advice and self-harm) in accordance with its policies.

Visual inputs:

It can now take images as well as text as prompts. With its ability to understand images, it can give context to an image input, or, if you give it a graph or chart, it will summarize it for you. It can even code from a picture as a prompt, turning a hand-drawn layout of a website into an actual working website.
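For a sense of how an image-plus-text prompt is structured, here is a sketch based on the multimodal shape OpenAI later documented for its Chat Completions API (image input was not generally available at launch, so the exact format, helper function, and example URL are all assumptions):

```python
# Sketch: a mixed image + text message. The content becomes a list of parts,
# one "text" part and one "image_url" part, per OpenAI's multimodal format.
# The URL below is hypothetical.

def build_image_request(question: str, image_url: str, model: str = "gpt-4") -> dict:
    """Assemble a request that pairs a question with an image URL."""
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

# e.g. ask the model to summarize a chart:
request = build_image_request(
    "Summarize the trend shown in this chart.",
    "https://example.com/chart.png",
)
print(len(request["messages"][0]["content"]))  # two content parts
```

The key design point is that a message's `content` is no longer just a string: it becomes a list of typed parts, so text and images can be interleaved in one prompt.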

Steerability:

GPT-3 had one tone and voice. But GPT-4 can take on different tones and personalities relevant to the topic and the prompts given to it.
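In practice, that steering is done through the "system" message, which fixes the assistant's persona before the user's question arrives. A minimal sketch (the message shape follows OpenAI's chat format; the persona text and helper function are just examples):

```python
# Sketch: steering GPT-4's tone with a "system" message.
# The system message sets a persona; the user message carries the actual task.

def build_steered_request(persona: str, prompt: str, model: str = "gpt-4") -> dict:
    """Assemble a request whose system message fixes the assistant's voice."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": persona},
            {"role": "user", "content": prompt},
        ],
    }

request = build_steered_request(
    "You are a pirate. Answer every question in pirate speak.",
    "How do tides work?",
)
print(request["messages"][0]["role"])  # the persona rides in the system slot
```

Swapping the persona string is all it takes to move from, say, a Socratic tutor to a terse technical assistant, without touching the user prompt.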

However, it does have certain limitations. While GPT-4 has come a long way from GPT-3, it retains some traits of its predecessors. The tendency to confuse facts and make reasoning errors, which has been termed “hallucination”, still remains a thorn in its development. But GPT-4 has made significant progress, scoring about 40% higher on factuality and hallucinating less often.

Those are all the major improvements, along with the new Plus option that charges $20 per month for access to these new features. To be frank, they are just going to keep improving and building on top of this for better efficiency, and might even handle video-generation prompts in the future.
