OpenAI’s GPT-4: Here’s What You Need to Know

AKHIL THULASEEDHARAN
Mar 15, 2023


openai.com/research

OpenAI Releases GPT-4: A Multimodal AI That Is State-Of-The-Art

Have you heard the latest news?

GPT-4 is a language model that can generate text and even handle both text and image inputs. It’s more reliable, creative, and capable of handling much more nuanced instructions than its predecessor, GPT-3.5.

But is it worth the hype?

Let’s find out! In this article, we’ll dive deep into the features, capabilities, pricing, and limitations of this new AI superstar.

1️⃣What is GPT-4?

GPT-4 is like the cool kid on the block: more reliable, more creative, and better at following nuanced instructions.

OpenAI research paper: performance on exams that were originally designed for humans.

❓But wait, did you know that GPT-4 has already been adopted by some major players in the tech industry?

🔶Microsoft’s Bing Chat is already running on GPT-4, a chatbot technology co-developed with OpenAI.

🔶Stripe is using GPT-4 to scan business websites and provide customer support staff with a summary of the site.

🔶Duolingo built GPT-4 into its new language learning subscription tier.

🔶Morgan Stanley is creating a GPT-4-powered system that’ll retrieve information from company documents and provide it to financial analysts.

🔶And Khan Academy is leveraging GPT-4 to build an automated tutor.

Now that you know how early adopters are using GPT-4, the question remains: How will it make a difference in our lives?

Let’s delve into its features, capabilities, pricing, and limitations to find out more!

2️⃣Why GPT-4?

“In a casual conversation, the distinction between GPT-3.5 and GPT-4 can be subtle,” OpenAI wrote in a blog post announcing GPT-4.

“The difference comes out when the complexity of the task reaches a sufficient threshold — GPT-4 is more reliable, creative and able to handle much more nuanced instructions than GPT-3.5.”

🔷For instance, GPT-4 can write a poem mixing English and German, a feat that GPT-3.5 couldn’t even come close to accomplishing.


📢Attention creative minds!!

This is a warning sign: we assumed repetitive, algorithmic tasks would be automated first, but on the contrary, it seems creative work is among the first to be affected.

Technology is changing the world at lightning speed and taking over jobs, but don’t panic just yet! As the saying goes, “robots may be taking over the world, but humans still have the ability to press the off button.” So let’s embrace the technological revolution with open arms and a sense of humour, and remember to keep our unique human touch alive.

🔷Rather than the classic ChatGPT personality with a fixed verbosity, tone, and style, developers (and soon ChatGPT users) can now prescribe their AI’s style and task by describing those directions in the “system” message. System messages allow developers and users to provide specific instructions or hints to the AI model during a conversation.

For example, a developer could use a system message to tell the AI to focus on a specific topic, use a certain tone or style, or even suggest specific words or phrases to include in the response.

Example: system message from OpenAI
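To make this concrete, here is a minimal sketch of what a system message looks like in a Chat Completions request via the openai Python library. The pirate persona is a hypothetical example, and the commented-out request assumes an older version of the client interface, which may differ in current releases.

```python
# Build a conversation where the system message fixes the model's
# persona, tone, and verbosity before the user asks anything.
messages = [
    {"role": "system",
     "content": "You are a pirate. Answer every question in pirate speak, "
                "in at most two sentences."},
    {"role": "user", "content": "What is a language model?"},
]

# The actual API call would look roughly like this (requires an API key):
# response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
# print(response["choices"][0]["message"]["content"])
```

Whatever you put in the system message shapes every reply in the conversation, which is why it is the natural place for style and task instructions rather than repeating them in each user message.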

🔷Another interesting fact is its ability to evade AI-detection tools. This side of GPT-4 is both fascinating and concerning: it is becoming increasingly difficult to tell whether a text is AI-generated.

However, as the technology continues to evolve, we content writers must take responsibility for the content we create and ensure that we use AI ethically and responsibly.

Remember, protocols are there for a reason :)


I read somewhere “Humanity is acquiring all the right technology for all the wrong reasons.” With GPT-4’s incredible capabilities, it’s important to consider both the benefits and potential risks of AI. So, what’s next for GPT-4 and the world of AI?

🔷Does GPT-4 Accept Images?

Well, it looks like we need to clear the air.

Contrary to what some tech blogs are saying, GPT-4 is not quite ready to accept images as input. At least, not yet. According to OpenAI, the implementation of GPT-4 in ChatGPT currently only supports text input. So, if you were hoping to throw some pictures at it and get memes back, you might have to wait a bit longer.

Of course, this isn’t to say that GPT-4 won’t eventually have image input capabilities. In fact, OpenAI is already working on it. As they put it, image input is currently in a research preview phase, and they’re collaborating with a partner to prepare it for wider availability.

But for now, it seems we’ll just have to stick with good old-fashioned text. And let’s be honest, there’s still plenty of fun to be had with words alone.

People are giving it prompts like “how to eat an apple in the style of Donald Trump”.

3️⃣Pricing of GPT-4

Want to use GPT-4? You better have some spare change, because it’s not cheap — but it’s worth it.

Let me explain this in layman’s terms, since many of us aren’t familiar with the jargon yet.

❓What are tokens?

Tokens are the chunks of text (whole words or pieces of words) that the model reads and writes. OpenAI bills usage and enforces rate limits in tokens, which also lets it manage load on the model and avoid being overwhelmed by too many requests at once.

There are two types of tokens in the OpenAI API.

Prompt tokens are used to provide the initial text input to GPT-4. They are the starting point for the language model and help guide its output. Prompt tokens can be thought of as the “prompt” or the “seed” for the generated text.

Completion tokens, on the other hand, are the tokens the model generates in response. How many you use depends on how long the reply is (and you can cap it with a maximum-tokens setting). Completion tokens can be thought of as the “output” or the “result” of the generated text.

In summary, prompt tokens are the input to GPT-4 and completion tokens are the output.
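Since billing is per token, it helps to estimate how many tokens a piece of text will use. Here is a rough sketch using OpenAI’s commonly cited rule of thumb of about four characters per token; for exact counts you would use a real tokenizer such as the tiktoken library.

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token (English text).

    This is only a ballpark for budgeting; exact counts require
    the model's actual tokenizer (e.g. the tiktoken library).
    """
    return max(1, len(text) // 4)

prompt = "Write a short bedtime story about a robot learning to paint."
print(estimate_tokens(prompt))
```

A 500-word English passage comes out to roughly 650–700 tokens under this rule, which is useful context for the pricing example below.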

OpenAI charges $0.03 for every 1,000 prompt tokens and $0.06 for every 1,000 completion tokens that GPT-4 generates.

There are also some limits to how much you can use GPT-4 at once. You can only generate up to 40,000 tokens per minute and make up to 200 requests per minute.

📌So, let’s work through an example with the GPT-4 API. Say a request uses 10,000 prompt tokens and generates 20,000 completion tokens. The cost of this would be:

10,000 prompt tokens * $0.03 per 1k prompt tokens = $0.30

20,000 completion tokens * $0.06 per 1k completion tokens = $1.20

Total cost = $0.30 + $1.20 = $1.50
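The arithmetic above can be wrapped in a small helper. The prices are GPT-4’s launch list prices quoted earlier; check OpenAI’s pricing page for current figures before relying on them.

```python
# GPT-4 (8K-context) list prices at launch, in dollars per 1,000 tokens.
PROMPT_PRICE_PER_1K = 0.03
COMPLETION_PRICE_PER_1K = 0.06

def gpt4_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the cost of one request in dollars, rounded to the cent."""
    cost = (prompt_tokens / 1000) * PROMPT_PRICE_PER_1K \
         + (completion_tokens / 1000) * COMPLETION_PRICE_PER_1K
    return round(cost, 2)

print(gpt4_cost(10_000, 20_000))  # 1.5
```

Note that completion tokens cost twice as much as prompt tokens, so long outputs dominate the bill.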

In addition to the cost, the default rate limits mentioned above (40k tokens per minute and 200 requests per minute) start to matter once you batch requests.

So, if you wanted to generate 5 stories using the example above, each requiring 10,000 prompt tokens and 20,000 completion tokens, you would need to make 5 requests of 30,000 tokens each.

A single 30,000-token request fits under the 40k tokens-per-minute limit, but two such requests in the same minute (60,000 tokens) would not. In practice you could only send about one of these requests per minute, so the five stories would need to be spread over roughly five minutes. Exceed either limit and the API will start rejecting your requests.
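One way to respect both limits client-side is to compute how many requests of a given size fit in one minute. This is a sketch using the default limits quoted above as parameters; actual limits vary by account.

```python
def requests_per_minute(tokens_per_request: int,
                        tpm: int = 40_000, rpm: int = 200) -> int:
    """How many requests of a given size fit in one minute under both
    the tokens-per-minute (tpm) and requests-per-minute (rpm) limits."""
    if tokens_per_request <= 0:
        return rpm
    return min(rpm, tpm // tokens_per_request)

print(requests_per_minute(30_000))  # 1   -> the 5-story job takes ~5 minutes
print(requests_per_minute(100))     # 200 -> capped by the request limit
```

Small requests hit the request-count limit first; large requests hit the token limit first, which is exactly what the story example shows.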

4️⃣Limitations of GPT-4

There’s no such thing as a perfect model! Even though GPT-4 is the talk of the town, it still has some limitations.

Despite its capabilities, GPT-4 has similar limitations to earlier GPT models. Most importantly, it “hallucinates” facts and makes reasoning errors. When you use language model outputs, especially in high-stakes situations, take care and match the safeguards (human review, grounding with additional context, or avoiding such uses altogether) to the needs of your application.

Last but not least, it’s worth remembering the words of Peter Drucker, widely known as the father of management thinking: “The most important thing in communication is hearing what isn’t said.”

5️⃣Concerns

OpenAI refuses to provide any details about GPT-4’s development because of the “competitive landscape.”

OpenAI research paper

The term “competitive landscape” refers to the market conditions and competition within an industry or field.

So, OpenAI is not providing any details about GPT-4’s development because they do not want to disclose information that may give their competitors an advantage.

What happened to the nonprofit that wanted to democratise AI for all?

Sam Altman, OpenAI’s CEO, once said that they tried to start the company with public funding and university partnerships, but nobody seemed interested.

Sometimes capitalism is the only way to support innovation.
I guess even AI needs to pay the bills.

Microsoft recently laid off its ethics and society team, which was responsible for ensuring that the company’s AI principles were closely tied to product design.

This move has left Microsoft without a dedicated team to oversee the ethical implications of AI and ensure that their products align with their values.

This is a concerning development, especially considering Microsoft’s position as a leader in making AI tools accessible to the mainstream.
Without an ethics team, Microsoft might have to rely on ‘Cortana’ to tell them what’s right and wrong.

In conclusion, GPT-4 is more powerful than its predecessors and, given what it delivers, not that expensive. As we move forward with AI technology, let’s remember to keep our unique human touch alive and use technology for the right reasons. And as the saying goes, “Robots and AI are transforming our world, but human creativity and ingenuity will always be essential.”

🎁Reference: OpenAI Research


I write on Content Marketing🔹 AI Trends🔷 Life Lessons for more visit: akhilpillai.com