Is OpenAI’s GPT-3 API Beta Pricing Too Rich for Researchers?

Synced | Published in SyncedReview | Sep 4, 2020

Few in the natural language processing (NLP) community expected the world’s most powerful large language model to come cheap, but some worry the hefty price tag could put it out of reach for startups and independent researchers.

OpenAI’s 175 billion parameter language model GPT-3 (Generative Pre-trained Transformer 3) turned heads in the NLP community when it was released in June, and now it’s back in the spotlight. A Reddit post this week by independent writer and researcher Gwern Branwen detailed the pricing plan OpenAI has provided to GPT-3 Beta API users. The scheme, which goes into effect on October 1, has already raised as many questions as it has answered.

The plan has four tiers: Explore, Create, Build, and Scale. The Explore tier allows free API access for up to 100,000 “tokens” (there is also a separate trial offer). The Create tier provides 2 million tokens and costs US$100 per month, while Build gets you 10 million tokens for US$400 a month. Pricing for the Scale tier will be customized to user needs.
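A quick back-of-the-envelope calculation, using only the figures above, shows what those quotas imply per token:

```python
# Back-of-the-envelope: implied price per 1,000 tokens for the paid tiers,
# derived solely from the quota and price figures reported above.
tiers = {
    "Create": {"usd_per_month": 100, "tokens_per_month": 2_000_000},
    "Build": {"usd_per_month": 400, "tokens_per_month": 10_000_000},
}

for name, t in tiers.items():
    per_1k = t["usd_per_month"] / (t["tokens_per_month"] / 1_000)
    print(f"{name}: ${per_1k:.3f} per 1,000 tokens")

# Create: $0.050 per 1,000 tokens
# Build: $0.040 per 1,000 tokens
```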

What’s with all these tokens? A token is a sequence of characters grouped together as a useful semantic unit for NLP. The token quotas in the OpenAI tiers cover both prompt and completion tokens. Basically, good NLP needs a lot of tokens: training the GPT-3 model, for example, required a whopping 499 billion of them. The 2 million monthly tokens provided in the GPT-3 Create plan would cover the roughly 900,000 words in the complete works of William Shakespeare.
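To make the token bookkeeping concrete, here is a minimal sketch of counting prompt and completion tokens with the GPT-2 byte-pair-encoding tokenizer from the Hugging Face transformers library. GPT-3 reuses the same BPE vocabulary, though the counts billed by OpenAI’s API may differ slightly; the example strings are purely illustrative.

```python
# Minimal sketch: counting tokens with the GPT-2 BPE tokenizer
# (requires `pip install transformers`; GPT-3 reuses GPT-2's byte-pair encoding).
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

prompt = "To be, or not to be, that is the question:"
completion = " Whether 'tis nobler in the mind to suffer"

prompt_tokens = tokenizer.encode(prompt)
completion_tokens = tokenizer.encode(completion)

# Both prompt and completion tokens count against the monthly quota.
total = len(prompt_tokens) + len(completion_tokens)
print(f"prompt: {len(prompt_tokens)} tokens, "
      f"completion: {len(completion_tokens)} tokens, total: {total}")
```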

Although OpenAI hasn’t yet officially announced the GPT-3 pricing scheme, Branwen’s sneak peek has piqued the interest of the NLP community. While many are eager to take the model for a spin at any cost, others, such as Murat Ayfer, see the prices as prohibitively high. Ayfer is the creator of PhilosopherAI.com, a website that generates philosophical arguments from user prompts. He posted on Reddit, “PhilosopherAI.com currently has about 750,000 queries put into it. About half got past the content filter, so let’s say ballpark 400,000 outputs at an average 1000 tokens each… (because prompts are chained, so a lot of tokens used in the background). That makes for 400 million tokens in 2 or 3 weeks, which puts me at like $4000/mo minimum.”
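Plugging the figures from Ayfer’s post into the tiers described above shows why the off-the-shelf plans don’t fit his workload. A rough sketch, using only the numbers quoted in this article:

```python
# Rough estimate of where a PhilosopherAI-scale workload lands among the tiers,
# using only the figures quoted above.
outputs = 400_000               # queries that produced a completion
avg_tokens_per_output = 1_000   # prompt + completion, per Ayfer's estimate

total_tokens = outputs * avg_tokens_per_output   # 400,000,000 tokens

build_quota = 10_000_000        # tokens included in the $400/month Build tier
print(f"Total: {total_tokens:,} tokens, "
      f"about {total_tokens / build_quota:.0f}x the Build tier's monthly quota")
# -> roughly 40x the Build quota, i.e. custom-priced Scale territory
```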

Ayfer’s concerns are echoed in Georgia Institute of Technology Associate Professor Mark Riedl’s tweet: “Seems like they are willing to work out deals. But in general it does put pressure on startups to have income streams or capital early. The scale seems tipped toward established players… I really don’t have any idea what the cost of a ‘token’ should be.”

The GPT-3 API runs models with weights from the GPT-3 family, along with many speed and throughput improvements. The pricing plan will take effect on October 1 for the limited private beta.

Reporter: Fangyu Cai | Editor: Michael Sarazen


Synced Report | A Survey of China’s Artificial Intelligence Solutions in Response to the COVID-19 Pandemic — 87 Case Studies from 700+ AI Vendors

This report offers a look at how China has leveraged artificial intelligence technologies in the battle against COVID-19. It is also available on Amazon Kindle. Along with this report, we also introduced a database covering an additional 1,428 artificial intelligence solutions across 12 pandemic scenarios.

Click here to find more reports from us.

We know you don’t want to miss any story. Subscribe to our popular Synced Global AI Weekly to get weekly AI updates.



AI Technology & Industry Review — syncedreview.com | Newsletter: http://bit.ly/2IYL6Y2 | Share My Research http://bit.ly/2TrUPMI | Twitter: @Synced_Global