GPT-3’s AI Invented Cocktails You Might Actually Want to Try

A quick demo of OpenAI’s powerful new language model

Abhi Reddy
The Startup
5 min read · Aug 7, 2020


Courtesy of Matthias Merges, Billy Sunday.

Why the hype?

A few weeks ago, OpenAI released GPT-3, the latest version of their language generation model. Examples of GPT-3 doing everything from writing fiction to generating working JavaScript quickly went viral, leading to widespread press coverage with headlines like, “What Is GPT-3 and Should We Be Terrified?”

Farhad Manjoo at the New York Times put it this way:

I’ve never really worried that a computer might take my job because it’s never seemed remotely possible. Not infrequently, my phone thinks I meant to write the word “ducking.” A computer writing a newspaper column? That’ll be the day.

Well, writer friends, the day is nigh. This month, OpenAI, an artificial-intelligence research lab based in San Francisco, began allowing limited access to a piece of software that is at once amazing, spooky, humbling and more than a little terrifying.

What makes GPT-3 so good? The model was trained on a staggeringly large volume of text (according to The Verge, all of English Wikipedia makes up just 0.6% of the total dataset) and its network has 175 billion parameters. Unlike earlier models, GPT-3 can produce fairly good output on a variety of specialized tasks from just a handful of examples, with no task-specific training.

For now, GPT-3’s API is only accessible via a private beta. I recently got an invite and couldn’t wait to put it to the test. In particular, I was curious how the model would handle a task that required both specialized knowledge and creativity, especially in an area where I at least have a shot at judging the quality of its answers.

Mixology with GPT-3

To start GPT-3 off, I primed it with a few examples from Billy Sunday, a Chicago institution and my all-time favorite cocktail bar. I gave the model some context and listed the main ingredient, name, and recipe for a few real Billy Sunday cocktails.
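In outline, the prompt gave a line of context, then each drink in a fixed “main ingredient, name, recipe” pattern, ending with a new ingredient for GPT-3 to complete. It looked something like this (the recipes here are placeholders, not Billy Sunday’s actual menu):

```
The following are cocktails from the menu at Billy Sunday, a cocktail bar in Chicago.

Main ingredient: rye whiskey
Name: Placeholder Sour
Recipe: rye whiskey, lemon, demerara syrup, Angostura bitters

Main ingredient: mezcal
Name: Placeholder Smoke
Recipe: mezcal, lime, pineapple, sal de gusano

Main ingredient: Pimm's
Name:
```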

After feeding all 7 seasonal cocktails on Billy Sunday’s menu into the API, I prompted it with a few more ingredients to see what it would come up with. GPT-3’s output is in bold:

Its suggestions aren’t perfect — for example, a Berliner is already an established cocktail with gin and vermouth — but overall these drinks wouldn’t feel out of place on the menu at any trendy cocktail bar.
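For the curious, each generation boiled down to a single API call. Here’s a minimal sketch using the beta-era openai Python library; the engine name, stop sequence, and token limit are illustrative assumptions, not the exact settings from my runs:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # issued with the private beta invite

# Few-shot prompt: context, the example cocktails, then a new main
# ingredient with a blank "Name:" for GPT-3 to fill in.
prompt = (
    "The following are cocktails from the menu at Billy Sunday, "
    "a cocktail bar in Chicago.\n\n"
    # ...the example cocktails shown earlier, in the same
    # "Main ingredient / Name / Recipe" format...
    "Main ingredient: Pimm's\n"
    "Name:"
)

response = openai.Completion.create(
    engine="davinci",   # the largest GPT-3 engine in the beta
    prompt=prompt,
    max_tokens=100,     # room for a name plus a short recipe
    temperature=0.1,    # low randomness; more on this below
    stop="\n\n",        # end after one complete cocktail
)

print(response.choices[0].text)
```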

Kicking it up a notch

GPT-3’s API includes a neat parameter called temperature, a value between 0 and 1 that controls the output’s randomness. OpenAI likens it to adjusting creativity. The examples above were generated with a very low temperature, 0.1.
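In code, temperature is just one more argument to the completion call. A quick sketch of sampling the same prompt at several temperatures, again assuming the beta-era openai library and the hypothetical prompt from the earlier snippet:

```python
import openai

openai.api_key = "YOUR_API_KEY"

prompt = "..."  # the same few-shot cocktail prompt as above

# Sample the same prompt at increasing temperatures to see how much
# more adventurous the completions get.
for temperature in (0.1, 0.5, 0.9):
    response = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=100,
        temperature=temperature,
        stop="\n\n",
    )
    print(f"temperature={temperature}:")
    print(response.choices[0].text.strip())
    print()
```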

When I set the temperature to 0.1 and prompted GPT-3 to make a cocktail using Pimm’s, a fruit liqueur, it made a predictable choice and pretty much suggested a standard Pimm’s Cup (though the right ingredient is ginger ale, not ginger):

After I turned up the temperature to 0.5, GPT-3 decided to add a twist on the classic cocktail:

And at a temperature of 0.9, GPT-3 got really creative:

Here are a few interesting recipes that GPT-3 invented with the temperature cranked up to 0.9:

Final Thoughts

It’s truly impressive how easily GPT-3 can generate plausible text, but it isn’t taking over the world any time soon. I cherry-picked the most interesting examples above, but the model made a lot of obvious errors. For example, it sometimes reused the names of popular drinks or suggested entire cocktails, like “martini” or “caipirinha”, as ingredients in its creations. On rare occasions, the model even included nonsensical ingredients like “iwa kachinoki washi tango” (which it wanted to use in a Jameson-based cocktail it dubbed “Jameson and Me”).

Most importantly, these drinks may sound intriguing, but who knows if they actually taste good?

Bartenders are safe for now. (Courtesy of Matthias Merges, Billy Sunday.)

GPT-3’s biggest limitation is that it doesn’t have any awareness of what’s correct and what isn’t. As far as I can tell, the model is just identifying patterns, rejiggering text fragments from its training data, and parroting them back to us. I’m interested to see if GPT-3 can be fine-tuned enough to be deployed where responses need to be consistently accurate, such as customer support.

For now, I think the more exciting opportunity is using AI to augment human creativity. As the saying goes, nothing is original. Innovation is essentially making new connections using our past experiences. Isn’t GPT-3 just a turbocharged version of this process?

GPT-3 ends up reminding me of a quote from the movie Limitless (stick with me here):

Information from the odd museum show, a half-read article, some PBS documentary — it was all bubbling up in my frontal lobes, mixing itself together into a sparkling cocktail of useful information.

One day soon, an AI might be the co-author of your favorite drink. Until then, I’ll keep depending on the bartenders at Billy Sunday.
