Understanding ChatGPT’s Parameters (Part 2)

Mike Onslow
5 min read · May 16, 2023

Deep-sea Diving into the Abyss of ChatGPT Parameters

In our first exciting journey into the heart of ChatGPT, we explored the surface, uncovering treasures like the temperature parameter. Now, we're ready to take a deep dive into the fathomless depths of this powerful AI, uncovering the mysteries of top_p, frequency_penalty, and presence_penalty. So, grab your diving gear, prepare your courage, and let's plunge into the enigmatic ocean of ChatGPT's parameters!

When we wrapped up the prelude to this article (https://medium.com/@mike_onslow/understanding-chatgpts-parameters-getting-started-32ec12b5e51b), we suggested entering the following prompt:

Give me a list of all parameters you can use in ChatGPT and their ranges plus a short sentence that explains what each of them is used for.

So let’s do that now and see what we get…

Your results may vary; GPT-3 and GPT-4 will return slightly different lists.

So, as we can see, we get some of the same parameters we saw before, plus some new ones. Let's quickly review the ones we covered previously.

Temperature

In the AI world, temperature is akin to the “spice” in your meal — it determines the flavor of the conversation. A higher temperature value makes the AI more adventurous and creative, and could lead to some unexpected responses!

Max Tokens or Max Length (1–2048)

The max_tokens / max_length parameter controls the maximum number of tokens (roughly, word pieces) in the generated text. It can be adjusted to produce short, snappy responses or more detailed, comprehensive answers.

GPT-4 uses max_tokens and GPT-3 uses max_length.
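To make this concrete, here's a minimal sketch of what temperature and max_tokens look like when you call the OpenAI API directly rather than typing them into a prompt. Treat it as an illustration, not gospel: the model name and prompt are placeholders, and the openai Python library's interface changes over time, so check the current docs before copying it.

```python
import openai  # pip install openai

openai.api_key = "YOUR_API_KEY"  # placeholder; normally read from an environment variable

# A short, fairly "spicy" reply: higher temperature, tight token budget.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[{"role": "user", "content": "Describe the ocean in one sentence."}],
    temperature=1.2,        # more adventurous word choices
    max_tokens=60,          # cap the length of the reply, measured in tokens
)

print(response["choices"][0]["message"]["content"])
```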

IMPORTANT: OpenAI is constantly changing what the ChatGPT interface allows when it comes to setting parameters for a conversation. At the moment, it doesn't appear to let you pre-set the tone of a conversation.

This means that (for now) you'll need to state the parameters in each prompt, which is still incredibly valuable.

So now that we’ve reviewed where we’ve been, let’s dive into some more of the magic behind the conversational abilities of ChatGPT. From the number of possible responses to the length of the input context, we’ll explore how these parameters can affect the quality and style of the generated text.

The top_p Parameter: Playing Russian Roulette with Words

Remember playing “spin the bottle” or “Russian roulette” in your school days? Well, top_p or nucleus sampling, as it is also known, is somewhat like a game of Russian roulette that ChatGPT plays with words. It decides how risky or safe the AI should play when picking the next word.

Here’s how it works: Imagine all possible next words are in a hat. The more likely a word, the more times it appears in the hat. Now, instead of picking blindly, the AI takes out words one by one, starting with the most likely. As soon as the sum of their probabilities exceeds the top_p value, the AI stops and picks from these words.

A low top_p value makes the game safer - fewer options, more predictability. A high top_p value, on the other hand, makes the game riskier - more options, more randomness, and potentially more fun!
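If you'd like to see the "hat" trick in code, here's a toy sketch of nucleus sampling. It is not OpenAI's actual implementation, just an illustration of the idea: rank the candidate words by probability, keep the smallest group whose probabilities add up to top_p, and pick only from that group.

```python
import random

def nucleus_sample(word_probs, top_p=0.9):
    """Toy nucleus (top_p) sampling over a dict of {word: probability}."""
    # Rank candidates from most to least likely.
    ranked = sorted(word_probs.items(), key=lambda kv: kv[1], reverse=True)

    # Keep pulling words out of the hat until their combined probability reaches top_p.
    nucleus, cumulative = [], 0.0
    for word, prob in ranked:
        nucleus.append((word, prob))
        cumulative += prob
        if cumulative >= top_p:
            break

    # Pick from the surviving words, weighted by their probabilities.
    words, weights = zip(*nucleus)
    return random.choices(words, weights=weights, k=1)[0]

candidates = {"blue": 0.5, "deep": 0.3, "salty": 0.15, "haunted": 0.05}
print(nucleus_sample(candidates, top_p=0.8))  # only "blue" or "deep" survive the cut here
```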

The frequency_penalty Parameter: A Repetition Repellant

Ever had a conversation with someone who repeats the same phrases over and over again? It can be quite dull, right? That’s where the frequency_penalty parameter comes in.

This parameter is like a strict school teacher who penalizes repetition. A high frequency_penalty gives the AI a stern look when it starts repeating words or phrases too often, encouraging it to use more varied language. On the other hand, a low frequency_penalty gives the AI the freedom to repeat itself more frequently.
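Conceptually, a frequency penalty knocks points off a word's score in proportion to how many times that word has already shown up. Here's a simplified, illustrative sketch of the idea; the real model works on token scores (logits) rather than whole words, so treat this purely as a mental model.

```python
def apply_frequency_penalty(scores, generated_words, frequency_penalty=1.0):
    """Lower each word's score in proportion to how many times it has already appeared."""
    adjusted = dict(scores)
    for word in adjusted:
        count = generated_words.count(word)
        adjusted[word] -= frequency_penalty * count  # repeat offenders lose points every time
    return adjusted

scores = {"ocean": 2.0, "sea": 1.8, "water": 1.5}
history = ["ocean", "ocean", "sea"]  # what the model has said so far
print(apply_frequency_penalty(scores, history))
# {'ocean': 0.0, 'sea': 0.8, 'water': 1.5} -- "ocean" is now the least tempting choice
```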

The presence_penalty Parameter: The Gatekeeper of New Ideas

Last but not least, we have the presence_penalty parameter. This parameter is like the bouncer at the club of ChatGPT's mind. It decides how many new ideas are allowed to join the party.

A high presence_penalty encourages a lively party with plenty of new ideas and topics, leading to a more diverse conversation. A low presence_penalty, on the other hand, prefers a more intimate gathering, keeping the conversation focused and consistent.
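The presence penalty works much the same way, with one key difference: it's a one-time charge for any word that has appeared at all, no matter how many times. A simplified sketch (again on whole words rather than tokens) makes the contrast clear:

```python
def apply_presence_penalty(scores, generated_words, presence_penalty=1.0):
    """Dock each word's score once if it has appeared at all, nudging the model toward new topics."""
    adjusted = dict(scores)
    for word in adjusted:
        if word in generated_words:  # flat, one-time penalty, no matter how many repeats
            adjusted[word] -= presence_penalty
    return adjusted

scores = {"ocean": 2.0, "sea": 1.8, "submarine": 1.5}
history = ["ocean", "ocean", "sea"]
print(apply_presence_penalty(scores, history))
# {'ocean': 1.0, 'sea': 0.8, 'submarine': 1.5} -- everything already mentioned takes the same hit
```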

Quick Reference: Parameter Values at a Glance

For those of you who prefer a quick cheat sheet, here’s a rundown of the parameters we’ve explored today and their value ranges:

  • top_p: Controls the randomness in the model's choice of next words. Value range: 0.0 (safe, predictable) to 1.0 (risky, random).
  • frequency_penalty: Penalizes the repetition of words or phrases. Value range: -2.0 (encourages repetition) to 2.0 (discourages repetition).
  • presence_penalty: Regulates the introduction of new ideas or topics. Value range: -2.0 (prefers consistent topics) to 2.0 (encourages new topics).
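To put the cheat sheet into practice, here's a hedged sketch of a single API request that sets all three parameters at once. As before, the model name and prompt are placeholders, and the library interface may have changed since this was written.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Varied wording, few repeats, and a nudge toward fresh ideas.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",   # placeholder model name
    messages=[{"role": "user", "content": "Brainstorm names for a deep-sea diving blog."}],
    top_p=0.9,               # sample from roughly the top 90% of the probability mass
    frequency_penalty=1.0,   # discourage repeating the same words
    presence_penalty=0.8,    # encourage new topics to join the party
)

print(response["choices"][0]["message"]["content"])
```

One design note: OpenAI's documentation generally recommends adjusting either temperature or top_p, not both in the same request, which is why temperature is left at its default here.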

The Final Word

As we resurface from our deep dive, we find ourselves armed with a newfound understanding of ChatGPT’s parameters. With these tools at our disposal, we can shape and mold our conversations with this remarkable AI in ways we never thought possible.

So go forth, intrepid explorers, and experiment! Play with these parameters, see what works for you, and most importantly, have fun. After all, AI is a tool, a companion, and a source of endless entertainment. Happy exploring!

Stay tuned for the next installment of our series, where we’ll put these parameters to the test with some practical examples. Until then, safe travels through the fascinating realm of AI!


Mike Onslow

Director of Technology at Clarity Voice. Writing about AI, ML, Deep Learning, and finding solutions for growth.