Generative AI - Mastering the Language Model Parameters for Better Outputs

Parameters in large language models are crucial because they help control the model's behavior.

Sascha Heyer
Google Cloud - Community

--

PaLM 2 and GPT provide parameters for temperature, token limit, top-k, and top-p. This article explains those parameters in everyday language with relatable examples.

Before we dive into the parameters, it's crucial to understand the concept of tokens.

The Language of Tokens

In the world of large language models, a token can be as short as one character or as long as one word, depending on the language and the specific word. For instance, in English, "a" is one token, "apple" is another, and "apples" is yet another.


When you give a prompt to the model, it doesn't read the whole sentence at once. Instead, it breaks down your input into these tokens. It then analyzes the tokens, understands their sequence, and uses this understanding to generate a response.
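To make this concrete, here is a minimal sketch using OpenAI's tiktoken library (my choice for illustration; PaLM 2 ships its own tokenizer, but the idea is the same): text goes in, a sequence of token IDs comes out.

```python
# Minimal tokenization sketch using OpenAI's tiktoken library
# (pip install tiktoken). PaLM 2 uses a different tokenizer,
# but the principle is identical: text is split into token IDs.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by recent GPT models

for text in ["a", "apple", "apples", "The quick brown fox"]:
    token_ids = enc.encode(text)
    pieces = [enc.decode([t]) for t in token_ids]
    print(f"{text!r} -> {len(token_ids)} token(s): {pieces}")
```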

The model uses tokens not only for input but also for output. It doesn't write whole sentences at once; it generates one token at a time, based on the input tokens it has read and the tokens it has generated so far.
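As a rough illustration (not the actual PaLM 2 or GPT implementation), a token-by-token generation loop looks something like the sketch below; `toy_model` is a made-up stand-in that returns a probability for each candidate next token.

```python
# Conceptual sketch of token-by-token (autoregressive) generation.
# `toy_model` is a made-up stand-in for a real language model: it maps
# the tokens seen so far to a probability for each possible next token.
import random

def toy_model(tokens):
    # A real model would compute these probabilities from the full context.
    return {1: 0.5, 2: 0.3, 0: 0.2}  # token 0 acts as end-of-sequence here

def generate(model, prompt_tokens, max_new_tokens=20, eos_token=0):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        probs = model(tokens)  # distribution over the next token
        next_token = random.choices(list(probs), weights=probs.values())[0]
        tokens.append(next_token)
        if next_token == eos_token:  # stop once the model signals it is done
            break
    return tokens

print(generate(toy_model, prompt_tokens=[7, 8, 9]))
```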

What are these…
