Best Practices for LLM Hyperparameter Tuning

When choosing the best large language models (LLMs) for your organization, there are many factors to consider. One important aspect is the model’s parameter count; generally, larger models tend to perform better. You might also look at performance benchmarks or inference tests, which provide quantitative measures of performance and allow you to compare different LLMs.

However, after selecting a model that seems to suit your needs, you can further customize it by tuning hyperparameters. These settings can significantly impact whether an LLM meets or exceeds your expectations.

What are LLM Hyperparameters and Why are They Important?

Hyperparameters are settings that influence the training process of an LLM. Unlike model parameters (or weights), which are adjusted during training, hyperparameters are set before training begins and remain unchanged. They control how the model learns from the training data but do not become part of the final model; consequently, you can’t inspect a finished model to determine which hyperparameters were used to train it. (Note that several of the settings covered below, such as decoding type, temperature, and the penalties, are inference-time hyperparameters: they shape how a trained model generates output rather than how it learns.)

Hyperparameters are crucial because they allow you to adjust the model’s behavior to suit your specific needs better. Instead of creating a custom model from scratch, you can fine-tune an existing model through hyperparameter adjustment to achieve the desired performance.

Exploring Different LLM Hyperparameters

1. Model Size

The size of the LLM, typically measured by its parameter count and reflected in the number and width of the layers in its neural network, is a primary hyperparameter. Larger models generally perform better and can handle more complex tasks because their additional layers and weights let them learn intricate relationships between tokens. However, larger models are more expensive to train and run, require more data, and can be slower. They are also more prone to overfitting, where the model performs well on training data but poorly on new data.

Smaller models, while less powerful, can be more efficient for simpler tasks and easier to deploy on less powerful hardware. They require fewer resources to train and can be optimized further through techniques like quantization and fine-tuning.

2. Number of Epochs

An epoch is a complete pass through the training dataset. The number of epochs determines how often the model processes the entire dataset. More epochs can improve the model’s understanding but can lead to overfitting if too many are used. Conversely, too few epochs can result in underfitting, where the model hasn’t learned enough from the data.
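
To make this concrete, here is a minimal sketch of how the epoch count shapes a training loop, using PyTorch with a toy linear model and random data standing in for a real LLM and corpus:

```python
import torch
from torch import nn

# Toy stand-ins for a real model and dataset, just to show the loop shape.
model = nn.Linear(16, 4)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
inputs = torch.randn(64, 16)
targets = torch.randint(0, 4, (64,))

num_epochs = 3  # each epoch is one complete pass over the training data
for epoch in range(num_epochs):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch + 1}: loss {loss.item():.4f}")
```

Raising num_epochs gives the model more passes over the same data, with the overfitting risk described above.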

3. Learning Rate

The learning rate controls how quickly the model updates in response to errors during training. A high learning rate speeds up training but can cause instability and overfitting. A low learning rate increases stability and improves generalization but makes training slower. Often, it’s beneficial to adjust the learning rate as training progresses using schedules like time-based decay, step decay, or exponential decay.
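
As a rough sketch, PyTorch’s built-in schedulers implement the decay strategies mentioned above. Here, ExponentialLR applies exponential decay (StepLR would give step decay), with a placeholder training step standing in for a real epoch:

```python
import torch
from torch import nn

model = nn.Linear(16, 4)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# Multiply the learning rate by gamma after every epoch.
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)

for epoch in range(5):
    loss = model(torch.randn(8, 16)).sum()  # placeholder training step
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()  # decay the learning rate for the next epoch
    print(f"epoch {epoch + 1}: lr {scheduler.get_last_lr()[0]:.4f}")
```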

4. Batch Size

Batch size is the number of training examples the model processes at once. Larger batch sizes speed up training but require more memory. Smaller batches are less demanding on hardware but can improve how thoroughly the model learns from each data point.
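
A quick sketch of the trade-off, using a toy dataset: the batch size passed to the data loader determines how many examples are processed per optimizer step, so larger batches mean fewer (but more memory-hungry) steps per epoch:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy dataset of 64 examples standing in for a real training corpus.
dataset = TensorDataset(torch.randn(64, 16), torch.randint(0, 4, (64,)))

for batch_size in (8, 32):
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    print(f"batch_size={batch_size}: {len(loader)} steps per epoch")
# batch_size=8 gives 8 steps per epoch; batch_size=32 gives only 2.
```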

5. Max Output Tokens

Also known as max sequence length, this hyperparameter sets the maximum number of tokens the model can generate in its output. More tokens allow for more detailed and coherent responses but increase computational and memory demands. Fewer tokens can reduce these demands but may result in incomplete or less coherent responses.
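
For example, with the Hugging Face transformers library (assuming it fits your stack; gpt2 here is just a small example model), the cap is set per generation call:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("She decided to start her day by", return_tensors="pt")
output = model.generate(
    **inputs,
    max_new_tokens=20,  # cap on generated tokens, not counting the prompt
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```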

6. Decoding Type

Decoding is the process of generating the model’s output from its internal representations. There are two main types: greedy decoding, which selects the most probable token at each step, and sampling decoding, which introduces randomness by choosing from a subset of probable tokens. Sampling can create more varied and creative outputs but increases the risk of nonsensical responses.
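
A minimal sketch of the two decoding modes, again using transformers with gpt2 as an illustrative setup:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("She decided to start her day by", return_tensors="pt")

# Greedy decoding: always pick the single most probable next token.
greedy = model.generate(**inputs, do_sample=False, max_new_tokens=15,
                        pad_token_id=tokenizer.eos_token_id)

# Sampling decoding: draw the next token at random from the distribution.
sampled = model.generate(**inputs, do_sample=True, max_new_tokens=15,
                         pad_token_id=tokenizer.eos_token_id)

print(tokenizer.decode(greedy[0], skip_special_tokens=True))
print(tokenizer.decode(sampled[0], skip_special_tokens=True))
```

Run the sampled version twice and you will usually get different continuations; the greedy version is deterministic.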

7. Top-k and Top-p Sampling

When using sampling decoding, top-k and top-p are additional hyperparameters that control how tokens are selected. Top-k sampling limits the model to choosing from the top k tokens with the highest probabilities. For example, if top-k is set to 5, the model will select from the five most probable tokens. This helps ensure variability while maintaining a focus on likely options.

Top-p sampling (or nucleus sampling) dynamically adjusts the selection pool based on cumulative probability, ensuring the chosen tokens make up a specified probability mass (e.g., 90%). This method balances randomness and coherence by allowing the model to consider a varying number of tokens based on their probabilities.

Let’s consider the sentence, “She decided to start her day by…”.

Now, let’s look at five possible ways to end this sentence, each beginning with a different token: ‘reading’, ‘going’, ‘cooking’, ‘meditating’, and ‘writing’.

We will assign each of the initial tokens a probability as follows: ‘reading’ (0.28), ‘going’ (0.24), ‘cooking’ (0.20), with the remaining 0.28 split between ‘meditating’ and ‘writing’.

Top-k Sampling

If we set the top-k sampling value to 2, only ‘reading’ and ‘going’ will be considered in the sampling subset. Setting it to 5 would include all the options.

Top-p Sampling

For top-p sampling, if the value is set to 0.6, ‘reading’ and ‘going’ will be included, as their combined probabilities are 0.52 (0.28 + 0.24). Including ‘cooking’ would make the cumulative probability 0.72 (0.28 + 0.24 + 0.20), which exceeds the threshold, thus excluding ‘cooking’, ‘meditating’, and ‘writing’.

If both sampling values are set, top-k is applied first: the probabilities of all tokens outside the top k are set to 0, and top-p filtering is then applied to the tokens that remain. The example can be checked with the sketch below.
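
In this sketch, the probabilities for ‘reading’, ‘going’, and ‘cooking’ come from the example above; the exact split of the remaining 0.28 between ‘meditating’ and ‘writing’ is an assumption made here for illustration:

```python
# Next-token distribution from the worked example.
probs = {
    "reading": 0.28,
    "going": 0.24,
    "cooking": 0.20,
    "meditating": 0.15,  # assumed split of the remaining 0.28
    "writing": 0.13,     # assumed split of the remaining 0.28
}

def top_k_filter(probs, k):
    """Keep only the k most probable tokens."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    return dict(ranked[:k])

def top_p_filter(probs, p):
    """Keep the most probable tokens whose cumulative probability stays
    within p (the convention used in the example above; some
    implementations also include the token that crosses the threshold)."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = {}, 0.0
    for token, prob in ranked:
        if cumulative + prob > p:
            break
        kept[token] = prob
        cumulative += prob
    return kept

print(top_k_filter(probs, 2))    # {'reading': 0.28, 'going': 0.24}
print(top_p_filter(probs, 0.6))  # {'reading': 0.28, 'going': 0.24}
```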

8. Temperature

Temperature is a parameter that influences the range of possible output tokens and the model’s “creativity,” similar to top-k and top-p sampling values. It is represented by a decimal number between 0.0 and 2.0. A temperature of 0.0 results in greedy decoding, where the token with the highest probability is always selected. Conversely, a temperature of 2.0 allows for maximum creativity.

Low temperatures amplify the differences between token probabilities, making high-probability tokens even more likely to be selected and producing more predictable, dependable responses. High temperatures, on the other hand, flatten the distribution so that token probabilities converge toward one another, giving less likely tokens a better chance of being selected and thereby increasing randomness and creativity.
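
A small sketch of the underlying math, using made-up logits: dividing the logits by the temperature before the softmax sharpens the distribution at low temperatures and flattens it at high ones. (A temperature of exactly 0.0 is handled as greedy decoding in practice, since division by zero is undefined.)

```python
import math

def softmax_with_temperature(logits, temperature):
    # Scale the logits by the temperature, then apply a softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # illustrative raw scores for three tokens
for t in (0.2, 1.0, 2.0):
    probs = softmax_with_temperature(logits, t)
    print(t, [round(p, 3) for p in probs])
# At t=0.2 the top token dominates; at t=2.0 the probabilities converge.
```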

9. Stop Sequences

Stop sequences provide a way to control the length of an LLM’s response, alongside the max output tokens parameter. A stop sequence is a specific string of one or more characters that stops the model’s output when encountered. A common example is a period (full stop).

Alternatively, you can use a stop token limit, an integer value that caps the length of the output. For example, a low limit might constrain the generated output to a single sentence, while a higher limit could allow a full paragraph. These controls are useful for managing inference costs, especially when budget is a concern.
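
Inference APIs typically apply stop sequences during decoding, but the effect is easy to illustrate with a post-hoc sketch; the function here is purely illustrative, not any particular library’s API:

```python
def truncate_at_stop(text, stop_sequences):
    """Cut the text at the earliest occurrence of any stop sequence,
    keeping the stop sequence itself."""
    best = None  # (index, stop string) of the earliest match
    for stop in stop_sequences:
        idx = text.find(stop)
        if idx != -1 and (best is None or idx < best[0]):
            best = (idx, stop)
    if best is None:
        return text  # no stop sequence found; return the text unchanged
    return text[: best[0] + len(best[1])]

generated = "The sky is blue. The grass is green. The sun is bright."
print(truncate_at_stop(generated, ["."]))  # "The sky is blue."
```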

10. Frequency and Presence Penalties

Frequency and presence penalties are hyperparameters that discourage repetition and encourage diversity in a model’s output. Both penalties are represented by a decimal between -2.0 and 2.0.

The frequency penalty reduces the probability of a token in proportion to how many times it has already appeared in the output, so heavily repeated tokens become progressively less likely to be chosen again. The presence penalty, by contrast, is a flat, one-time penalty applied to any token that has appeared at least once, regardless of how often. In practice, the frequency penalty discourages verbatim repetition, while the presence penalty encourages the model to introduce new tokens and topics.
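
As a sketch, here is how the two penalties adjust next-token logits, following the formula OpenAI documents (logit minus count times the frequency penalty, minus a flat presence penalty whenever the count is nonzero); the token IDs and logit values here are made up:

```python
from collections import Counter

def apply_penalties(logits, generated_ids, frequency_penalty, presence_penalty):
    """Penalize the logits of tokens that have already been generated.
    In a real model, `logits` would cover the full vocabulary."""
    counts = Counter(generated_ids)
    adjusted = dict(logits)
    for token_id, count in counts.items():
        if token_id in adjusted:
            adjusted[token_id] -= count * frequency_penalty  # scales with count
            adjusted[token_id] -= presence_penalty           # flat, once per token
    return adjusted

logits = {101: 3.2, 102: 2.9, 103: 2.5}
generated_ids = [101, 101, 103]  # token 101 used twice, 103 once
print(apply_penalties(logits, generated_ids, 0.5, 0.4))
# 101: 3.2 - 2*0.5 - 0.4 = 1.8; 103: 2.5 - 0.5 - 0.4 = 1.6; 102 unchanged
```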

What is LLM Hyperparameter Tuning?

LLM hyperparameter tuning involves adjusting various hyperparameters during the training process to find the optimal combination for generating the best output. This process often involves significant trial and error, meticulously tracking each hyperparameter application and recording the resulting output. Manually performing this tuning is time-consuming, leading to the development of automated methods to streamline the process.

The three most common methods for automated hyperparameter tuning are random search, grid search, and Bayesian optimization (a random-search sketch follows this list):

  • Random Search: This method randomly selects and evaluates combinations of hyperparameters from a specified range of values. It is simple to implement and can cover a large parameter space quickly. However, it offers no guarantee of finding the optimal combination, and running many trials can still be computationally expensive.
  • Grid Search: This method systematically evaluates every possible combination of hyperparameters from a given range. It is the most resource-intensive of the three, but its exhaustive sweep makes it the most systematic way to find the optimal set within the grid.
  • Bayesian Optimization: This method uses a probabilistic model to predict the performance of different hyperparameter combinations and selects the most promising ones to evaluate next. It is more efficient than grid search and can handle large parameter spaces with fewer resources. However, it is more complex to set up and, because it does not evaluate every combination, may be less reliable than grid search at identifying the true optimum.
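
Here is a minimal random-search sketch over a small fine-tuning search space; train_and_evaluate is a placeholder for your actual fine-tuning and validation run:

```python
import random

# Candidate values for each hyperparameter; adjust to your setup.
search_space = {
    "learning_rate": [1e-5, 3e-5, 5e-5, 1e-4],
    "batch_size": [8, 16, 32],
    "num_epochs": [2, 3, 4],
}

def train_and_evaluate(config):
    # Placeholder: fine-tune with `config` and return a validation score.
    return random.random()

best_score, best_config = float("-inf"), None
for trial in range(10):  # number of random trials to run
    config = {name: random.choice(values) for name, values in search_space.items()}
    score = train_and_evaluate(config)
    if score > best_score:
        best_score, best_config = score, config

print(best_config, best_score)
```

Grid search would replace the random draws with an exhaustive loop over every combination; Bayesian optimization would replace them with a model-guided choice of the next trial.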

Advantages of Automated Hyperparameter Tuning

Automated hyperparameter tuning offers several significant advantages for machine learning model development. It saves time and effort by systematically searching the hyperparameter space, eliminating the need for manual trial and error, and it tends to discover more optimal hyperparameter configurations that enhance model performance and accuracy. Because it leverages sophisticated algorithms such as Bayesian optimization, grid search, and random search, it can explore the hyperparameter landscape more efficiently, converging faster on the best settings.

Furthermore, automated tuning can be integrated into existing machine learning pipelines, ensuring a seamless workflow and enabling continuous improvement through iterative refinement. By reducing the reliance on human expertise, it also democratizes access to advanced model tuning, making it accessible even to those with limited machine learning experience.

Conclusion

Hyperparameter tuning, while often considered a subset of fine-tuning, is a crucial discipline that warrants separate attention. By configuring various hyperparameters, as detailed in this guide, and observing how your chosen LLM responds, you can enhance the performance of base models to better suit real-world applications.

Originally published at https://blog.spheron.network on June 7, 2024.
