Adam Optimizer Explained & How To Use In Keras, PyTorch & TensorFlow

Neri Van Otten
8 min read · Mar 1, 2023

What is the Adam optimizer?

The Adam optimizer is a popular optimization algorithm used in machine learning for stochastic gradient descent (SGD)-based optimization. It stands for Adaptive Moment Estimation and combines the best parts of two other optimization algorithms, AdaGrad and RMSProp.

The key idea behind Adam is to use a combination of momentum and adaptive learning rates to converge to the minimum of the cost function more efficiently. During training, it uses the first and second moments of the gradients to change the learning rate on the fly.

The first moment is the mean of the gradients, and the second moment is the uncentered variance of the gradients (the mean of the squared gradients).

Adam maintains exponentially decaying averages of past gradients and past squared gradients and uses them to update the parameters in each iteration.

This allows Adam to converge to the minimum of the cost function faster and more efficiently than traditional gradient descent methods.

Adam also includes a bias correction mechanism to compensate for the fact that the moment estimates are initialized at zero and are therefore biased towards zero during the early steps of training.

The main hyperparameters of Adam are the learning rate, beta1 (the exponential decay rate for the first moment estimate), and beta2 (the exponential decay rate for the second moment estimate). These hyperparameters can be tuned for specific problems to achieve optimal results.
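
To make the update rule concrete, here is a minimal NumPy sketch of a single Adam step. The function adam_step, its arguments, and the toy call at the end are illustrative only, not part of any library; the default hyperparameter values mirror the ones listed above.

import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # Exponentially decaying averages of the gradient and the squared gradient
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    # Bias correction counteracts the zero initialization of m and v (t starts at 1)
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Per-parameter adaptive update
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage: one step from theta = 0 with a gradient of 0.5
theta, m, v = adam_step(np.zeros(1), np.array([0.5]), np.zeros(1), np.zeros(1), t=1)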

Hyperparameters tuning of the Adam optimizer

Several configuration parameters for the Adam optimizer can be changed to fine-tune its performance. The most common parameters are:

  1. Learning rate (lr): This parameter controls the step size at each iteration during gradient descent. A larger learning rate can help the optimizer converge faster but may also cause it to overshoot the optimal solution.
  2. Beta1 (beta_1): This parameter controls the exponential decay rate for the first moment estimates. It is typically set to 0.9 but can be adjusted to trade off stability against responsiveness.
  3. Beta2 (beta_2): This parameter controls the exponential decay rate for the second moment estimates. It is typically set to 0.999 but can be adjusted to trade off stability against responsiveness.
  4. Epsilon (epsilon): This parameter is a small constant added to the denominator to avoid division by zero. It is typically set to 1e-8, the value used in the original Adam paper; Keras defaults to 1e-7.
  5. Decay (decay): This parameter controls the learning rate decay over time. It is typically set to 0, meaning the learning rate remains constant. Note that recent Keras releases have removed this argument in favor of learning-rate schedules and weight_decay.

Here’s an example of how to set these parameters in TensorFlow:

import tensorflow as tf

# Note: the decay argument is only accepted by older Keras optimizers (TensorFlow 2.10
# and earlier); newer releases use learning-rate schedules or weight_decay instead.
adam = tf.keras.optimizers.Adam(
    learning_rate=0.001,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-8,
    decay=0.0
)

The code shows how to create an instance of the Adam optimizer and set the learning_rate, beta_1, beta_2, epsilon, and decay parameters.

Remember that the default values for these parameters are often sufficient for many applications, so you can keep them as they are to start with.
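
If you are happy with the defaults, Keras also lets you pass the optimizer by name when compiling a model, and it will be created with its default settings. A minimal sketch, where the tiny one-layer model is just a placeholder:

import tensorflow as tf

# Passing optimizer='adam' uses Keras's default Adam settings
model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation='sigmoid', input_dim=10)])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])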

Advantages of the Adam Optimizer

The Adam optimizer has several advantages over other optimization algorithms.

  1. Adaptive learning rates: Adam adapts each parameter’s learning rate based on the gradients’ first and second moments. This allows it to automatically adjust the step size for each parameter, making it well-suited for sparse and noisy data.
  2. Faster convergence: Adam combines the benefits of the Adagrad and RMSprop optimizers using adaptive learning rates and momentum. This often leads to faster convergence and better performance than other optimization algorithms.
  3. Robustness to hyperparameters: Adam is relatively insensitive to the choice of hyperparameters such as the learning rate and the momentum terms. This makes it easier to apply and more robust across different data types and architectures.
  4. Suitable for large datasets: Adam is well-suited for large datasets and high-dimensional parameter spaces, as it efficiently computes and stores the first and second moments of the gradients.
  5. Widely used and supported: Adam is widely used and well supported in popular deep learning frameworks such as TensorFlow and PyTorch.

Overall, Adam is a robust optimization algorithm with several advantages over alternative methods, which makes it a popular choice for deep learning applications. However, it is still essential to tune the hyperparameters carefully and monitor the training process for the best results.

Disadvantages of the Adam optimizer

Even though the Adam optimizer has clear strengths, there are also some potential drawbacks to consider:

  1. Memory requirements: Adam requires storing the first and second moments of the gradients for each parameter, which can be memory-intensive for large models with many parameters.
  2. Susceptible to noise: Adam’s adaptive learning rate can make it sensitive to noise in the gradient estimates, especially for sparse data. This can lead to suboptimal convergence or even divergence in some cases.
  3. Biased estimates: Adam’s first and second moment estimates are initialized at zero and are therefore biased towards zero during the early stages of training. The bias-correction terms compensate for this, but convergence can still be affected and may require more iterations to reach the optimal solution.
  4. Hyperparameter sensitivity: While Adam is relatively insensitive to the choice of hyperparameters compared to other optimization algorithms, it still requires careful tuning of the learning rate, beta parameters, and epsilon to ensure optimal performance.
  5. Not guaranteed to converge: Like other optimization algorithms, Adam is not guaranteed to converge to the global optimum and may get stuck in local optima or saddle points.

It’s important to remember that many of these potential problems can be mitigated by tuning the optimizer’s hyperparameters, monitoring the training process, and carefully choosing the learning rate schedule.
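
For example, Adam can be paired with a learning-rate schedule in Keras; the schedule type and the values below are purely illustrative.

import tensorflow as tf

# Decay the learning rate exponentially instead of keeping it fixed
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.001,
    decay_steps=10000,
    decay_rate=0.96
)
adam = tf.keras.optimizers.Adam(learning_rate=schedule)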

Overall, Adam remains a robust and widely used optimization algorithm for deep learning applications. But you might still be wondering what other optimizers you can use.

What are the alternatives?

Depending on the needs of the deep learning application, several optimization algorithms can be used instead of the Adam optimizer. Some common alternatives include:

  1. Stochastic Gradient Descent (SGD): This is a classic optimization algorithm widely used in deep learning. SGD updates the weights by taking a step in the opposite direction of the gradient of the loss function with respect to the weights. While it can be slower to converge than Adam, it has lower memory requirements and is less susceptible to noise.
  2. Adagrad: This optimization algorithm adapts the learning rate of each parameter based on historical gradient information. Adagrad performs well on sparse data and is less sensitive to the choice of hyperparameters than other optimization algorithms.
  3. RMSprop: This optimization algorithm is similar to Adagrad but uses an exponentially decaying average of the past squared gradients to adjust the learning rate. RMSprop is particularly effective for deep learning models with recurrent neural networks.
  4. Adadelta: This optimization algorithm is a variant of RMSprop that adapts the step size using running averages of past parameter updates and past squared gradients, which removes the need to set a learning rate manually. Adadelta can converge faster than RMSprop and is less sensitive to the choice of hyperparameters.
  5. Nadam: This variant of Adam incorporates Nesterov momentum into the optimization algorithm. Nadam can converge faster than Adam on some types of data but may be more sensitive to the choice of hyperparameters.

Overall, the choice of optimization algorithm will depend on the specific requirements of the deep learning application, including the size and complexity of the model, the type of data, and the desired training time.

Experimenting with different optimization algorithms and hyperparameters is often helpful in finding the best combination for a particular application.
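
As a starting point for such experiments, here is a sketch of how the alternatives above are constructed in tf.keras; the learning rates shown are common defaults or starting points, not recommendations.

import tensorflow as tf

# Any of these can be passed to model.compile(optimizer=...) in place of Adam
sgd = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)
adagrad = tf.keras.optimizers.Adagrad(learning_rate=0.01)
rmsprop = tf.keras.optimizers.RMSprop(learning_rate=0.001)
adadelta = tf.keras.optimizers.Adadelta(learning_rate=1.0)
nadam = tf.keras.optimizers.Nadam(learning_rate=0.001)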

Example implementations in Python

1. Keras Adam optimizer

Here’s an example of how to use the Adam optimizer in Keras:

from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam

# Define your model architecture
model = Sequential()
model.add(Dense(64, activation='relu', input_dim=100))
model.add(Dense(1, activation='sigmoid'))

# Compile your model and specify the Adam optimizer
adam = Adam(learning_rate=0.001)
model.compile(loss='binary_crossentropy', optimizer=adam, metrics=['accuracy'])

# Train your model (x_train, y_train, x_val and y_val are assumed to be defined elsewhere)
model.fit(x_train, y_train, epochs=10, batch_size=32, validation_data=(x_val, y_val))

In this example, we first import the necessary Keras modules, including the Adam optimizer from keras.optimizers. Then, we define our model architecture, which consists of a single hidden layer with 64 units and a final output layer with a sigmoid activation function.

Next, we compile the model and specify the Adam optimizer with a learning rate of 0.001. We also specify the binary cross-entropy loss function and accuracy metric.

Finally, we train the model using the fit method and specify the number of epochs, batch size, and validation data.

You can adjust the Adam optimizer’s learning rate and other hyperparameters for your problem.

2. PyTorch Adam optimizer

Here’s an example of how to use the Adam optimizer in PyTorch:

import torch
import torch.nn as nn
import torch.optim as optim

# Define your model architecture
class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.fc1 = nn.Linear(100, 64)
        self.fc2 = nn.Linear(64, 1)
        self.relu = nn.ReLU()
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        x = self.relu(self.fc1(x))
        x = self.sigmoid(self.fc2(x))
        return x

model = MyModel()

# Specify the Adam optimizer
optimizer = optim.Adam(model.parameters(), lr=0.001)

# Define the loss function
criterion = nn.BCELoss()

# Train your model (num_epochs, inputs and labels are assumed to be defined elsewhere)
for epoch in range(num_epochs):
    # Forward pass
    outputs = model(inputs)
    loss = criterion(outputs, labels)

    # Backward and optimize
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

In this example, we first import the necessary PyTorch modules, including the Adam optimizer from torch.optim. Then, we define our model architecture using the nn.Module class and specify the layers, activation functions, and input/output dimensions.

Next, we specify the Adam optimizer by passing the model parameters and learning rate as arguments to the optim.Adam constructor. We also define the binary cross-entropy loss function using the nn.BCELoss class.

Finally, we train the model by iterating over the dataset for several epochs. For each epoch, we perform a forward pass through the network, calculate the loss, perform backpropagation and optimization using the Adam optimizer, and update the model parameters.

You can adjust the Adam optimizer’s learning rate and other hyperparameters for your problem. Also, depending on your specific use case, you may need to adjust the training loop to include additional steps, such as validation and early stopping.
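
As an illustration, a minimal validation pass that builds on the example above could look like this; val_inputs and val_labels are placeholder tensors you would supply from your own validation set.

# Evaluate on held-out data without tracking gradients
model.eval()
with torch.no_grad():
    val_outputs = model(val_inputs)
    val_loss = criterion(val_outputs, val_labels)
model.train()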

3. TensorFlow Adam optimizer

Here’s an example of how to use the Adam optimizer in TensorFlow:

import tensorflow as tf

# Define your model architecture
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_dim=100),
    tf.keras.layers.Dense(1, activation='sigmoid')
])

# Compile your model and specify the Adam optimizer
adam = tf.keras.optimizers.Adam(learning_rate=0.001)
model.compile(loss='binary_crossentropy', optimizer=adam, metrics=['accuracy'])

# Train your model (x_train, y_train, x_val and y_val are assumed to be defined elsewhere)
model.fit(x_train, y_train, epochs=10, batch_size=32, validation_data=(x_val, y_val))

In this example, we first import the necessary TensorFlow modules, including the Adam optimizer from tf.keras.optimizers. Then, we define our model architecture using the tf.keras.Sequential class and specify the layers, activation functions, and input/output dimensions.

Next, we compile the model and specify the Adam optimizer with a learning rate of 0.001. We also specify the binary cross-entropy loss function and accuracy metric.

Finally, we train the model using the fit method and specify the number of epochs, batch size, and validation data.

You can adjust the Adam optimizer’s learning rate and other hyperparameters for your problem. Also, depending on your specific use case, you may want to add steps such as validation and early stopping, which Keras handles through callbacks.
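
For instance, early stopping can be added to the example above with a Keras callback; the patience value here is illustrative.

# Stop training when the validation loss stops improving
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss', patience=3, restore_best_weights=True
)

model.fit(x_train, y_train, epochs=50, batch_size=32,
          validation_data=(x_val, y_val), callbacks=[early_stop])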

Conclusion

The Adam optimizer is a powerful and widely used optimization algorithm for deep learning. It offers several advantages, including adaptive learning rates, faster convergence, and robustness to hyperparameters. However, it also has potential disadvantages, such as memory requirements and susceptibility to noise. Several alternative optimization algorithms, including SGD, Adagrad, RMSprop, Adadelta, and Nadam, can be used depending on the specific requirements of the deep learning application. It is important to carefully tune the hyperparameters and monitor the training process to ensure optimal performance, regardless of the optimization algorithm.

Originally published at https://spotintelligence.com on March 1, 2023.
