Callbacks

The AI Guy · Published in Nerd For Tech · Mar 5, 2022

It’s been a long time since my last blog came out; I was caught up in some personal work and could not keep the series going. But here I am, back with an exciting topic for the Deep Learning series we started. Let’s see what Callbacks are.


Callbacks are functions that are executed at specific points during the training of a Deep Learning model.

If you have read my previous articles, you know the basic steps involved in training a Deep Learning model. Training can be tedious as the data size grows and can take much longer than expected. Usually, we provide all the required parameters before training begins and never pause or interrupt the process. But that approach does not scale: if a particular setting does not produce the results we need, we cannot afford to train the whole model again from scratch. So, we use Callbacks to intervene at different points during training.

As the name suggests, a Callback calls a particular function back during training and adjusts the process according to the desired change. Isn’t that interesting and helpful? I find these functions super beneficial, and I am sure you will too.

Using Callbacks is straightforward: you first define the particular Callback and then pass it in when you fit the respective model. TensorFlow ships with many valuable Callbacks out of the box. Let’s see each one in detail.

Types of TensorFlow Callbacks

Early Stopping

As the name implies, we use this callback to halt training before it completes. It can be set to stop when a monitored performance metric no longer improves. For example, if you train a deep learning model and see no improvement in the metrics you’re tracking, some hyperparameter tuning is likely required, and this callback will halt the training and save us time. Its important arguments are listed below, followed by a minimal sketch:

  • monitor (the quantity to be monitored).
  • min_delta (the minimum change in the monitored quantity to qualify as an improvement).
  • patience (the number of epochs with no improvement after which training is stopped).
  • baseline (a baseline value for the monitored quantity; training stops if the model shows no improvement over it).
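
Here is a minimal sketch of how this callback might be defined; the monitored metric and the values are illustrative, not prescriptive:

```python
import tensorflow as tf

# Stop training once validation loss stops improving.
early_stopping = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",  # quantity to be monitored
    min_delta=0.001,     # smallest change that counts as an improvement
    patience=5,          # epochs with no improvement before stopping
)
```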

Learning Rate Scheduler

One of the most common jobs during training is adjusting the learning rate. We usually lower the learning rate as the model approaches the loss minimum (the best fit) to ensure better convergence. With this callback we can schedule the learning rate and automate those adjustments within training. It takes only one argument: schedule (a function that takes the current epoch and learning rate as input and returns a new learning rate).

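A sketch of a scheduler function and the callback that wraps it; the decay rule here (hold the rate for 10 epochs, then decay it exponentially) is just an example:

```python
import math

import tensorflow as tf

def schedule(epoch, lr):
    # Keep the initial learning rate for the first 10 epochs,
    # then decay it exponentially after that.
    if epoch < 10:
        return lr
    return lr * math.exp(-0.1)

lr_scheduler = tf.keras.callbacks.LearningRateScheduler(schedule, verbose=1)
```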

Model Checkpoint

This specific kind is used to save the model at various intervals during training.
Put another way, we can use it to save the model weights at any stage we desire, and those weights can then be reused in future computations.
If the model performed well the first time, we won’t have to train it again; we can simply reload the saved values. Its critical arguments are filepath (the full path where the model is saved), monitor (the metric to be observed), and save_best_only (if True, keeps only the best model as judged by the monitored metric).

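A sketch of how this callback might be set up; the path is just a placeholder:

```python
import tensorflow as tf

# Keep only the best model seen so far, judged by validation loss.
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    filepath="checkpoints/best_model.h5",  # placeholder path
    monitor="val_loss",
    save_best_only=True,
)
```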

CSV Logger

The CSVLogger callback is another helpful one; it saves the logs of each training epoch to a CSV file that can be used afterwards. It is simple to use, with only three arguments: filename (the name of the CSV file where the logs should be saved), separator (the string used to separate elements in the CSV file), and append (if True, appends to the file if it already exists; if False, overwrites it).

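A sketch, with a placeholder filename:

```python
import tensorflow as tf

# Write per-epoch logs (loss and metrics) to a CSV file.
csv_logger = tf.keras.callbacks.CSVLogger(
    filename="training_log.csv",  # placeholder filename
    separator=",",
    append=False,  # overwrite the file if it already exists
)
```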

Custom Callback

Last but not least, and perhaps the most useful: TensorFlow lets us design Custom Callbacks for any specific purpose. For instance, while training on image data, we can create a callback that runs predictions after each epoch to see how our model is performing.

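A sketch of a custom callback for the example above, built by subclassing tf.keras.callbacks.Callback; sample_images is a hypothetical held-out batch of images to predict on:

```python
import tensorflow as tf

class PredictionLogger(tf.keras.callbacks.Callback):
    """Runs the model on a fixed batch of images after every epoch."""

    def __init__(self, sample_images):
        super().__init__()
        self.sample_images = sample_images  # hypothetical held-out batch

    def on_epoch_end(self, epoch, logs=None):
        preds = self.model.predict(self.sample_images, verbose=0)
        print(f"Epoch {epoch + 1}: predicted classes {preds.argmax(axis=-1)}")
```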

Defining all these callbacks is not enough; we must also tell the model to execute them during training. All training-related instructions go through the model.fit function, so we pass the callbacks we defined to that function via its callbacks argument.

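A sketch of the final fit call; model, the training arrays, and the validation data are assumed to be defined elsewhere:

```python
model.fit(
    x_train, y_train,
    validation_data=(x_val, y_val),
    epochs=50,
    callbacks=[
        early_stopping,
        lr_scheduler,
        checkpoint,
        csv_logger,
        PredictionLogger(sample_images),
    ],
)
```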

Conclusion

We discussed Callbacks and how they play a vital role while training Deep Learning models. We then looked at some useful callbacks in detail, understood how they work, and saw the important arguments for each. I hope you now know what callbacks are and how we can use them.

From now on, I will try to be regular with the blogs and will soon start a new series, so stay connected for more, and Happy Learning!!
