TensorBoard Services

Imran us Salam
Published in Red Buffer
Dec 21, 2021 · 11 min read

TensorBoard is used for many things, and each type of task is handled by a different service.

To perform these tasks, TensorBoard provides an API for each one. For example, if you want to visualize your network, TensorBoard provides a Model Graphs API that lets you inspect the details of the architecture used.

Each TensorBoard API has a different implementation, yet they all share the same structure. We will see how to call each API with the correct format and what its results show, looking at some of them in detail.

We will study in detail what each service does and how we can implement them using TensorFlow/Keras.

The blog is presented in the following sections:

  • Using scalars and metrics
  • Using image data
  • Using model graphs
  • Using embedding projector
  • Exploring Machine Learning with TensorBoard

Using Scalars and Metrics

A major part of TensorBoard, and of Machine Learning in general, is the set of metrics and scalars that either define how good your model is or tell you how well your optimization is going.

If we divide them up, metrics are measures of how well your model is performing, while scalars are simply variables that record the values of parameters or hyperparameters, such as a learning rate or a regularization constant. This is particularly useful if you're using a variable learning rate, and it can also help you visualize overfitting and underfitting in your model.
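
Before the full example, here is a minimal sketch of how an arbitrary scalar (a regularization constant in this case) can be written with tf.summary.scalar so that it appears in the Scalars tab. The directory name and the schedule below are invented purely for illustration.

import tensorflow as tf

# Hypothetical example: log a regularization constant as a scalar.
# The log directory and the step schedule are assumptions, not part of the blog's example.
writer = tf.summary.create_file_writer("logs/example_scalars/")
with writer.as_default():
    for step in range(100):
        reg_constant = 0.01 if step < 50 else 0.001  # pretend schedule
        tf.summary.scalar("regularization constant", data=reg_constant, step=step)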

Demonstrating scalars and metrics

In this example, we will convert Celsius to Fahrenheit using Linear Regression and use TensorBoard to log the loss so we can see it visually. Let's start coding and see how well this works.

Here we are importing the necessary packages to run our code. In the function celsius_to_fahrenheit, we write the mathematical formula to convert Celsius degrees to Fahrenheit degrees. Then we create the dataset using the formula and split our dataset into train and validation sets:

import tensorflow as tf
from datetime import datetime
from packaging import version
from tensorflow import keras
import numpy as np

def celsius_to_fahrenheit(c):
    return (c * (9 / 5)) + 32

celsius_points = list(range(-1000, 1000))
fahrenheit_points = [celsius_to_fahrenheit(c) for c in celsius_points]

# First and last 50 points go to validation, the rest to training
val_features = celsius_points[:50] + celsius_points[-50:]
val_labels = fahrenheit_points[:50] + fahrenheit_points[-50:]
train_features = celsius_points[50:-50]
train_labels = fahrenheit_points[50:-50]

Next, we create a TensorBoard callback that logs to a directory:

logdir = "logs/scalars/"
tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)

We then define our model using three dense layers and compile it with a mean squared error loss and the Adam optimizer:

model = keras.models.Sequential([
    keras.layers.Dense(16, input_dim=1, activation='relu'),
    keras.layers.Dense(6, activation='relu'),
    keras.layers.Dense(1)
])

model.compile(
    loss='mse',  # keras.losses.mean_squared_error
    optimizer=keras.optimizers.Adam(learning_rate=1e-3),
)

Now we train the model, passing the TensorBoard callback so the loss is logged:

model.fit(
    train_features,  # input
    train_labels,  # output
    batch_size=len(train_labels),
    verbose=0,  # Suppress chatty output; use TensorBoard instead
    epochs=1000,
    validation_data=(val_features, val_labels),
    callbacks=[tensorboard_callback],
)

After running this example, we can open TensorBoard by writing this command.

tensorboard --logdir logs/scalars

This will give us a URL so we can navigate to it in any browser.

Metrics output displayed as a graph on TensorBoard

We can see that both the train and validation losses are decreasing. This is an example of metrics.

This simple example lets us see the loss converging on our board. Let's see how we can add custom scalars.

Adding a dynamic learning rate

Let’s add a dynamic learning rate.

We first clear the previous logs and create a file writer for the log directory:

!rm -rf logs/scalars
logdir = "logs/scalars/"
file_writer = tf.summary.create_file_writer(logdir)
file_writer.set_as_default()

The lr_schedule function below scales the learning rate with the number of epochs. Inside it, tf.summary.scalar logs a scalar named 'learning rate' with the value returned for each epoch, using the epoch number as the step.

def lr_schedule(epoch_number):
    learning_rate = 0.002
    if epoch_number > 100:
        learning_rate = 0.0002
    tf.summary.scalar('learning rate', data=learning_rate, step=epoch_number)
    return learning_rate

In this line, we are defining a learning rate scheduler that uses the lr_schedule function:

lr_callback = keras.callbacks.LearningRateScheduler(lr_schedule)

Finally, we create a TensorBoard callback using the directory we've defined, rebuild the model, and train it with both callbacks:

tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)

model = keras.models.Sequential([
    keras.layers.Dense(16, input_dim=1, activation='relu'),
    keras.layers.Dense(6, activation='relu'),
    keras.layers.Dense(1)
])

model.compile(
    loss='mse',  # keras.losses.mean_squared_error
    optimizer=keras.optimizers.Adam(),
)

training_history = model.fit(
    train_features,
    train_labels,
    batch_size=len(train_labels),
    epochs=1000,
    validation_data=(val_features, val_labels),
    callbacks=[tensorboard_callback, lr_callback],
)

After running this example, we can open TensorBoard by writing this command. This will give us a URL so we can navigate to it in any browser:

tensorboard --logdir logs/scalars

The output is shown in the following screenshot. Both metrics (the train and validation MSE) and the changing learning rate scalar can be visualized:

Two metrics are shown in TensorBoard

Next, we’ll look at using image data and find out how we can use it and what it is used for.

Using Image Data

One of Machine Learning's biggest domains is Computer Vision and, to cater to that, TensorBoard provides an image service that allows you to visualize your images in TensorBoard.

A big advantage of TensorBoard is that it lets you visualize images from your code. This is useful because in Computer Vision research we need to see images at each step, which lets us visualize the changes. It can also be used to view confusion matrices and heatmaps produced while analysing your dataset.
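
For instance, here is a small sketch of how a matplotlib figure, such as a confusion matrix, can be rendered to a PNG tensor and logged with tf.summary.image. The 2x2 matrix and the log directory below are invented for illustration; they are not part of the example that follows.

import io
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf

def figure_to_image_tensor(figure):
    # Render a matplotlib figure to a 4-D uint8 tensor for tf.summary.image
    buf = io.BytesIO()
    figure.savefig(buf, format="png")
    plt.close(figure)
    buf.seek(0)
    image = tf.image.decode_png(buf.getvalue(), channels=4)
    return tf.expand_dims(image, 0)  # add a batch dimension

# Hypothetical confusion matrix, purely for illustration
conf_matrix = np.array([[50, 2], [5, 43]])
fig, ax = plt.subplots()
ax.imshow(conf_matrix, cmap="Blues")
ax.set_title("Confusion matrix")

writer = tf.summary.create_file_writer("logs/figures/")  # assumed directory
with writer.as_default():
    tf.summary.image("Confusion Matrix", figure_to_image_tensor(fig), step=0)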

Being able to view your images after each epoch is valuable for gauging the progress made during each iteration of a generative model.

Let’s jump to an example where we can see how we use the TensorBoard Image data.

Demonstrating image data handling in TensorBoard

We are going to see an example of Fashion MNIST and how we can view its images using TensorBoard. Fashion MNIST is a dataset with 10 classes of clothing items, such as shirts, pants, sweaters, and shoes.

We are going to import the necessary packages for our code:

import tensorflow as tf
from tensorflow import keras
import numpy as np

Download the Fashion MNIST data from the Keras dataset repository using keras.datasets. The data is already divided into train and test sets, and the labels are integers representing classes:

fashion_mnist = keras.datasets.fashion_mnist
(training_images, training_labels), (val_images, val_labels) = fashion_mnist.load_data()

Create a log directory for TensorBoard and create a file writer object which will be used to write files to TensorBoard:

logdir = "logs/images/"
tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)
file_writer = tf.summary.create_file_writer(logdir)

Normalize the images to the 0–1 range:

training_images = training_images / 255.0
val_images = val_images / 255.0

With the file writer, use tf.summary.image to export images to TensorBoard. We are going to log 50 images from our training dataset, and each image will have three dimensions: width, height, and channels. Since these are grayscale images, each has a shape of 28x28x1, and the whole batch has the shape 50x28x28x1:

with file_writer.as_default():
    images = np.reshape(training_images[:50], (-1, 28, 28, 1))
    tf.summary.image("Points", images, max_outputs=50, step=0)

After running this example, we can open TensorBoard by writing this command. This will give us a URL so we can navigate to it in any browser:

tensorboard --logdir logs/images

The output is shown in the following screenshot:

Images from Training Data visualized in TensorBoard

This was a relatively easy example; now, let’s make a custom callback where we have to change our image after a few iterations.

For this, we are going to visualize the model layer weights after each epoch and see if the weights make any sense to the human eye.

Visualizing model weights

We are going to use the same dataset and the same normalization, but this time with a custom callback. Specifically, we will use the LambdaCallback function to invoke our custom-made function, which extracts the weights of the first layer and reshapes them into TensorBoard's image shape.

We define our model. Make sure you give a name to the layer you want to visualize; I have named it "dense_first", but you can name it anything:

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu', name='dense_first'),
    tf.keras.layers.Dense(10)
])

model.compile(
    optimizer='adam',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy']
)

Create a log directory for TensorBoard and create a file writer object which will be used to write files to TensorBoard:

logdir = "logs/images/"
tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)
file_writer_custom = tf.summary.create_file_writer(logdir)

Now, let's write a custom function that takes the epoch as input and adds an image to TensorBoard with the file writer, using tf.summary.image.

In our function, we extract the layer from the model, and from that layer we extract its weights. These weights form a matrix, so we add a batch dimension and a channel dimension, because this is how TensorBoard expects image input.

Alongside this, we create a LambdaCallback, which allows us to invoke custom callback functions in TensorFlow/Keras:

def log_model_weights(epoch, logs=None):
    with file_writer_custom.as_default():
        print("epoch_finished")
        input_shape_flatten = 28 * 28
        num_neurons = 128
        # The kernel of 'dense_first' is a (784, 128) matrix; add batch and channel dims
        weights = np.reshape(model.get_layer('dense_first').get_weights()[0],
                             (1, input_shape_flatten, num_neurons, 1))
        tf.summary.image("Weights of 1st Dense Layer", weights, step=epoch)

custom_callback = keras.callbacks.LambdaCallback(on_epoch_end=log_model_weights)

Now we train the model, passing the TensorBoard callback to log the loss and accuracy, and our custom callback to log the weights of the first layer as an image:

model.fit(
    training_images,
    training_labels,
    epochs=3000,
    callbacks=[tensorboard_callback, custom_callback],
    validation_data=(val_images, val_labels),
)

After running this example, we can open TensorBoard by writing this command. This will give us a URL so we can navigate to it in any browser:

tensorboard --logdir logs/images

We can now see the weights of our first layer.

The image doesn't mean much to the human eye, but this is how we can visualize the weights of a dense layer; these weights only become meaningful once they are multiplied with the input.

We’ve covered how we can add images to our TensorBoard using custom callback functions. Now, let’s move on to adding Model Graphs/Architecture to visualize them.

Using Model Graphs

The most important parts of building a model are its architecture and how it executes: how the forward pass runs and how the loss backpropagates through the network. Hence, a visual of the entire model is very helpful.

This is also useful if you have imported a model and want to understand its architecture by visualizing its layers.

We will see how we can check the graph of the network.

Demonstrating model graphs

For this demonstration, we are going to perform the same steps as in the Celsius to Fahrenheit example from the Using Scalars and Metrics section, except that we change the path of the log directory.

Create the TensorBoard callback pointing at the new log directory:

logdir = "logs/graph/"
tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)
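
As a side note, graphs are not limited to Keras models: individual tf.functions can also be traced and exported. The sketch below is an illustrative addition and not part of the original example; the function name and the log path are assumptions.

import tensorflow as tf

# Assumed helper; any tf.function would work here.
@tf.function
def celsius_to_fahrenheit_graph(c):
    return (c * (9.0 / 5.0)) + 32.0

graph_writer = tf.summary.create_file_writer("logs/graph/func")  # assumed path
tf.summary.trace_on(graph=True)
celsius_to_fahrenheit_graph(tf.constant([0.0, 100.0]))  # run once to trigger tracing
with graph_writer.as_default():
    tf.summary.trace_export(name="celsius_to_fahrenheit", step=0)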

After running this example, we can open TensorBoard by writing this command. This will give us a URL so we can navigate to it in any browser:

tensorboard --logdir logs/graph

TensorBoard Graph Visualization

This figure shows the model graph that we have created in TensorFlow. It shows each variable, each weight, and every connection between nodes.

We have seen how we can visualize the Model Architecture; we are now going to see how we can project embeddings on our TensorBoard.

Using Embedding Projector

Embeddings are a nice way to capture mathematical relationships in your vocabulary. Non-numerical data often still has relationships that can be described using numbers: for example, the distance between the embeddings for coffee and tea should be much smaller than the distance between the embeddings for coffee and horse.
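
To make that intuition concrete, here is a small sketch using invented vectors (real embeddings are learned, so the numbers below are purely illustrative): similar words should have a higher cosine similarity.

import numpy as np

def cosine_similarity(a, b):
    # Higher values mean the two embedding vectors point in more similar directions
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Made-up 4-dimensional embeddings, for illustration only
coffee = np.array([0.9, 0.1, 0.3, 0.0])
tea    = np.array([0.8, 0.2, 0.4, 0.1])
horse  = np.array([0.0, 0.9, 0.1, 0.8])

print(cosine_similarity(coffee, tea))    # relatively high
print(cosine_similarity(coffee, horse))  # relatively low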

Let’s move on to the demonstration of how we can use embeddings in TensorBoard.

Demonstrating embedding projectors

We will use the example of the IMDB dataset to generate embeddings and then view them in TensorBoard. This dataset contains 25,000 labelled movie reviews in its training split; the labels are based on sentiment, either positive or negative.

Import the necessary packages to run our code:

import os
from tensorboard.plugins import projector
import tensorflow as tf
import tensorflow_datasets as tfds

Load the dataset and divide it into train and validation splits:

(training_dataset, val_dataset), information = tfds.load(
    "imdb_reviews/subwords8k",
    split=(tfds.Split.TRAIN, tfds.Split.TEST),
    as_supervised=True,
    with_info=True,
)

In this step, we extract the encoder from the information variable. The information variable describes the text and label features; we extract the text feature's encoder, which is a SubwordTextEncoder.

encoder = information.features["text"].encoder

Here, we shuffle the datasets and batch them into padded batches of 10 (each batch is padded to the length of its longest review):

training_batches = training_dataset.shuffle(1000).padded_batch(10, padded_shapes=((None,), ()))
val_batches = val_dataset.shuffle(1000).padded_batch(10, padded_shapes=((None,), ()))
next_train_batch, next_train_labels = next(iter(training_batches))

Create an embedding layer using keras.layers.Embedding:

embedding_layer = tf.keras.layers.Embedding(encoder.vocab_size, 16)

Create the full model with the embedding layer first, followed by dense layers. Use accuracy as the metric and compile with the Adam optimizer:

model = tf.keras.Sequential([
    embedding_layer,
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])

model.compile(
    optimizer="adam",
    metrics=["accuracy"],
    loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
)

Train the model:

model.fit(
    training_batches,
    epochs=2,
    validation_data=val_batches,
    validation_steps=15,
)

Create the log directory:

log_dir = 'logs/embeddings/'
if not os.path.exists(log_dir):
    os.makedirs(log_dir)

Create a meta file that contains the subwords and the unknown words:

f = open(os.path.join(log_dir, 'meta.tsv'), "w")
for sw in encoder.subwords:
    f.write(sw + '\n')
subwords_length = len(encoder.subwords)
for i in range(1, encoder.vocab_size - subwords_length):
    f.write('unknown #' + str(i) + '\n')
f.close()

Get the embedding weights and save them as a checkpoint:

embedding_weights = tf.Variable(model.layers[0].get_weights()[0][1:])
tf_checkpoint = tf.train.Checkpoint(embedding=embedding_weights)
tf_checkpoint.save(os.path.join(log_dir, "embeddings_checkpoint.ckpt"))

Create projector config and embeddings and project them using the projector library to the logs of TensorBoard:

projector_config = projector.ProjectorConfig()
projector_embedding = projector_config.embeddings.add()
projector_embedding.tensor_name = "embedding/.ATTRIBUTES/VARIABLE_VALUE"
projector_embedding.metadata_path = 'meta.tsv'
projector.visualize_embeddings(log_dir, projector_config)

After running this example, we can open TensorBoard by writing this command. This will give us a URL so we can navigate to it in any browser.

tensorboard --logdir logs/embeddings

The following image shows the output:

Embeddings being visualized in TensorBoard

We can see that words that are similar to each other sit closer together.

We have now seen how we can add embeddings to the TensorBoard. A major reason to do that is to understand how good your embedding vector is.

We have now covered the basic services with examples. At this point, we should be able to use these services with TensorFlow/Keras. With that, let's move on to some Machine Learning examples where these services are used.

Thank you

References

https://www.tensorflow.org/tensorboard/get_started
