How to visualize feature vectors with sprites and TensorFlow’s TensorBoard

Andrew B. Martin · Looka Engineering · Jun 26, 2019
t-SNE of XO data

Introduction

This post will show you how to take a set of images and image vectors, and prepare them for visualization in TensorFlow’s TensorBoard.

Imagine I’ve been working with an image dataset (doesn’t require much stretch of the imagination…). Maybe I’m using it for classification, maybe for training a generative model, or maybe to predict daily rainfall in Antigonish, Nova Scotia. For this post, what I’m using the images for doesn’t matter much. What matters is that I’ve passed my images through a model that maps them to an embedding space, and now I have embedded (aka vector) representations of my images.

Now, with these image vectors, I want to get a sense of what the embedding space looks like. There are a number of ways to do this, but one quick option is TensorFlow’s TensorBoard. It gives me beautiful visualizations, and I can run 2D/3D PCA and t-SNE on my image vectors directly in my browser. The demo version of TensorBoard even has UMAP 😮. TensorBoard comes with TensorFlow, so you can install it by following the TensorFlow installation guide.

TensorBoard example showing UMAP: http://projector.tensorflow.org/

When I first tried to do this, I realized there aren’t a lot of good resources to guide someone through preparing images for TensorBoard. There’s a GitHub issue, and another GitHub issue saying there’s no code example. Even though the second link has some resources now, they’re still not easy to access. I hope this post gives a straightforward walkthrough of how to visualize image feature vectors in TensorBoard.

In the spirit of straightforwardness, you can find a Jupyter notebook here with the code, in case you don’t care about a walkthrough.

The code

In the next few paragraphs I’ll describe how to go from images and image vectors to a representation that can be visualized in TensorBoard. For this to work, the model I’m getting the image vectors from doesn’t have to be in TensorFlow. It can be PyTorch, Chainer, whatever, as long as the vectors can be saved as numpy arrays. I’m going to first create a sprite image for TensorBoard, and then save my image vectors as TensorFlow variables so that TensorBoard can read them and associate them with the sprite image.

Create the sprite image with the following function:
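Here’s a minimal sketch of that function (the full version is in the notebook linked above). It assumes the images arrive as a single numpy array of shape (N, H, W) for grayscale or (N, H, W, 3) for RGB, with pixel values in [0, 255]:

import numpy as np

def create_sprite(images):
    """Tile a batch of images into one square sprite image."""
    # Block 1: make every image a 3-channel RGB image
    # (assumes grayscale (N, H, W) or RGB (N, H, W, 3) input).
    if images.ndim == 3:
        images = np.tile(images[..., np.newaxis], (1, 1, 1, 3))

    # Block 2: pad the batch with blank images so it fills an
    # n x n grid, where n = ceil(sqrt(N)).
    n = int(np.ceil(np.sqrt(images.shape[0])))
    padding = ((0, n ** 2 - images.shape[0]), (0, 0), (0, 0), (0, 0))
    images = np.pad(images, padding, mode='constant', constant_values=0)

    # Block 3: tile the individual images into the square sprite.
    images = images.reshape((n, n) + images.shape[1:])   # (n, n, H, W, 3)
    images = images.transpose((0, 2, 1, 3, 4))           # (n, H, n, W, 3)
    return images.reshape((n * images.shape[1], n * images.shape[3], 3))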

Looking at the code, the first block adjusts all the images to be 3-channel RGB images. The second block pads the set of images: a sprite image should be square, so we compute the padding needed to arrange the images into a square grid. The third block tiles the individual images into the square sprite image.

Using this function in an example, it might look something like:
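For instance, with a hypothetical images.npy holding a batch of 28x28 grayscale images, you could build the sprite and save it with Pillow:

import os
import numpy as np
from PIL import Image

os.makedirs('my-log-dir', exist_ok=True)

# Hypothetical input: a (N, 28, 28) array of grayscale images in [0, 255].
images = np.load('images.npy')

sprite = create_sprite(images)
Image.fromarray(sprite.astype(np.uint8)).save('my-log-dir/sprite.png')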

Now I’ll load the feature vectors, generate metadata, and save them for loading into TensorBoard:
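Here’s a sketch of that step using the TensorFlow 1.x projector API (current when this post was written); features.npy and labels.npy are hypothetical stand-ins for wherever your vectors and labels live:

import os
import numpy as np
import tensorflow as tf
from tensorflow.contrib.tensorboard.plugins import projector

LOG_DIR = 'my-log-dir'
IMG_SIZE = 28  # height/width of one image inside the sprite

os.makedirs(LOG_DIR, exist_ok=True)

# Hypothetical inputs: (N, D) feature vectors and one label per image.
features = np.load('features.npy')
labels = np.load('labels.npy')

# Write one metadata row per image; TensorBoard shows these as point labels.
with open(os.path.join(LOG_DIR, 'metadata.tsv'), 'w') as f:
    for label in labels:
        f.write('{}\n'.format(label))

# Save the feature vectors as a TensorFlow variable checkpoint.
embedding_var = tf.Variable(features, name='features')
with tf.Session() as sess:
    sess.run(embedding_var.initializer)
    saver = tf.train.Saver([embedding_var])
    saver.save(sess, os.path.join(LOG_DIR, 'features.ckpt'))

# Add an embedding to the projector config and attach the metadata and sprite.
config = projector.ProjectorConfig()
embedding = config.embeddings.add()
embedding.tensor_name = embedding_var.name
embedding.metadata_path = 'metadata.tsv'
embedding.sprite.image_path = 'sprite.png'
embedding.sprite.single_image_dim.extend([IMG_SIZE, IMG_SIZE])
projector.visualize_embeddings(tf.summary.FileWriter(LOG_DIR), config)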

The code should be easy to follow. I create a metadata file, then add an embedding to the TensorBoard config, and attach the metadata and sprite to the embedding.

Visualizations

Then I’ll fire up TensorBoard and watch the magic happen.

$ tensorboard --logdir my-log-dir

2D and 3D PCA projections of the image vectors in TensorBoard

Conclusion

Thanks for reading. I’ve shown you how to take feature vectors and associated images and prepare them for visualization in TensorBoard. I hope you’ve enjoyed the post. You can find a working example (along with data!) on GitHub. Don’t hesitate to reach out with any questions.

For more content, you can follow me on Twitter: @andrewbrownmart
