Useful Functions for TensorFlow
TensorFlow is a great library for deep learning with a lot of functionality to offer; it provides many features that make it much easier to develop deep neural networks.
In this post we will look at some useful built-in and user-defined methods that might help you while working with TensorFlow.
Built-in Methods
1. tf.zeros_like
This function takes a tensor as input and returns a tensor of the same shape and type with every value set to zero.
It can be helpful in situations like creating a black image from a given input image.
tensor = tf.constant([[1, 2, 3], [4, 5, 6]])
tf.zeros_like(tensor)  # [[0, 0, 0], [0, 0, 0]]
* You can use tf.zeros if you want to specify the shape explicitly.
* If you want to initialize with ones instead of zeros, use tf.ones_like.
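A minimal sketch tying the three together (this assumes eager execution is enabled, which is the default in TF 2.x, so results can be inspected directly):

```python
import tensorflow as tf

t = tf.constant([[1, 2, 3], [4, 5, 6]])

# Same shape and dtype as t, filled with zeros.
z_like = tf.zeros_like(t)

# Shape given explicitly instead of inferred from a tensor.
z_explicit = tf.zeros([2, 3], dtype=tf.int32)

# Same shape and dtype as t, filled with ones.
o_like = tf.ones_like(t)
```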
2. tf.pad
Adds the specified padding around a tensor with a constant value, increasing the tensor's dimensions.
It can be used to add a border around an image.
t = tf.constant([[1, 2, 3], [4, 5, 6]])
paddings = tf.constant([[1, 1], [2, 2]])
# 'constant_values' defaults to 0.
# rank of 't' is 2.
tf.pad(t, paddings, "CONSTANT")  # [[0, 0, 0, 0, 0, 0, 0],
                                 #  [0, 0, 1, 2, 3, 0, 0],
                                 #  [0, 0, 4, 5, 6, 0, 0],
                                 #  [0, 0, 0, 0, 0, 0, 0]]
3. tf.enable_eager_execution
This lets you run TensorFlow code as you execute it. With eager execution you don’t need to build a graph and run it in a session. You can read more about how to use eager execution here.
*Eager execution must be enabled as the first statement after importing TensorFlow.
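A minimal, version-tolerant sketch (in TF 2.x eager execution is on by default and this function is no longer exposed, so the enabling call is only needed under 1.x):

```python
import tensorflow as tf

# In TF 1.x this must be the first statement after the import;
# TF 2.x runs eagerly by default.
if hasattr(tf, "enable_eager_execution"):
    tf.enable_eager_execution()

# Ops now execute immediately -- no graph building, no Session.
result = tf.multiply(tf.constant(2), tf.constant(3))
```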
User Defined Functions
Here are some functions that I use in my code. These functions certainly make many things easier.
1. Visualize weights of a Convolution Layer
If you want to visualize what the convolution filters look like, you can use the function below.
import math
import numpy as np
import matplotlib.pyplot as plt

def plot_conv_weights(weights, input_channel=0):
    # 'weights' is the filter variable whose values you want to visualize.
    # 'input_channel' is the input channel whose filters are plotted.
    # 'sess' is assumed to be an active tf.Session.
    w = sess.run(weights)
    w_min = np.min(w)
    w_max = np.max(w)
    # Number of filters in the layer (last dimension of the weights).
    num_filters = w.shape[3]
    # Number of grids needed for a roughly square plot.
    num_grids = math.ceil(math.sqrt(num_filters))
    fig, axes = plt.subplots(num_grids, num_grids)
    fig.subplots_adjust(hspace=0.3, wspace=0.3)
    for i, ax in enumerate(axes.flat):
        # Only plot the valid filters.
        if i < num_filters:
            img = w[:, :, input_channel, i]
            ax.imshow(img, vmin=w_min, vmax=w_max,
                      interpolation='nearest', cmap='seismic')
            ax.set_title("Filter {0}".format(i + 1))
        ax.set_xticks([])
        ax.set_yticks([])
    plt.show()
2. Getting the weights of a Convolution layer
To get the values of a layer's weights, you can use the function below by simply passing the layer's name.
def get_weights_variable(layer_name):
    with tf.variable_scope(layer_name, reuse=True):
        variable = tf.get_variable('kernel')
    return variable
3. Get the output of a Convolution layer
If you want to see how the filters get activated, or what the output of a layer looks like, you can use the function below.
def plot_conv_output(layer, image):
    # 'x' is assumed to be the input placeholder of the network.
    feed_dict = {x: [image]}
    values = sess.run(layer, feed_dict)
    # Number of filters used in the convolution layer.
    num_filters = values.shape[3]
    # Number of grids to plot: rounded-up square root of the number of filters.
    num_grids = math.ceil(math.sqrt(num_filters))
    # Create figure with a grid of sub-plots.
    fig, axes = plt.subplots(num_grids, num_grids)
    # Plot the output images of all the filters.
    for i, ax in enumerate(axes.flat):
        # Only plot the images for valid filters.
        if i < num_filters:
            # Get the output image of the i'th filter.
            img = values[0, :, :, i]
            ax.imshow(img, interpolation='nearest', cmap='binary')
        ax.set_xticks([])
        ax.set_yticks([])
    plt.show()
These are a few functions that I thought might be useful while working with TensorFlow. If you use functions that you think could be helpful, please share them in a comment or mail me.