Saving, Freezing, Optimizing for Inference, and Restoring TensorFlow Models
Hi DL lovers! I hope you enjoyed my last two articles. This is the last article of the TF_CNN trilogy. We have trained our model, and now we want to save it for deployment.
How to save it?
Saving the model means saving all the values of the parameters and the graph.
We save the model with saver.save(sess, './tensorflowModel.ckpt').
This produces four files:
1. tensorflowModel.ckpt.meta:
TensorFlow stores the graph structure separately from the variable values. The .ckpt.meta file contains the complete graph: GraphDef, SaverDef, and so on.
2. tensorflowModel.ckpt.data-00000-of-00001:
This contains the actual values of the variables (weights, biases, and other trained parameters).
3. tensorflowModel.ckpt.index:
This is a table where each key is the name of a tensor and its value is a serialized BundleEntryProto. The BundleEntryProto holds the tensor's metadata: which of the "data" files contains the tensor's content, the offset into that file, a checksum, some auxiliary data, and so on.
4. checkpoint:
This records the checkpoint bookkeeping information, such as the model ckpt file name and path.
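The save step can be sketched end to end with a minimal toy graph (a stand-in for the article's CNN; the checkpoint file names follow the article, and `tf.compat.v1` is used so the TF 1.x-style session code also runs under TensorFlow 2):

```python
import os
import tempfile
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()
tf.reset_default_graph()

# A toy variable so the checkpoint has something to store.
w = tf.get_variable("w", shape=[2, 2], initializer=tf.zeros_initializer())
saver = tf.train.Saver()

ckpt_dir = tempfile.mkdtemp()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Writes the .meta, .data-*, .index, and checkpoint files.
    saver.save(sess, os.path.join(ckpt_dir, "tensorflowModel.ckpt"))

print(sorted(os.listdir(ckpt_dir)))
```

Listing the directory afterwards shows exactly the four files described above.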
We also created a protocol buffer file, tensorflowModel.pbtxt:
tf.train.write_graph(sess.graph.as_graph_def(), '.', 'tensorflowModel.pbtxt', as_text=True)
With as_text=True the file is human readable; if you want the binary form instead, pass as_text=False.
tensorflowModel.pbtxt: This holds a network of nodes, each representing one operation, connected to each other as inputs and outputs. We will use it for freezing our graph. You can also open this file to check whether any nodes are missing, which is useful for debugging.
What is the difference between .meta files and .pbtxt files? You can say .pbtxt files are human readable whereas .meta files are not, but if you set as_text=False the .pbtxt is no longer human readable either. They are still different: a .meta file holds more than just the structure of the graph, namely MetaInfoDef, GraphDef, SaverDef, and CollectionDef, whereas a .pbtxt file holds only the structure of the graph.
More on metagraphs and .pbtxt files
Freezing the graph
Why do we need it?
When we need to keep all the values of the variables and the graph in a single file, we do it by freezing the graph.
To do this, we first import freeze_graph:
from tensorflow.python.tools import freeze_graph
Optimizing for inference:
To reduce the amount of computation needed when the network is used only for inference, we can remove the parts of a graph that are only needed for training. For example:
> Removing operations used only for training like checkpoint saving.
> Stripping out parts of the graph that are never reached.
> Removing debug operations like CheckNumerics.
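As a sketch, optimize_for_inference_lib (from the same tensorflow.python.tools package as freeze_graph) performs these clean-ups given the input and output node names. The toy graph below stands in for a frozen model, and the node names follow this article:

```python
import tensorflow.compat.v1 as tf
from tensorflow.python.tools import optimize_for_inference_lib

tf.disable_eager_execution()
tf.reset_default_graph()

# A toy graph standing in for the frozen model.
x = tf.placeholder(tf.float32, [None, 2], name="inputTensor")
with tf.name_scope("output"):
    y = tf.nn.softmax(tf.identity(x), name="softmax")
graph_def = tf.get_default_graph().as_graph_def()

# Strips unreachable nodes and training-only ops between input and output.
optimized = optimize_for_inference_lib.optimize_for_inference(
    graph_def,
    ["inputTensor"],                # input node names
    ["output/softmax"],             # output node names
    tf.float32.as_datatype_enum)    # dtype of the input placeholder
print(len(optimized.node), "<=", len(graph_def.node))
```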
Restoring the model
In order to restore the model, we first call tf.train.import_meta_graph.
This recreates the graph; then we can load into it the values of the parameters that we trained.
Note: We must always create the graph before we load the values of the parameters.
Using the restored model for prediction
So far we have learned how to save and restore a model. Now let's see how we can use the model for predictions after restoring it.
If you don't know the input and output nodes of your model, you can find them using:
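The original snippet is not shown here; one common way (a sketch, not necessarily the snippet the author used) is to list the operation names in the graph and look for the placeholders and final ops. The toy graph below stands in for a restored model:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()
tf.reset_default_graph()

# A toy graph standing in for the restored model.
x = tf.placeholder(tf.float32, [None, 2], name="inputTensor")
with tf.name_scope("output"):
    y = tf.nn.softmax(tf.identity(x), name="softmax")

# Print every operation name; inputs are usually Placeholder ops,
# outputs are usually the last ops in the list.
node_names = [op.name for op in tf.get_default_graph().get_operations()]
print(node_names)
```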
Now that we know our input and output tensors, in our case they are:
input: inputTensor and dropout_keep_prob
output: output/softmax
Then we can feed our test data to the input node, and the result will be a dense matrix of shape {number of test samples × number of classes}. Each sample is assigned to the class with the highest score for that sample. We then compare the predicted class with the expected class and compute the accuracy percentage.
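The prediction step can be sketched as follows, with a toy softmax graph standing in for the restored model. The node names inputTensor, dropout_keep_prob, and output/softmax follow the article; the test data and labels are made up, and dropout is set to 1.0 at inference time:

```python
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()
tf.reset_default_graph()

# Toy graph standing in for the restored model.
x = tf.placeholder(tf.float32, [None, 2], name="inputTensor")
keep_prob = tf.placeholder_with_default(1.0, [], name="dropout_keep_prob")
with tf.name_scope("output"):
    logits = tf.nn.softmax(tf.identity(x), name="softmax")

# Fetch the tensors by the names we found above.
graph = tf.get_default_graph()
input_t = graph.get_tensor_by_name("inputTensor:0")
keep_t = graph.get_tensor_by_name("dropout_keep_prob:0")
output_t = graph.get_tensor_by_name("output/softmax:0")

test_x = np.array([[0.1, 0.9], [0.8, 0.2]], dtype=np.float32)
expected = np.array([1, 0])  # made-up expected classes
with tf.Session() as sess:
    # scores has shape {number of test samples x number of classes}.
    scores = sess.run(output_t, {input_t: test_x, keep_t: 1.0})

predicted = np.argmax(scores, axis=1)  # class with the highest score per sample
accuracy = float(np.mean(predicted == expected)) * 100
print(predicted, accuracy)
```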
Note: If you are dealing with text data, then when converting words to vectors (word2vec) you must map the test-data words to the same IDs that were given to those words during training.
Thank you! Do comment if you didn’t understand anything or want any further explanation.