Putting your idea from paper to production
TensorFlow (TF) was introduced to the community in 2015. Since then, it has become one of the most popular deep learning libraries. Programmers, data scientists, and researchers can use it to create models out of the box, either through its high-level APIs or by writing things from scratch.
Some of the concepts behind it, such as graph construction and sessions, can be tricky. For that reason, over the past years Google has been working to make TensorFlow more user-friendly. In 2019 the wait ended: TensorFlow 2.0 was released, and with it some big news! In this post, I'll summarize some of the biggest changes and explain the new best practices.
TensorFlow 2 has become more developer-friendly and easy to use. Keras is now the recommended high-level API, and all of the optimizers, metrics, losses, and layers have been unified under tf.keras, which offers a cleaner way to work. Debugging has become easier because of the eager execution of tensors. The introduction of the tf.function decorator allows normal Python functions to be optimized by TensorFlow for performance. The API endpoints have also been cleaned up for consistency across TensorFlow.
TensorFlow 2 allows custom code for layers, loss functions, training loops, and more. It has also taken scalability and deployment into account. Overall, TensorFlow has grown into a developer-friendly yet powerful deep learning library.
Finally, eager by default
TensorFlow 2 has enabled eager execution by default. In the earlier versions of TensorFlow, a session had to be created even for the simplest of tasks like printing a tensor.
Now that eager execution is the default, the execution pattern is much more intuitive and "Pythonic": no explicit graph construction and no sessions for everyday work.
NumPy is a popular Python library focusing on mathematical operations involving matrices. Owing to eager execution, TensorFlow 2 feels a lot more like NumPy. Also, the numpy() method on tensors allows them to be converted into NumPy arrays.
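A minimal sketch of both points, with arbitrary example values: operations run immediately without a session, and numpy() hands back a plain NumPy array.

```python
import numpy as np
import tensorflow as tf

# With eager execution (the default in TF 2), ops run immediately;
# no graph or session is required.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.ones_like(a)
c = a + b  # evaluated right away, like a NumPy expression

# The numpy() method converts a tensor into a NumPy array.
arr = c.numpy()
print(arr)  # [[2. 3.] [4. 5.]]
```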
The decorator (tf.function)
This decorator speeds up a function's execution on subsequent calls, letting us keep the performance benefits of the graph model underlying TensorFlow. The decorator tells TensorFlow to trace the function's computations into a graph and optimize them.
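As a sketch, with made-up shapes: decorating a plain Python function with tf.function makes TensorFlow trace it into a graph on the first call and reuse that graph on later calls with matching input signatures.

```python
import tensorflow as tf

# tf.function traces this Python function into a graph on first call.
@tf.function
def dense_step(x, w, b):
    # A simple dense layer computation: relu(x @ w + b)
    return tf.nn.relu(tf.matmul(x, w) + b)

x = tf.random.normal((4, 3))
w = tf.random.normal((3, 2))
b = tf.zeros((2,))
y = dense_step(x, w, b)  # first call traces; subsequent calls reuse the graph
```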
TensorFlow 1.X suffered from an enormous amount of duplicated code, particularly in the contrib module, which was maintained by the community. It was common to see the exact same activation function, for example, implemented in several different places.
Now the TensorFlow API has been cleaned up: many of those APIs were removed, while others were replaced by their TF 2.X equivalents (e.g., tf.keras.optimizers). For a detailed list of all the changes, refer to the TensorFlow namespaces documentation.
TensorFlow 2 now recommends Keras as its official high-level API. Designing models in TensorFlow now ranges from simply plugging in built-in layers to writing everything from scratch for research applications.
The Sequential API is used for making linear models with a stack of layers, one on top of the other. Different layers, such as Dense, Flatten, and Conv2D, are added to the model using the add() method. Once you have created your model, you can print its summary. The Sequential API is the most commonly used API for creating models: the basic idea is to stack a bunch of layers into a model that is later compiled and trained to draw inferences.
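A minimal Sequential sketch, assuming 28x28 inputs (as in MNIST) and 10 output classes; layers are stacked with add() and the architecture is inspected with summary().

```python
import tensorflow as tf

# Stack layers one on top of the other with add().
model = tf.keras.Sequential()
model.add(tf.keras.Input(shape=(28, 28)))          # declares the input shape
model.add(tf.keras.layers.Flatten())               # 28x28 -> 784
model.add(tf.keras.layers.Dense(128, activation="relu"))
model.add(tf.keras.layers.Dense(10, activation="softmax"))

# Inspect the resulting architecture.
model.summary()
```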
The Sequential API builds a linear model, whereas the Functional API is used to build non-linear models. The architecture of these models resembles a directed acyclic graph (DAG). They can take multiple inputs, produce multiple outputs, and share layers.
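A small functional sketch with two hypothetical inputs (the names and shapes are made up for illustration) merged into one output; this branching topology cannot be expressed with Sequential.

```python
import tensorflow as tf

# Two separate inputs, each with its own branch.
in_a = tf.keras.Input(shape=(8,), name="features_a")
in_b = tf.keras.Input(shape=(4,), name="features_b")
xa = tf.keras.layers.Dense(16, activation="relu")(in_a)
xb = tf.keras.layers.Dense(16, activation="relu")(in_b)

# Merge the branches and produce a single output.
merged = tf.keras.layers.concatenate([xa, xb])
out = tf.keras.layers.Dense(1, activation="sigmoid")(merged)

model = tf.keras.Model(inputs=[in_a, in_b], outputs=out)
```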
Subclassing allows us to write everything from scratch and gives finer control over every aspect of creating a model. The framework provides a class whose constructor defines the layers to be used, while the call method puts these layers together in order. This approach is recommended if you want to understand how things work under the hood.
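A sketch of the subclassing pattern, with arbitrary layer sizes: layers are created in the constructor, and call() wires them together.

```python
import tensorflow as tf

class SmallClassifier(tf.keras.Model):
    def __init__(self, num_classes=10):
        super().__init__()
        # Layers are defined in the constructor...
        self.hidden = tf.keras.layers.Dense(32, activation="relu")
        self.out = tf.keras.layers.Dense(num_classes, activation="softmax")

    def call(self, inputs):
        # ...and put together, in order, in call().
        x = self.hidden(inputs)
        return self.out(x)

model = SmallClassifier()
y = model(tf.random.normal((2, 16)))  # batch of 2, 16 features each
```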
Using the Fit method
Fit is the most commonly used method for training a model. It works for models created using any of the above-discussed strategies, and is best suited for datasets that are small enough to fit into memory.
When using the fit method, you define the loss function, the optimizer, and the metrics to be observed during training.
Various graphs are used to monitor the progress of a model during training. Alternatively, callbacks can be used to monitor the state of various evaluation metrics and make decisions such as stopping the training, saving the model periodically, scheduling the learning rate, or visualizing training progress. They are called at different points during training, such as the start of an epoch or the end of a batch, and are passed as a list to the fit method.
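A compile-and-fit sketch on synthetic data (the shapes, epochs, and labels here are invented for illustration), with an EarlyStopping callback passed as a list to fit():

```python
import numpy as np
import tensorflow as tf

# Synthetic data stands in for a real dataset.
x = np.random.rand(256, 8).astype("float32")
y = (x.sum(axis=1) > 4.0).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Loss, optimizer, and metrics are declared in compile().
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Callbacks go in as a list; EarlyStopping halts training when the
# monitored metric stops improving.
history = model.fit(
    x, y,
    epochs=5,
    batch_size=32,
    validation_split=0.2,
    callbacks=[tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=2)],
    verbose=0,
)
```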
Using Train on Batch
Train on batch works at the batch level: it takes a single batch as input and updates the model parameters after performing backpropagation, which lets us do additional work between batches. One use case is updating a pre-trained model on a single new batch of samples collected later. It is also used with stateful LSTMs, where the LSTM state usually needs to be reset after each series of data.
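A sketch of the batch-level workflow, using random batches purely for illustration: train_on_batch performs one gradient update per call, leaving room for custom logic in between.

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="sgd", loss="mse")

# Each call performs a single gradient update on one batch.
for _ in range(3):
    xb = np.random.rand(8, 4).astype("float32")
    yb = np.random.rand(8, 1).astype("float32")
    loss = model.train_on_batch(xb, yb)
    # Anything can happen here between batches: logging, state
    # resets for stateful layers, fetching the next batch, etc.
```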
Writing Custom Training Loops
The fit function abstracts away a lot of the details of training a model. However, if needed, TensorFlow 2 lets you inspect the gradients of the network using GradientTape and see how the weights are updated with those gradients. In short, it gives you finer control over how the weights are updated. A detailed example of a custom training loop is available here.
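A minimal custom-loop sketch on random data: GradientTape records the forward pass, tape.gradient computes the gradients, and you decide how they are applied to the weights.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)
loss_fn = tf.keras.losses.MeanSquaredError()

x = tf.random.normal((16, 4))
y = tf.random.normal((16, 1))

for step in range(5):
    # GradientTape records operations for automatic differentiation.
    with tf.GradientTape() as tape:
        pred = model(x, training=True)
        loss = loss_fn(y, pred)
    grads = tape.gradient(loss, model.trainable_variables)
    # You control exactly how gradients update the weights.
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
```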
Scalability is crucial when it comes to training deep learning models. TensorFlow 2 allows us to scale up the learning task for faster training without significant changes to the code: the training process can be extended to multiple GPUs, TPUs, and machines. Multiple strategies have been defined to meet different use cases, which we've covered here.
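As one example, a sketch with MirroredStrategy, which replicates the model across all local GPUs (and falls back to a single device when none are available): only model construction moves inside the strategy scope, while the rest of the code stays the same.

```python
import tensorflow as tf

# MirroredStrategy handles synchronous training across local devices.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

# Only model creation and compilation need to live inside the scope.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(8,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
# model.fit(...) then proceeds exactly as in the single-device case.
```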
Once a model has been trained, it needs to be saved in order to be used in production. In TensorFlow 2, saving has been standardized around the SavedModel format, which stores the complete model along with its weights. The SavedModel format makes it easier to deploy models using TensorFlow.js, TensorFlow Lite, or TensorFlow Serving.
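A save-and-reload sketch, writing to a temporary directory for illustration: the exported directory contains the graph (saved_model.pb) plus the variables, and can be handed to Serving, Lite, or TensorFlow.js converters.

```python
import tempfile
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])

# Export the whole model (graph + weights) in the SavedModel format.
export_dir = tempfile.mkdtemp()  # stand-in for a real model directory
tf.saved_model.save(model, export_dir)

# Reload it later, e.g. in a serving process.
restored = tf.saved_model.load(export_dir)
```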
Using TensorFlow Serving
TensorFlow Serving allows machine learning models to be served through REST or gRPC (Remote Procedure Call) client APIs. Serving is made easier in TensorFlow by the official Docker image. Extensive documentation is available in the official docs here.
Some best practices you should keep in mind
Refactor your code as small functions
TensorFlow 1.X followed a programming model in which the whole computation graph was defined up front and only selected tensors were evaluated, via session runs. In TensorFlow 2.X, the paradigm is different: your code is expected to be written as small functions that are called whenever required. This doesn't mean every such function needs to be decorated with tf.function; reserve tf.function for high-level computations, for example a model training step or a feedforward pass of the network.
Leverage Keras and models for variables management
Keras is a pretty straightforward tool for model training, mainly because it is Keras that manages the available trainable_variables for you, instead of you tracking them yourself as in TensorFlow 1.X.
Combine tf.data.Datasets and @tf.function
tf.data.Dataset is the best way to iterate over training data that does not fit in memory. Datasets are iterables (not iterators) and work like any other Python iterable while in eager mode. You get the full benefit of their asynchronous prefetching/streaming features by wrapping your training step in tf.function(), which replaces the Python iteration with the equivalent graph operations. When using Keras's model.fit(), you don't even need to worry about iterating the dataset yourself.
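A sketch of this pattern with synthetic data (sizes chosen arbitrarily): the pipeline shuffles, batches, and prefetches, and fit() consumes the dataset directly.

```python
import numpy as np
import tensorflow as tf

# Synthetic data; in practice this would stream from disk.
x = np.random.rand(128, 8).astype("float32")
y = np.random.rand(128, 1).astype("float32")

# Build the input pipeline: prefetch() overlaps data preparation
# with model execution.
dataset = (
    tf.data.Dataset.from_tensor_slices((x, y))
    .shuffle(128)
    .batch(16)
    .prefetch(tf.data.AUTOTUNE)
)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# fit() iterates the dataset for you; no manual loop required.
model.fit(dataset, epochs=2, verbose=0)
```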
Compatibility and Continuity
Now you are probably wondering “I’ve already developed my models with TensorFlow 1.X, how can I ensure compatibility?”
To make life easier for TensorFlow's users, a code conversion script has been made available that upgrades your TF 1.X code to TF 2.0. Although it can't convert every single line of code, it can be really helpful.
You can check more details here.
TensorFlow 2.X undoubtedly introduces a more user-friendly interface while keeping some of the benefits of graph computation. Besides that, it also takes scalability, data science usability, and deployment into account. Goodbye sessions, hello tf.keras, and thank goodness we have eager execution by default.
Over the course of the next blog posts, we will cover some other frameworks that TensorFlow 2.X has brought to the development community, such as TensorFlow Extended (TFX), TensorFlow Privacy, and TensorFlow Probability.
In case you want to learn more about TF 2.X, here are some recommendations: