Eager Execution in TensorFlow: A more Pythonic way of building models

Suransh Chopra
Published in Coding Blocks
Sep 6, 2018

Created by the Google Brain team, TensorFlow is a popular open-source library for numerical computation and large-scale machine learning. When building TensorFlow, Google engineers opted for a static computational-graph approach to building machine learning models: in TF you define the graph statically before the model can run, and all communication with the outside world happens through a tf.Session object and tf.placeholder tensors, which are substituted with external data at runtime.

On the other hand, another popular machine learning library, PyTorch, developed and maintained by Facebook, uses a dynamic computational-graph approach. In PyTorch things are far more imperative and dynamic: you can define, change, and execute nodes as you go, with no special session interfaces or placeholders. With performance almost comparable to TF and an intuitive, easy-to-learn API, PyTorch quickly became popular among the research community.

Google realised this, and with v1.7 launched Eager Execution in TensorFlow.

Enter Eager Execution

What is Eager Execution?

“A NumPy-like library for numerical computation with support for GPU acceleration and automatic differentiation, and a flexible platform for machine learning research and experimentation.”

Features of Eager Execution

  1. It is compatible with native Python debugging tools
  2. Error logging is immediate
  3. Native Python control flow, i.e. loops and recursion
  4. Eager execution simplifies your code
  5. Backpropagation is built into eager execution
Figure: the same computation with EE disabled (left) and EE enabled (right)
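
The original comparison was a screenshot, so here is a minimal sketch of the same contrast; the values and variable names are illustrative, not from the article. With EE disabled, printing a tensor shows only its symbolic description, and you need a session to get actual values:

import tensorflow as tf

x = tf.constant([[1., 2.], [3., 4.]])
m = tf.matmul(x, x)
print(m)  # Tensor("MatMul:0", shape=(2, 2), dtype=float32) - symbolic only

with tf.Session() as sess:
    print(sess.run(m))  # [[ 7. 10.] [15. 22.]]

With eager execution enabled (in a fresh program, since it must be switched on at startup), the same operations return concrete values immediately:

import tensorflow as tf
tf.enable_eager_execution()

x = tf.constant([[1., 2.], [3., 4.]])
m = tf.matmul(x, x)
print(m)          # tf.Tensor([[ 7. 10.] [15. 22.]], shape=(2, 2), dtype=float32)
print(m.numpy())  # plain NumPy array, as the "NumPy-like" description promises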

With results evaluated on the go, without creating a Session, debugging becomes a whole lot easier: standard Python debugging tools can be used directly, which was previously not possible.
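
For instance (a hypothetical snippet, not from the original article), you can drop into pdb in the middle of a computation and inspect tensor values like ordinary Python objects:

import pdb
import tensorflow as tf
tf.enable_eager_execution()

def troublesome_fn(x):
    y = tf.square(x)
    pdb.set_trace()  # at the prompt, y.numpy() shows the concrete values
    return y + 1

troublesome_fn(tf.constant([1., 2., 3.]))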

Building a Model

1. Importing stuff and enabling Eager Execution
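
The original code was embedded as an image; a minimal equivalent using the TF 1.7-era API looks like this:

import tensorflow as tf
import tensorflow.contrib.eager as tfe

# Must be called once, at program startup, before any TF operations run
tf.enable_eager_execution()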

2. Designing a model & loss function
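
Again, the original snippet was an image. A plausible reconstruction, assuming the simple linear-regression model the text implies (the single weight and bias, and their initial values, are my assumption):

W = tfe.Variable(5., name='weight')
b = tfe.Variable(10., name='bias')

def prediction(x):
    return x * W + b

def loss(x, y):
    # Mean squared error between predictions and targets
    return tf.reduce_mean(tf.square(prediction(x) - y))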

Notice the use of tfe.Variable instead of tf.Variable. The tf.contrib.eager module contains symbols that are available in both eager and graph execution environments, which makes it useful for writing code that also works with graphs.

3. Training loop
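
One more reconstruction under the same assumptions; the synthetic data, learning rate, and epoch count here are illustrative:

# Hypothetical synthetic data: y = 3x + 2 plus a little noise
train_x = tf.random_normal([100])
train_y = 3 * train_x + 2 + tf.random_normal([100], stddev=0.1)

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)

for epoch in range(300):
    # The tape records every operation in the forward pass
    with tf.GradientTape() as tape:
        current_loss = loss(train_x, train_y)
    # Derivatives of the loss with respect to the weight and bias
    dW, db = tape.gradient(current_loss, [W, b])
    optimizer.apply_gradients(zip([dW, db], [W, b]))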

Here tf.GradientTape records all the operations in the forward pass so that gradients can be computed afterwards. The tape.gradient function returns the derivatives of the loss with respect to the weight and bias, and passing these to optimizer.apply_gradients completes one step of gradient descent.

Conclusion

This execution mode in TF makes prototyping a lot easier. It’s probably going to be the preferred starting mode for anyone building new computations in TF.

Thanks for reading! If you enjoyed this story, please click the 👏 button and share to help others find it! Feel free to leave a comment 💬 below.
