Eager Execution in TensorFlow: A more Pythonic way of building models

Suransh Chopra
Sep 6, 2018 · 3 min read

Created by the Google Brain team, TensorFlow is a popular open-source library for numerical computation and large-scale machine learning. When building TensorFlow, Google engineers opted for a static computational graph approach to building machine learning models: in TF you define the graph statically before a model can run, and all communication with the outside world is performed via a tf.Session object and tf.placeholder tensors, which are substituted with external data at runtime.
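For readers who have not worked with graph mode, here is a minimal sketch of that workflow (the placeholder shape and the values fed in are illustrative):

```python
import tensorflow as tf

# Graph mode: first define the computation symbolically...
x = tf.placeholder(tf.float32, shape=[None])
y = x * 2

# ...then open a session and feed real data through the placeholder.
with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: [1.0, 2.0, 3.0]}))  # [2. 4. 6.]
```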

On the other hand, PyTorch, another popular machine learning library developed and maintained by Facebook, uses a dynamic computational graph approach. In PyTorch things are far more imperative and dynamic: you can define, change, and execute nodes as you go, with no special session interfaces or placeholders. With performance almost comparable to TF and an intuitive, easy-to-learn API, PyTorch quickly became popular among the research community.

Google realised this and, with v1.7, launched "Eager Execution" in TensorFlow.

Enter Eager Execution

What is Eager Execution?

“A NumPy-like library for numerical computation with support for GPU acceleration and automatic differentiation, and a flexible platform for machine learning research and experimentation.”

Features of Eager Execution

  1. It is compatible with native Python debugging tools
  2. Error logging is immediate
  3. Native Python control flow, i.e. loops and recursion (see the sketch after this list)
  4. Eager execution simplifies your code
  5. Backpropagation is built into eager execution
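
As a small sketch of feature 3, ordinary Python control flow can drive a computation on tensor values directly (the Collatz function here is purely illustrative):

```python
import tensorflow as tf
tf.enable_eager_execution()

def collatz_steps(n):
    """Count Collatz steps using a plain Python while loop on tensor values."""
    n = tf.constant(n)
    steps = 0
    # The loop condition inspects the tensor's value directly;
    # no tf.while_loop or tf.cond is needed.
    while n > 1:
        n = n // 2 if int(n) % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(6))  # 8
```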
[Image: the same code with eager execution disabled (left) and enabled (right)]

With results being evaluated on the fly, without creating a Session, debugging becomes a whole lot easier: standard Python debugging tools such as pdb can be used directly, which was previously not possible.
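
To make the contrast concrete, a minimal sketch (the matrix values are arbitrary):

```python
import tensorflow as tf
tf.enable_eager_execution()

a = tf.constant([[2.0, 3.0]])
b = tf.matmul(a, a, transpose_b=True)

# With eager execution disabled, print(b) would only show a symbolic
# Tensor; getting the value would require sess.run(b) in a tf.Session.
# With eager execution enabled, the result is computed immediately:
print(b)  # tf.Tensor([[13.]], shape=(1, 1), dtype=float32)
```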

Building a Model

1. Importing stuff and enabling Eager Execution
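
Something along these lines (a minimal sketch; note that eager execution must be enabled at program startup, before any other TensorFlow operation runs):

```python
import tensorflow as tf
import tensorflow.contrib.eager as tfe

# Enable eager execution once, at program startup.
tf.enable_eager_execution()
```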

2. Designing a model & loss function
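
For example, a simple linear regression model (the variable names W and b, their initial values, and the mean-squared-error loss are illustrative choices, not prescribed by eager execution):

```python
# A linear model: prediction = W * x + b.
W = tfe.Variable(5.0, name='weight')
b = tfe.Variable(10.0, name='bias')

def prediction(x):
    return W * x + b

def loss(xs, ys):
    # Mean squared error between predictions and targets.
    return tf.reduce_mean(tf.square(prediction(xs) - ys))
```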

Notice the use of tfe.Variable instead of tf.Variable. The tf.contrib.eager module contains symbols that are available in both eager and graph execution environments, and is useful for writing code that also needs to work with graphs.

3. Training loop
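
A minimal sketch of such a loop, continuing the linear model above (the synthetic data, learning rate, and number of steps are illustrative):

```python
# Synthetic data drawn around a ground-truth line y = 3x + 2.
NUM_EXAMPLES = 1000
xs = tf.random_normal([NUM_EXAMPLES])
ys = 3 * xs + 2 + tf.random_normal([NUM_EXAMPLES])

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)

for step in range(300):
    # Record the forward pass so gradients can be computed afterwards.
    with tf.GradientTape() as tape:
        loss_value = loss(xs, ys)
    # Differentiate the loss with respect to the model variables.
    grads = tape.gradient(loss_value, [W, b])
    optimizer.apply_gradients(zip(grads, [W, b]))
    if step % 50 == 0:
        print('Step {:3d}: loss = {:.3f}'.format(step, loss_value.numpy()))
```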

Here tf.GradientTape records all the operations in the forward pass so that gradients can be applied later. The tape.gradient function returns the derivatives of the loss with respect to the weight and bias. Passing these to optimizer.apply_gradients completes one step of gradient descent.

Conclusion

This execution mode in TF makes prototyping a lot easier. It’s probably going to be the preferred starting mode for anyone building new computations in TF.

Thanks for reading! If you enjoyed this story, please click the 👏 button and share to help others find it! Feel free to leave a comment 💬 below.

