Test Drive TensorFlow 2.0 Alpha

TensorFlow
Mar 6 · 5 min read

TensorFlow 2.0 is focused on ease of use, with APIs for beginners and experts alike to create machine learning models. In recent articles, we introduced new features and the direction the platform is heading.

Today at the TensorFlow Dev Summit, we announced that you can try out an early preview of TensorFlow 2.0.

How to get started

The best way to start using TensorFlow 2.0 Alpha is to head to tensorflow.org, where you can find early-stage tutorials and guides for the alpha release. Each tutorial in the Alpha docs automatically downloads and installs TensorFlow 2.0 Alpha, and there are more to come!

We recommend starting with the “Hello World” examples for beginners and experts, then reading the guides.

  • The beginner example uses the Keras Sequential API: the simplest way to start using TensorFlow 2.0.
  • The expert example demonstrates how to write your forward pass imperatively, how to write a custom training loop using a GradientTape, and how to use tf.function to automatically compile your code (with just one line!).

There are a variety of new guides as well, including:

  • Importantly, a guide on AutoGraph (which enables you to get all the performance and portability of graphs without ever having to write graph-level code).
  • A guide on upgrading your code (with a conversion script that facilitates converting TensorFlow 1.x code to 2.0).
  • Additional early-stage guides for other features.

There is also a revised API reference, if you want to see what’s changed (now with many fewer symbols). Note that while TensorFlow 2.0 is in active development, the 1.x docs will remain the default landing page on tensorflow.org. If you are exploring the API reference, be sure you’ve selected the right TensorFlow version.

Installation

To install the Alpha release, we recommend creating a new virtual environment and using:

# CPU
pip install tensorflow==2.0.0-alpha0
# GPU
pip install tensorflow-gpu==2.0.0-alpha0
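Once installed, a quick smoke test confirms the version and that eager execution is on by default (the exact version string depends on which alpha build you installed):

```python
import tensorflow as tf

print(tf.__version__)          # e.g. 2.0.0-alpha0
print(tf.executing_eagerly())  # True: eager execution is the default in 2.0
```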

See the installation guide on tensorflow.org for more details (note: it will be updated in the near future; this release is hot off the press!). You can also try the Alpha in Colab by using any of the new notebooks in the TensorFlow 2.0 Alpha section of the website.

Functions, not sessions

Let’s dig into how two features of 2.0 work together: Eager execution and @tf.function.

One of the biggest changes is that TensorFlow is eager-first, which means ops are run immediately upon calling them. In TensorFlow 1.x, you might be familiar with first constructing a graph and then executing pieces of the graph via tf.Session.run(). TensorFlow 2.0 radically simplifies TensorFlow usage — the same great ops, now much easier to understand and use.

a = tf.constant([1, 2])
b = tf.constant([3, 4])
print(a + b)
# prints: tf.Tensor([4 6], shape=(2,), dtype=int32)

TensorFlow 2.0 uses Keras as a core developer experience. With 2.0, you can use Keras as you know it: building your models with the Sequential API, then calling compile and fit. All of the Keras examples from tensorflow.org work out of the box in 2.0.
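As a quick illustration, here is a minimal Sequential model trained with compile and fit (the shapes and data below are synthetic, not from the original post):

```python
import tensorflow as tf

# A tiny Sequential model fit on synthetic data (illustrative only)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

x = tf.random.normal((32, 8))
y = tf.random.normal((32, 1))
history = model.fit(x, y, epochs=2, verbose=0)
print(history.history["loss"])  # one loss value per epoch
```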

Keras’s fit() works fine for many cases; however, those who need more flexibility have more options. Let’s take a look at how a custom training loop is written in TensorFlow 2.0 style.
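The code embedded in the original post isn’t reproduced here, but a minimal sketch of such a loop (the model, loss function, and data below are illustrative stand-ins, not the original example) looks like:

```python
import tensorflow as tf

# Illustrative model and data, standing in for the post's example
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation="relu"),
    tf.keras.layers.Dense(1),
])
optimizer = tf.keras.optimizers.Adam()
loss_fn = tf.keras.losses.MeanSquaredError()

def train_one_step(x, y):
    # Record the forward pass so gradients can be taken afterwards
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x))
    # Compute gradients of the loss w.r.t. the trainable variables
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

x = tf.random.normal((8, 4))
y = tf.random.normal((8, 1))
for step in range(3):
    loss = train_one_step(x, y)
    print("step", step, "loss", float(loss))
```

The same pattern scales to any model: whatever runs under the tape can be differentiated.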

This uses an Autograd-style GradientTape and applies your gradients manually via an optimizer. This can be useful for writing custom training loops with complicated inner workings like in reinforcement learning, or for research (making it easy to work on your new idea for a more efficient optimizer).

Eager execution is also useful just for debugging and monitoring your code as it runs, as you can use the Python debugger to inspect objects like variables, layers, and gradients. We’re using Python constructs like if, for, and print() within our training loop.
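For instance, plain Python control flow works directly on eager tensors (a small sketch, not taken from the original post):

```python
import tensorflow as tf

x = tf.constant(3.0)
if x > 2.0:                # an eager tensor behaves like an ordinary Python value
    print("x is big:", float(x))

for v in tf.constant([1, 2, 3]):
    print(int(v))          # inspect values directly; breakpoints work anywhere here
```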

Once you get your code working the way you want, you will want graph optimization and efficiency. For this, you would wrap train with the decorator @tf.function. AutoGraph is built into tf.function, so you don’t need to do anything special to get if or for clauses running with graph efficiency.

This code works exactly the same as it does without the annotation, but it is compiled into a graph that can run efficiently on GPUs and TPUs, or be saved as a SavedModel for later use.

The especially fun part is that by wrapping train() in @tf.function, train_one_step(), compute_loss(), and compute_accuracy() are automatically converted as well. You can also choose to wrap only part of your computation in @tf.function to get exactly the behavior you want.
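A self-contained sketch of the idea (the function below is illustrative, not the post’s train()):

```python
import tensorflow as tf

@tf.function
def scaled_sum(x, n):
    # AutoGraph rewrites this Python loop into graph control flow (tf.while_loop)
    total = tf.constant(0.0)
    for _ in tf.range(n):
        total += tf.reduce_sum(x)
    return total

print(scaled_sum(tf.ones(3), tf.constant(5)))  # tf.Tensor(15.0, shape=(), dtype=float32)
```

The same code runs unchanged without the decorator; with it, the first call traces a graph that subsequent calls reuse.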

In addition, Estimators are fully supported in TensorFlow 2.0. Check out the new tutorials for Boosted Trees and model understanding.

Testing and feedback welcome!

We would very much appreciate your feedback as you try out the latest version and upgrade your models! Please join the community discussion, and you are welcome to attend our weekly testing meeting (Tuesdays, 2:00pm PT).

You will likely find bugs, performance issues, and more, and we encourage you to report them in our issue tracker with the appropriate tag. The most helpful thing you can do is include a complete, minimal example that exactly reproduces the bug.

More coming soon

To stay up to date on known issues and development work for TensorFlow 2.0, please refer to our issue tracker. We are continuing to work on and improve TensorFlow 2.0, so you should see frequent upgrades to the nightly build package. To be clear, this is a developer preview release. We value your feedback!

Also, if you’ve built something awesome with TF 2.0 — from mobile applications to research projects to art installations — we would love to hear about it and highlight your work. Please let us know.

If you’ve developed recent examples you would like to share, please consider submitting a PR to have them added to the TensorFlow organization.

TensorFlow

TensorFlow is an end-to-end open source platform for machine learning.
