TensorFlow 2.0 is focused on ease of use, with APIs for beginners and experts to create machine learning models. In recent articles like What’s coming in TensorFlow 2.0 and Standardizing on Keras, we introduced new features and the direction the platform is heading.
Today at the TensorFlow Developer Summit, we announced that you can try out an early preview of TensorFlow 2.0.
How to get started
The best way to start using TensorFlow 2.0 Alpha is to head to the new TensorFlow website. You can find early-stage tutorials and guides for the alpha release at tensorflow.org/alpha. Each tutorial in the Alpha docs automatically downloads and installs TensorFlow 2.0 Alpha, and there are more to come!
- The beginner example uses the Keras Sequential API: the simplest way to start using TensorFlow 2.0.
- The experts example demonstrates how to write your forward pass imperatively, how to write a custom training loop using a GradientTape, and how to use tf.function to automatically compile your code (with just one line!).
There are a variety of new guides as well, including:
- Importantly, a guide on AutoGraph (which gives you all the performance and portability of graphs, without ever having to write graph-level code).
- A guide on upgrading your code (with a conversion script that facilitates converting TensorFlow 1.x code to 2.0).
- Additional early-stage guides for Keras.
There is also a revised API reference, if you want to see what’s changed (now with many fewer symbols). Note that while TensorFlow 2.0 is in active development, the 1.x docs will remain the default landing page on tensorflow.org. If you are exploring the API reference, be sure you’ve selected the right TensorFlow version.
To install the Alpha release, we recommend creating a new virtual environment and using:
```shell
# CPU
pip install tensorflow==2.0.0-alpha0
# GPU
pip install tensorflow-gpu==2.0.0-alpha0
```
For more details, see the installation guide on tensorflow.org (note: it will be updated in the near future; this release is hot off the press!). You can also try the Alpha in Colab by using any of the new notebooks in the TensorFlow 2.0 Alpha section of the website.
Functions, not sessions
Let’s dig into how two features of 2.0 work together: eager execution and `tf.function`.
One of the biggest changes is that TensorFlow 2.0 is eager-first, which means ops run immediately when you call them. In TensorFlow 1.x, you might be familiar with first constructing a graph and then executing pieces of it via `tf.Session.run()`. TensorFlow 2.0 radically simplifies TensorFlow usage: the same great ops, now much easier to understand and use.
```python
a = tf.constant([1, 2])
b = tf.constant([3, 4])
print(a + b)  # tf.Tensor([4 6], shape=(2,), dtype=int32)
```
TensorFlow 2.0 uses Keras as its core developer experience. With 2.0, you can use Keras as you know it: build your models with the Sequential API, then train them with `fit()`. All of the familiar `tf.keras` examples from tensorflow.org work out of the box in 2.0.
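As a minimal sketch of that workflow (the model architecture and random stand-in data here are illustrative, not from any particular tutorial):

```python
import numpy as np
import tensorflow as tf

# Random stand-in data; a real tutorial would load an actual dataset.
x = np.random.random((64, 10)).astype("float32")
y = np.random.randint(0, 2, size=(64,))

# Build a small model with the Sequential API.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# Compile and train with the familiar fit() workflow.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
history = model.fit(x, y, epochs=1, batch_size=16, verbose=0)
```

Everything above runs eagerly, so you can inspect `model`, its layers, and the training history directly.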
`fit()` works fine for many cases; however, those who need more flexibility have many more options. Let’s take a look at a custom training loop written in TensorFlow 2.0 style, from this example:
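The embedded code from that example isn’t reproduced here; a minimal sketch of such a loop, using a hypothetical model and random stand-in data, might look like this:

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-ins for the model and data in the linked example.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(2),
])
optimizer = tf.keras.optimizers.Adam()
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

x = np.random.random((32, 10)).astype("float32")
y = np.random.randint(0, 2, size=(32,))

def train_step(inputs, labels):
    # Record the forward pass on a GradientTape, then apply
    # the gradients manually through the optimizer.
    with tf.GradientTape() as tape:
        logits = model(inputs, training=True)
        loss = loss_fn(labels, logits)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

loss = train_step(x, y)
print(float(loss))
```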
This uses an Autograd-style `GradientTape` and applies the gradients manually via an optimizer. This approach is useful for writing custom training loops with complicated inner workings, such as in reinforcement learning, or for research (for example, making it easy to try out a new idea for a more efficient optimizer).
Eager execution is also useful for debugging and monitoring your code as it runs: you can use the Python debugger to inspect objects like variables, layers, and gradients, and use ordinary Python constructs like `print()` within your training loop.
Once your code works the way you want, you’ll want graph-level optimization and efficiency. For this, wrap `train` with the `@tf.function` decorator. AutoGraph is built into `tf.function`, so you don’t need to do anything special to get your `for` clauses running with graph efficiency.
This code behaves exactly the same as without the annotation, but it is compiled into a graph that can run efficiently on GPUs and TPUs, or be saved as a `SavedModel` for later use.
The especially fun part of this is that by wrapping `train()` in `@tf.function`, functions it calls, such as `compute_accuracy()`, are automatically converted as well. You can also choose to wrap just part of your computation in `@tf.function` to get the behavior you want.
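A small sketch of that conversion behavior (the function names here are illustrative, not from the linked example):

```python
import tensorflow as tf

def compute_square(x):
    # Called from inside the tf.function below; it gets traced
    # into the same graph automatically, no extra annotation needed.
    return x * x

@tf.function
def accumulate_squares(n):
    # AutoGraph converts this Python `for` loop over a tensor
    # into graph-mode control flow.
    total = tf.constant(0.0)
    for i in tf.range(n):
        total += compute_square(tf.cast(i, tf.float32))
    return total

print(accumulate_squares(tf.constant(5)).numpy())  # 0+1+4+9+16 = 30.0
```

Removing the decorator gives the same numeric result eagerly, which makes it easy to debug first and compile later.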
In addition, Estimators are fully supported in TensorFlow 2.0. Check out the new tutorials for Boosted Trees and Model Understanding.
Testing and feedback welcome!
We would very much appreciate your feedback as you try out the latest version and upgrade your models! Please join the testing@ TensorFlow user group; you are also welcome to attend our weekly TF 2.0 support stand-ups (Tuesdays, 2:00pm PT).
You will likely find bugs, performance issues, and more, and we encourage you to report them in our issue tracker, tagged with the 2.0 label. The most helpful thing you can do is include a complete, minimal example that exactly reproduces the bug.
More coming soon
To stay up to date on known issues and development work for TensorFlow 2.0, please refer to our TF 2.0 Project Tracker on GitHub. We are continuing to work on and improve TensorFlow 2.0, so you should see frequent updates to the nightly build package. To be clear, this is a developer preview release. We value your feedback!
Also, if you’ve built something awesome with TF 2.0 — from mobile applications to research projects to art installations — we would love to hear about it and highlight your work. Please let us know here.
If you’ve developed recent examples you would like to share, please consider submitting a PR to have it added to the TensorFlow organization as part of tensorflow/examples/community.