Takeaways from the TensorFlow Dev Summit 2018

May 22, 2018

By Brandon Mckinzie

At Forge.AI, we develop capabilities for transforming unstructured streams of data into a structured format consumable by other AI systems. TensorFlow is one of the principal toolkits we use in developing and deploying our capabilities, such as hierarchical classification. This past March, I had the opportunity to attend TensorFlow Dev Summit 2018 at the Computer History Museum in Mountain View and represent the work we are doing at Forge.AI.

I was particularly excited to attend, because the summit is a place for the TensorFlow community to share their ideas and experience with the library with the TF dev teams themselves, and for the TF dev teams to present new and exciting releases in the TensorFlow ecosystem. I anticipated that I would learn about new approaches, tools, and ideas — and I did. Below are a few of my own key takeaways.

TensorFlow Extended

Implementing a machine learning model is often the simplest and least time-consuming part of integrating it into a production pipeline. Beyond implementation, there are various aspects of data management, preprocessing, postprocessing, serving, and more that must be in place for the model to be of any use (see Figure 1). TensorFlow was designed with the production environment in mind, and it is arguably the most production-friendly choice for machine learning engineers today. The Dev Summit made it clear that this is still a high priority with the announcement of TensorFlow Extended (TFX), a TensorFlow-based, production-scale machine learning platform. TFX aims to address the multitude of components involved in a production pipeline beyond the machine learning models themselves.

Figure 1: The many components involved in a production pipeline that TFX helps support (from the TFX talk by Clemens Mewald and Raz Mathias)
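
One of the pipeline pieces shown above, preprocessing, is covered by tf.Transform, a TFX component that was already open source at the time. As a rough illustration only (the feature names and the choice of transform below are mine, not from the talk), a preprocessing function might look like this:

```python
import tensorflow_transform as tft

def preprocessing_fn(inputs):
    """Toy tf.Transform preprocessing function; feature names are hypothetical."""
    return {
        # Standardize a numeric feature using statistics computed over the
        # entire training dataset rather than a single batch.
        'doc_length_scaled': tft.scale_to_z_score(inputs['doc_length']),
        # Pass the label through unchanged.
        'label': inputs['label'],
    }
```

The appeal is that the same transformation graph is applied at training time and at serving time, which removes a common source of training/serving skew.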

One notable component of TFX, TensorFlow Serving, is a great tool for the serving portion of the pipeline, and TFX builds on and around it to incorporate the other components. I was particularly excited to hear the announcement that TensorFlow Serving will soon include support for a REST API. The current restriction to gRPC led us to develop custom server implementations at Forge.AI, but the addition of REST support, together with TFX, will let us re-evaluate that approach and possibly simplify our systems and their maintenance.
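
For a sense of what that will mean in practice, here is a minimal sketch of a prediction request against TensorFlow Serving's REST endpoint. The model name, port, and input values are hypothetical, and the exact details may differ from the final release:

```python
import json

import requests

# Hypothetical host, port, and model name; TensorFlow Serving exposes
# predictions for a model at /v1/models/<model_name>:predict.
url = "http://localhost:8501/v1/models/my_model:predict"
payload = {"instances": [[1.0, 2.0, 5.0]]}  # one example with three features

response = requests.post(url, data=json.dumps(payload))
response.raise_for_status()
print(response.json()["predictions"])
```

Compared to generating gRPC stubs and assembling PredictRequest protos, a plain HTTP call like this is much easier to drop into existing services.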

TensorFlow Hub

Another great session focused on the release of TensorFlow Hub, a new library containing a number of reusable components, many of which come pre-trained on large datasets. For example, you can now access pre-trained word embeddings for many languages from its Text Modules, and you can do so in a couple of lines of TensorFlow code. You can even download entire pre-trained models from popular academic papers, allowing for quick experimentation and reproducibility.
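
Here is a minimal sketch of that "couple of lines," using the hub.Module API from the initial release and one of the publicly listed text modules (the module handle and input strings are shown purely as an example):

```python
import tensorflow as tf
import tensorflow_hub as hub

# Load a pre-trained 128-dimensional English text embedding from TF Hub.
embed = hub.Module("https://tfhub.dev/google/nnlm-en-dim128/1")
embeddings = embed(["unstructured news text", "structured event records"])

with tf.Session() as sess:
    # The module ships with its own variables and vocabulary lookup tables.
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
    print(sess.run(embeddings).shape)  # (2, 128)
```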

Anyone can contribute to TensorFlow Hub, and it could soon become an invaluable resource for machine learning practitioners, especially those who don't have access to the enormous computational resources of companies like Google. It represents a step in the right direction for making ML accessible to everyone.

Swift for TensorFlow

Perhaps the most surprising session, which I highly recommend watching, was the announcement of Swift for TensorFlow (TFiwS). The primary goal of TFiwS is to provide a maximally user-friendly, first-class language and compiler for machine learning. Anyone who has used TensorFlow has experienced a program crashing with some obscure runtime error that could only be discovered after the underlying C++ Session began executing computations. TFiwS can catch many of these common errors as soon as they are typed, since Swift automatically builds the graph in the background. Giving TensorFlow its own compiler in this way enables powerful capabilities that aren't possible with the Python API. Although extremely promising, TFiwS is still an early-stage research project and, as stated on the project's GitHub repository, is not yet recommended for general use by machine learning developers.
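
To make the contrast concrete, here is a contrived TF 1.x Python snippet (shapes and values invented for illustration) in which a shape mismatch is invisible at graph-construction time and only surfaces once the C++ runtime executes the graph; this is the kind of error the Swift compiler integration aims to catch much earlier:

```python
import numpy as np
import tensorflow as tf

# With partially unknown shapes, the Python front end cannot verify that
# the inner dimensions of this matmul agree, so graph construction succeeds.
a = tf.placeholder(tf.float32, shape=[None, 128])
b = tf.placeholder(tf.float32, shape=[None, 64])
product = tf.matmul(a, b)

with tf.Session() as sess:
    # The mismatch (128 vs. 32) only shows up here, as an
    # InvalidArgumentError raised from the C++ runtime.
    sess.run(product, feed_dict={
        a: np.zeros((8, 128), dtype=np.float32),
        b: np.zeros((32, 64), dtype=np.float32),
    })
```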

I look forward to attending future sessions. For more information on how Forge.AI is using TensorFlow for our modeling needs, please feel free to contact me at brandon@forge.ai.

Note: This post was originally published on our blog: https://www.forge.ai/blog/takeaways-from-tensorflow-dev-summit-2018
