TensorFlow session recordings from O’Reilly AI Conference San Francisco 2018

By Marcus Chang, Developer Relations Program Manager

O’Reilly AI Conference, San Francisco

In September, the TensorFlow team presented two days of TensorFlow-related talks at the O’Reilly AI Conference in San Francisco, covering topics including TensorFlow Lite, TensorFlow.js, TensorFlow Extended (TFX) & Hub, Distributed TensorFlow, and many more! The session recordings are now available on our TensorFlow YouTube channel.

View the entire playlist here.

Sessions:

Building AI with TensorFlow: An Overview

TensorFlow is one of the world’s biggest open source projects, and it continues to grow in adoption and functionality. We shared major recent developments, highlighted some future directions, and discussed how you can become more involved in the TensorFlow community.

TensorFlow: Machine Learning for Programmers

In this talk, Laurence Moroney from Google talked about Machine Learning, AI, Deep Learning and more, and how they fit into the programmer’s toolkit. He cut through the hype to show the opportunities available in Machine Learning. He also introduced TensorFlow, a framework designed to make Machine Learning easy and accessible, and showed how intelligent apps that use ML can run in a variety of places, including mobile, web, and IoT.

TensorFlow for JavaScript

TensorFlow.js is the recently released JavaScript version of TensorFlow that runs in the browser and in Node.js. In this talk, the team introduced the TensorFlow.js ML framework and demonstrated the complete machine-learning workflow, including training, client-side deployment, and transfer learning.

Swift for TensorFlow

Swift for TensorFlow combines the flexibility of Eager Execution with the high performance of Graphs and Sessions. Behind the scenes, Swift analyzes your Tensor code and automatically builds graphs for you. Swift also catches type errors and shape mismatches before running your code, has the ability to import any Python library, and has language-integrated Automatic Differentiation. We believe that machine learning tools are so important that they deserve a first-class language and a compiler.

TensorFlow Lite

TensorFlow Lite is a lightweight machine-learning framework that can run inference on a variety of mobile and small devices, from mobile phones to Raspberry Pis and microcontrollers. It also provides a simple abstraction for accessing AI accelerators. The team talked about the basics of the framework and the current status of its development. In this session you’ll learn how to prepare your model for mobile, and how to write code that executes it on a variety of platforms.
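To make that concrete, here is a minimal, illustrative Python sketch of the conversion-and-inference workflow. The exact converter entry points have shifted across TensorFlow releases, so treat this as one way to do it rather than the code shown in the talk; the tiny Keras model is a stand-in for your own trained model.

```python
import numpy as np
import tensorflow as tf

# A stand-in for your own trained model.
model = tf.keras.Sequential([tf.keras.layers.Dense(10, input_shape=(4,))])

# Convert the Keras model to the TensorFlow Lite flatbuffer format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Run inference with the Python interpreter; the same model file can be
# executed by the mobile and embedded interpreters.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

interpreter.set_tensor(input_details[0]["index"],
                       np.zeros((1, 4), dtype=np.float32))
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]["index"]).shape)
```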

TensorFlow Extended (TFX) & Hub

In this session the team introduced TensorFlow Extended (TFX), the end-to-end machine learning platform for TensorFlow that powers products across all of Alphabet. As machine learning evolves from experimentation to serving production workloads, so does the need to effectively manage the end-to-end training and production workflow, including model management, versioning, and serving.
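As a small, hedged illustration of the Hub side of this ecosystem, a pretrained module from tfhub.dev can be reused inside a Keras model. The module handle below is just an example of a published text embedding, not one discussed in the talk, and the snippet assumes the tensorflow_hub package is installed.

```python
import tensorflow as tf
import tensorflow_hub as hub

# Reuse a pretrained text-embedding module from tfhub.dev inside a Keras model.
# The handle is illustrative; any compatible embedding module would work.
embed = hub.KerasLayer("https://tfhub.dev/google/nnlm-en-dim50/2",
                       input_shape=[], dtype=tf.string, trainable=False)

model = tf.keras.Sequential([
    embed,                                      # string -> 50-dim embedding
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# The pretrained embedding is used as an ordinary layer.
print(model(tf.constant(["machine learning is fun"])).numpy())
```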

Cloud TPUs

This talk takes you through a technical deep dive on Google’s Cloud TPU accelerators and how to program them. It also covers the programming abstractions that allow you to run your models on CPUs, GPUs, and Cloud TPUs, from single devices up to entire Cloud TPU pods.
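Here is a rough sketch of those abstractions using today’s tf.distribute API with Keras; the talk itself predates some of these names, and "my-tpu" is a placeholder for your own Cloud TPU name or grpc:// address.

```python
import tensorflow as tf

# "my-tpu" is a placeholder for the name or grpc:// address of your Cloud TPU.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Any model built under the strategy scope is replicated across the TPU cores.
# Swapping in MirroredStrategy() targets local GPUs, and the default strategy
# targets a single CPU/GPU, without changing the model code.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"])
```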

TensorFlow AutoGraph

TensorFlow AutoGraph automatically converts plain Python code into its TensorFlow equivalents using source code transformation. Our approach is complementary to the new TensorFlow Eager project and allows you to use the imperative style of Eager mode while retaining the benefits of graph mode. By using automatic code conversion, developers can write code that’s more concise, efficient, and robust.
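A minimal sketch of the idea, using the AutoGraph entry points in current TensorFlow; the tf.function wrapper shown here postdates the talk, so consider it illustrative of the conversion rather than the exact API presented.

```python
import tensorflow as tf

# A plain Python function with data-dependent control flow.
def collatz_steps(x):
    steps = tf.constant(0)
    while x > 1:             # becomes tf.while_loop
        if x % 2 == 0:       # becomes tf.cond
            x = x // 2
        else:
            x = 3 * x + 1
        steps += 1
    return steps

# AutoGraph rewrites the Python control flow into TensorFlow graph ops.
graph_fn = tf.function(collatz_steps)
print(graph_fn(tf.constant(27)))            # runs as a TensorFlow graph

# The generated source can be inspected directly.
print(tf.autograph.to_code(collatz_steps))
```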

TensorFlow Probability

TensorFlow Probability (TFP) is a TF/Python library offering a modern take on both emerging and traditional probabilistic and statistical tools. Statisticians and data scientists will find R-like capabilities that naturally leverage modern hardware. ML researchers and practitioners will find powerful building blocks for specifying and learning deep probabilistic models. In this talk, we introduce core TFP abstractions and demo some of its modeling power and convenience.
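For a flavor of the API, here is a minimal sketch using tfp.distributions; it assumes the tensorflow_probability package is installed and eager execution is enabled.

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# A simple distribution: sample from it and evaluate log-densities.
dist = tfd.Normal(loc=0., scale=1.)
samples = dist.sample(5)               # draw 5 samples
log_probs = dist.log_prob(samples)     # evaluate their log-density
print(samples.numpy(), log_probs.numpy())

# Distributions compose, e.g. a two-component Gaussian mixture.
mixture = tfd.Mixture(
    cat=tfd.Categorical(probs=[0.3, 0.7]),
    components=[tfd.Normal(loc=-1., scale=0.5),
                tfd.Normal(loc=2., scale=1.0)])
print(mixture.sample(3).numpy(), mixture.log_prob(0.).numpy())
```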

Deep learning for fundamental sciences using high-performance computing

The fundamental sciences (including particle physics and cosmology) generate exabytes of data from complex instruments and analyze it to uncover the secrets of the universe. Deep learning is enabling the direct exploitation of higher-dimensional instrument data than previously possible, thereby improving the sensitivity for new discoveries. In this talk, our guest speaker Wahid Bhimji (NERSC) describes recent activity in this field, particularly at NERSC, the mission supercomputing center for US fundamental science, based at Berkeley National Lab. This work exploits and builds on TensorFlow to explore novel methods and applications, exploit high-performance computing scales, and provide productive deep learning environments for fundamental scientists.

Tensor2Tensor

Tensor2Tensor is a library of deep learning models and datasets that facilitates the creation of state-of-the-art models for a wide variety of ML applications, such as translation, parsing, image captioning, and more, enabling the exploration of various ideas much faster than previously possible.
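As a hedged sketch of getting started, the Python registry of problems can be browsed as below, assuming the tensor2tensor package is installed; full data generation and training are usually driven by the t2t-datagen and t2t-trainer command-line tools.

```python
# Browse the registry of datasets/tasks ("problems") bundled with Tensor2Tensor.
from tensor2tensor import problems

print(problems.available()[:10])                   # first few registered problems
ende = problems.problem("translate_ende_wmt32k")   # an English-German translation task
print(ende.name)
```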

Distributed TensorFlow

This talk demonstrates how to perform distributed TensorFlow training using the Keras high-level APIs. The team walks you through TensorFlow’s distributed architecture, how to set up a distributed cluster using Kubeflow and Kubernetes, and how to distribute models created in Keras.
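A minimal sketch of that idea with today’s tf.distribute API and Keras; the talk used earlier APIs, and multi-worker setups on Kubernetes/Kubeflow additionally require a cluster configuration (for example via the TF_CONFIG environment variable), which is omitted here.

```python
import numpy as np
import tensorflow as tf

# Synchronous data-parallel training across all local GPUs (or a single CPU).
# Multi-worker clusters would use MultiWorkerMirroredStrategy plus TF_CONFIG.
strategy = tf.distribute.MirroredStrategy()
print("Number of replicas:", strategy.num_replicas_in_sync)

# Building the Keras model under the strategy scope is all that changes.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Toy data just to make the sketch runnable.
x = np.random.rand(1024, 20).astype("float32")
y = np.random.rand(1024, 1).astype("float32")
model.fit(x, y, batch_size=64, epochs=2)
```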