Recap of the 2019 TensorFlow Dev Summit
Posted by Fred Alcober and Sandeep Gupta, on behalf of the TensorFlow team
TensorFlow held its third and biggest annual Developer Summit yet in Sunnyvale, CA, on March 6 and 7, 2019. Approximately 1,000 machine learning enthusiasts attended the event, and tens of thousands more watched over livestream.
The event consisted of two days filled with technical updates from the TensorFlow team and presentations from users showcasing amazing applications. We also held hacker rooms, breakout sessions, and workshops, where attendees interacted with the TensorFlow team, got their questions answered, and collaborated with each other.
In the three years since launch, TensorFlow has matured into an entire end-to-end machine learning (ML) ecosystem, helping to power the ML revolution we see happening around us. It has been downloaded over 41 million times and has over 1,800 contributors from around the world. As a comprehensive ML platform, TensorFlow is helping a diverse set of practitioners, researchers, and new users build amazing solutions to challenging problems with AI. For example:
- students in Delhi built a mobile application for detecting air quality changes
- engineers at Twitter are surfacing the most relevant content for their users
- physicists in GE Healthcare are making MRI more reproducible and accurate
At the Developer Summit, we announced the alpha release of TensorFlow 2.0, which marks the beginning of the TensorFlow 2.0 era and brings TensorFlow’s powerful capabilities to more developers and researchers, making it easier than ever to build and use ML. We also announced new community partnerships and education initiatives with O’Reilly Media, Udacity, Deeplearning.ai on Coursera, and fast.ai to help foster open source collaboration and train the next generation of ML practitioners. Further, we listened to our community, completely updated the developer documentation, and redesigned tensorflow.org to make it even easier to access resources and information.
Below are the highlights and key announcements from the event.
Easy to use
In TensorFlow 2.0, the biggest focus has been on making the API simpler, more intuitive, and natural for all users. We are integrating the API components more closely with tf.keras, the recommended high-level API for most users. This enables developers to go from data ingestion and transformation through model building, training, and saving, all the way to deployment, much more easily. We also launched TensorFlow Datasets, a collection of commonly used ML datasets prepared for easy use in TensorFlow.
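As a minimal sketch of the tf.keras path described above, assuming TensorFlow 2.0 is installed (the model architecture here is illustrative, not from any talk):

```python
import tensorflow as tf

# A small image classifier built with tf.keras, the recommended
# high-level API in TensorFlow 2.0. Layer sizes are illustrative.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Compiling attaches the optimizer, loss, and metrics; from here,
# model.fit(), model.save(), and deployment follow the same API.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

The same model object then flows through training, saving, and serving, which is the end-to-end simplification the 2.0 API aims for.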
Along with the TensorFlow 2.0 alpha release, we are providing conversion and migration tools and documentation to help with the transition from 1.x code to 2.0. These and many more resources, examples, and case studies on TensorFlow can be accessed from the new tensorflow.org website.
Power to experiment and invent
TensorFlow is accelerating state-of-the-art research. This begins with flexibility for researchers to prototype their ideas quickly, try many experiments, and iterate. With new features in TensorFlow 2.0 such as eager execution by default, intuitive Python control flows, automatic optimization of eager code with tf.function, and greatly improved error messaging, we are enhancing the researcher development experience.
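The two features named above can be sketched together: eager execution runs operations immediately like ordinary Python, while the tf.function decorator traces the same code into an optimized graph (the function and values here are illustrative):

```python
import tensorflow as tf

# Eager execution (the default in 2.0): ops run immediately,
# so ordinary Python debugging and control flow just work.
x = tf.constant([1.0, 2.0, 3.0])
eager_sum = tf.reduce_sum(x)  # evaluated right away

# tf.function traces the Python function into a graph,
# so the eager-style code also gets graph-level optimization.
@tf.function
def scaled_sum(v, scale):
    return tf.reduce_sum(v) * scale

result = scaled_sum(x, 2.0)
```

This is the core of the 2.0 researcher workflow: write and debug eagerly, then wrap the hot path in tf.function for performance.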
Large-scale research in machine learning also requires massive parallel computing. Since last year, we have accelerated training on 8 NVIDIA V100 GPUs by more than 2x, boosted performance on Cloud TPU v2 by 1.6x, and improved inference speed by more than 3x with Intel MKL acceleration. Great out-of-the-box performance is a big focus of TensorFlow 2.0, and a core part of our progress toward the final release.
The TensorFlow ecosystem includes a large collection of very powerful add-ons that expand TensorFlow in new and useful ways. Some of the add-ons that we described include:
- TensorFlow Federated: a library, announced at the event, for federated learning that takes advantage of decentralized data
- TensorFlow Privacy: a library in development with tools to help train models with differential privacy
- TensorFlow Probability: a library for using probabilistic methods in ML models, making it possible to reason about uncertainty and incorporate domain knowledge into predictions
- TensorFlow Agents: a library for reinforcement learning in TensorFlow 2.0
- Advances in text and sequence processing such as support for Unicode text and the new RaggedTensor type for data with non-uniform shapes
- Mesh TensorFlow: a powerful library for researchers to build and train very large-scale models using parallelism techniques
- Sonnet from DeepMind: an example of how research labs can build their own libraries on top of the modular and extensible framework of TensorFlow
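The RaggedTensor type mentioned above can be sketched briefly; it stores rows of different lengths, such as tokenized sentences, without padding (the example data is illustrative):

```python
import tensorflow as tf

# A RaggedTensor holds non-uniformly shaped data directly:
# each row here is a sentence with a different number of tokens.
sentences = tf.ragged.constant([
    ["hello", "world"],
    ["tensorflow", "dev", "summit"],
    ["hi"],
])

# Per-row lengths are preserved, unlike with zero-padded tensors.
lengths = sentences.row_lengths()
```

Because the ragged structure is a first-class tensor type, it composes with ordinary TensorFlow ops rather than requiring manual padding and masking.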
Production on any platform with any language
TensorFlow for Production
TensorFlow Extended (TFX) brings management of the entire machine learning lifecycle to our users. It has several component libraries, and we announced new features in these components; more significantly, the new orchestration support puts it all together to provide users with an integrated end-to-end platform (see example here). The TFX components integrate with a metadata store, whether you use the built-in orchestration support or your own orchestrator. This store keeps track of all the component runs, the artifacts that went into them, and the artifacts that were produced. This enables advanced features like experiment tracking and model comparison that will greatly improve production use cases.
TensorFlow for Mobile & IoT
TensorFlow Lite, our solution for running models on mobile and embedded systems, has exploded globally and is now running on more than 2 billion mobile devices — leading the next generation of on-device ML. It’s helping users solve use cases like predictive text generation, image classification, object detection, audio recognition, text-to-speech, speech-to-text, video segmentation, and edge detection, among many others.
At the event, we heard how global internet companies such as Alibaba XianYu and NetEase are using TensorFlow Lite to provide users with better application experiences. We also talked about TensorFlow Lite’s incredible ML performance and how it’s powering ML in marquee Google applications such as Search, Assistant, Photos, and Pixel.
There have been major improvements to TensorFlow Lite’s general usability and model conversion features, alongside an increased focus on optimization (e.g. quantization) and performance (e.g. GPU acceleration). We also showed how TensorFlow Lite is powering machine learning on the edge and IoT on platforms such as Coral TPUs and microcontroller boards (MCUs).
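The conversion path from a tf.keras model to the TensorFlow Lite format can be sketched as follows (the tiny untrained model is a placeholder; any trained tf.keras model converts the same way):

```python
import tensorflow as tf

# Placeholder model; in practice this would be a trained tf.keras model.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, input_shape=(4,)),
])

# Convert the Keras model to the TensorFlow Lite flat-buffer format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)

# Opt in to the default optimizations (dynamic-range quantization),
# which shrink the model for mobile and embedded deployment.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# The resulting bytes can be written to a .tflite file and shipped on-device.
```

The optimization flag is what enables the quantization mentioned above; full integer quantization additionally requires a representative dataset.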
Swift for TensorFlow
We also shared our progress on the new Swift for TensorFlow package. Version 0.2, just released, brings increased usability and lets users try out this novel ML paradigm. To make it easier to get started with Swift, we also launched a new machine learning course using Swift for TensorFlow, created by fast.ai.
Commitment to community growth
The success of TensorFlow is in large part due to its amazing and growing community of users and developers. We have developed TensorFlow 2.0 in close engagement with the community, via an open RFC process, many new Special Interest Groups, and our Google Developer Expert community feedback and testing.
We have launched a new #PoweredByTF campaign and are discovering amazing new projects being built by our users. We announced the Google Summer of Code program, where students can apply to work with the TensorFlow engineering team on hands-on development. We are also launching a new Powered by TF Challenge, hosted on DevPost, specifically for users to create and share their latest and greatest with TensorFlow 2.0.
Great educational resources are key to machine learning democratization and adoption. We announced two new educational resources to make it easier for beginners and learners to get started with TensorFlow. The first is deeplearning.ai’s Course 1 — “Intro to TensorFlow for AI, ML and DL”, part of the TensorFlow: from Basics to Mastery series hosted on Coursera. The second is Udacity’s Intro to TensorFlow for Deep Learning. Both courses are designed with developers in mind, require no prior machine learning experience, and are available now.
And finally, we announced TensorFlow World, a new week-long conference dedicated to fostering open source collaboration and all things TensorFlow. This conference, co-presented by O’Reilly Media and TensorFlow, will be held in Santa Clara, CA during the week of October 28. Our vision is to bring together the amazing TensorFlow world and give folks an opportunity to connect with each other. The call for proposals is open for attendees to submit interesting TensorFlow project papers and for companies to showcase their solutions using TensorFlow. We can’t wait to see you there.
At Google, we believe AI research and applications will advance faster when all users have access to the best tools, allowing everyone to participate. TensorFlow is dedicated to helping empower all ML users. We’re committed to working with the community to make TensorFlow easy for everyone and to pursue AI for good!