Attending Google I/O 2019
I attended I/O at the Shoreline Amphitheatre in Mountain View this year, and again it was an incredible experience! It was all about community and learning new Google technologies. As always, the best part of attending a conference in person is the people: the dinner parties, office hours, and sandbox demos that you won't get from watching the videos online.
Community
There were so many parties the night before I/O! I went to three: dinner with the ML GDEs, the Women Techmakers dinner, and then hanging out with the Android community.
I played Ping-Pong at the Community Lounge with friends and it was so fun!
Wednesday afternoon I was invited to a Google Cloud VIP Reception where I hung out with ML GDEs, Cloud GDEs and the Google Cloud team.
I’m organizing the TensorFlow 2.0 Global Docs Sprints with Sergii, Paige, and ML GDEs / GDG organizers worldwide. We held an office hours session during which Sergii, Paige, Jason, Billy, Jeongkyu, Vikram, and I met to discuss planning for the events.
In the evening I attended an AI/ML meetup with Jeff Dean and the Google Brain team.
Learnings from I/O
First I filtered the I/O app to reserve or star the sessions I’m interested in. Then I wrote a draft schedule for each day: the sessions I absolutely wanted to attend in person, people to meet, and sandbox/office hours to visit. That way, if I missed a session, I could catch up right away by watching the videos. I took a lot of notes during sessions, and photos / tweets also helped me remember.
This year I focused my learnings from I/O on these 3 areas: 1) on-device ML, 2) getting started with ML, and 3) advanced / future ML talks.
On-device ML
The many on-device ML talks this year were super exciting! In the past, the ML talks tended to focus on tools or platforms for training ML models. Now we have many options for deploying these models to mobile and IoT devices:
- ML on your devices: the options (video), by Laurence Moroney & Daniel Situnayake.
- What’s new with Android ML (video), by Dong Chen, Hoi Lam, Laurence Moroney and Matej Pfajfar.
- AI for Mobile and IoT devices: TF Lite (video), by Tim Davis and Sara Sirajuddin.
- ML Kit: ML for Mobile with Firebase (video), by Shiyu Hu, Christiaan Prins, Sonakshi Watel, Ann Zimmer.
- Introducing Google Coral: Building On-Device AI (video), by Bill Luan.
- Intro to the CameraX Jetpack Library (video), by Vinit Modi, James Fung, Franklin Wu, Trevor McGuire. The camera is super important for computer vision because you need to capture images as input for your ML algorithms. CameraX will make app developers’ lives much easier when implementing on-device ML.
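To give a feel for the deployment story these talks cover, here is a minimal sketch of the TF Lite conversion step, assuming TensorFlow 2.x and a tiny stand-in Keras model (in practice you would convert your own trained model):

```python
import tensorflow as tf

# Tiny stand-in model; in a real app this would be your trained model.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Convert the Keras model to the TensorFlow Lite format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# The result is a flatbuffer you can bundle into a mobile or IoT app
# and run with the TF Lite interpreter on-device.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting `.tflite` file is what tools like ML Kit or the TF Lite interpreter on Android, iOS, or Coral boards actually load.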
Getting started with ML
I write blog posts, run workshops / study jams and create video courses on machine learning. So I’m always interested in the “getting started with ML” sessions because I’m curious to see how others teach the subject. In addition, there is always something new to learn, even in the beginner sessions!
If you are new to ML, definitely check out these sessions:
- ML Zero to Hero (video), by Karmel Allison & Laurence Moroney.
- Live Coding A ML Model from Scratch (video), by Sara Robinson.
- Getting Started with TensorFlow 2.0 (video), by Josh Gordon and Paige Bailey.
Advanced and Future ML
If you are already an expert in ML, check out these sessions:
- Swift for TensorFlow (video), by James Bradbury and Richard Wei.
- Federated Learning: Machine Learning on Decentralized Data (video), by Emily Glanz and Daniel Ramage.
- Cutting Edge TensorFlow: New Techniques (video), by Mike Bendensky, Sofien Bouaziz, Elie Bursztein and Josh Dillon.
- TensorFlow Extended: ML Pipelines and Model Understanding (video), by Kevin Haas, Tulsee Doshi and Konstantinos Katsiapis.
- Deep Learning to Solve Challenging Problems (video), by Jeff Dean.
Codelabs
Martin Gorner wrote four codelabs to help people get started with deep learning on TPUs.
You can check out all these other I/O codelabs here.
Sandbox demos & Office hours
I visited Sandbox G for the AI/ML demos and office hours. My favorite was the #DanceLike app demo. I got to learn how to dance and, at the end, received a video of me dancing. The app showcases the power of TensorFlow Lite and on-device ML, combining the latest segmentation, pose, and GPU techniques, all running on-device. It makes my moves look great, as if I could dance like a pro.
Watch this video to see all the cool ML & AI sandbox demos at I/O 2019.