Google I/O 2019 Tech Keynote in 5 minutes

Marco Bellinaso
6 min read · May 29, 2019


What’s new in Android, machine learning and TensorFlow, the web platform, Firebase, Google Assistant and Flutter

This article provides a quick bulleted list of all the announcements made at the Google I/O 2019 Tech Keynote, which you can watch in full here:

Every bullet point has a timestamp, so if you want to watch the part of the video that covers it, just play the video above and jump to that time. Or, rather than seeking manually, read this article on Fastrecap.com, where the timestamps are links that do it for you.

Android

  • (8:00) Kotlin: more than 50% of pro developers are using it, and it’s the fastest growing language on GitHub. Many new Jetpack APIs are going to be introduced Kotlin-first.
  • (9:20) Kotlin/Everywhere is a series of community-led events to learn the essentials of Kotlin in Android and Google Cloud Platform.
  • (10:05) Jetpack, which includes the capabilities of the Support Library and adds the Architecture Components and other libraries, is used by 80% of the top 1,000 apps. Google is now introducing new libraries for it:
    • (10:50) CameraX: makes camera access consistent across Android versions all the way back to Lollipop, makes code more concise, and has extensions to support HDR, Portrait, Night Mode and more (see the sketch after this list).
    • (11:57) A couple of Architecture Components recently reached version 1.0: WorkManager (which simplifies the scheduling of deferrable background work; see the sketch after this list) and Navigation (for easier creation and editing of in-app navigation). They are working on adding RxJava and coroutines support to Room, and on deeper coroutines integration in Lifecycle and LiveData. They are also working on new components, such as SavedState for ViewModel (to make it easier to preserve data across app restarts) and a new benchmarking module (for easier performance testing).
  • (12:53) Jetpack Compose is a new reactive UI programming library that will be developed publicly in AOSP (see the sketch after this list).
  • (13:45) The Android Studio team stopped feature development for 6 months and focused on making it more stable. Beta 3.5, available now, includes fixes for more than 400 high-priority bugs, like crashes and performance issues. Instant Run was rewritten from the ground up and now works much better.
  • (15:00) Android App Bundle is already in use by 80,000 apps, with an average size saving of about 20%. Dynamic feature modules are now in beta, and let you choose which features are downloaded, and when.
  • (15:50) In-app Updates is now official: apps can notify users about available updates and let them install the update without leaving the app for the Play Store (demo at (16:20); see the sketch after this list).
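
To give a feel for the CameraX item (10:50), here’s a minimal sketch of binding a camera preview to a lifecycle. This is based on the alpha-era API of the time (class names like PreviewConfig and CameraX.bindToLifecycle changed in later releases), so treat it as illustrative rather than exact:

```kotlin
// Minimal CameraX preview sketch (alpha-era API; names changed in later releases).
// Assumes we're inside an Activity/Fragment that is a LifecycleOwner
// and that the layout contains a TextureView called textureView.
val previewConfig = PreviewConfig.Builder().build()
val preview = Preview(previewConfig)

preview.setOnPreviewOutputUpdateListener { output ->
    textureView.surfaceTexture = output.surfaceTexture
}

// One call replaces the open/close bookkeeping of Camera2: CameraX
// starts and stops the camera automatically with the lifecycle.
CameraX.bindToLifecycle(this, preview)
```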
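
For the WorkManager item (11:57), here’s a minimal sketch of scheduling deferrable background work (SyncWorker is a hypothetical worker name, not from the keynote):

```kotlin
// A hypothetical worker that syncs data in the background.
class SyncWorker(ctx: Context, params: WorkerParameters) : Worker(ctx, params) {
    override fun doWork(): Result {
        // ... perform the sync ...
        return Result.success()
    }
}

// Enqueue it, deferring execution until the device has a network connection;
// WorkManager persists the request across app restarts and reboots.
val request = OneTimeWorkRequestBuilder<SyncWorker>()
    .setConstraints(
        Constraints.Builder()
            .setRequiredNetworkType(NetworkType.CONNECTED)
            .build()
    )
    .build()
WorkManager.getInstance().enqueue(request)
```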
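
And for Jetpack Compose (12:53), the declarative model looks roughly like this (the library was pre-alpha at the time, so package names are omitted and the exact syntax may differ):

```kotlin
// A @Composable function describes UI as a function of its inputs;
// the framework re-executes it when the underlying data changes,
// instead of you mutating a view hierarchy by hand.
@Composable
fun Greeting(name: String) {
    Text(text = "Hello, $name!")
}
```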
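
Finally, the In-app Updates flow (15:50) is driven by the Play Core library. Here’s a minimal sketch of the “flexible” flow, assuming we’re inside an Activity (MY_REQUEST_CODE is an arbitrary request code):

```kotlin
val appUpdateManager = AppUpdateManagerFactory.create(this)

appUpdateManager.appUpdateInfo.addOnSuccessListener { info ->
    if (info.updateAvailability() == UpdateAvailability.UPDATE_AVAILABLE &&
        info.isUpdateTypeAllowed(AppUpdateType.FLEXIBLE)
    ) {
        // Shows the Play-provided update UI without leaving the app.
        appUpdateManager.startUpdateFlowForResult(
            info, AppUpdateType.FLEXIBLE, this, MY_REQUEST_CODE
        )
    }
}
```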

Google Assistant

  • (20:22) Support for the HowTo markup lets you create how-to articles with step-by-step instructions that are surfaced and rendered in a structured way in Search results, both on the web and in Assistant queries (an example of the markup follows this list).
  • (21:40) A new HowTo template lets you quickly turn a YouTube video into a step-by-step tutorial: you fill in a spreadsheet with the information for each step, upload it in the YouTube console, and the video will be presented in a new specialized way (demo at (22:01)).
  • (22:40) New App Actions categories: health & fitness, finance, ride-sharing and food ordering (demo at (23:10) of the START_EXERCISE action in the Nike Training app).
  • (24:10) Developer preview of Interactive Canvas, a tool that lets developers create full-screen experiences for smart display devices that leverage voice, visuals and touch. It’s based on HTML, CSS and JavaScript. Trivia game demo at (24:50).
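
To make the HowTo markup (20:22) concrete, here’s a minimal sketch of the schema.org JSON-LD behind it (the topic and steps are made up, and the full schema supports more properties, like images, tools and supplies):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "HowTo",
  "name": "How to change a bike tire",
  "step": [
    {
      "@type": "HowToStep",
      "name": "Remove the wheel",
      "text": "Release the brake and open the quick-release lever."
    },
    {
      "@type": "HowToStep",
      "name": "Replace the inner tube",
      "text": "Lever the tire off the rim and swap in the new tube."
    }
  ]
}
</script>
```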

Web platform

  • (26:25) Since last year, Chrome loads a page in about half the time it took before. That is largely due to improvements in the V8 JavaScript engine, which is now 2x faster at parsing JavaScript and uses 20% less memory.
  • (27:00) Image Lazy Loading: add a loading="lazy" attribute to your image tags and Chrome will take care of the lazy loading automatically. Network speed is taken into consideration, and the first 2KB of an image are fetched up front to show a placeholder, resulting in a smoother experience (see the snippet after this list).
  • (28:29) Lighthouse (a tool that audits your website and provides recommendations to improve its speed and security) can now be integrated into your CI pipeline, so that every build is checked against limits (called "budgets") that you set for things like image and asset sizes and page load time (a sample budget file follows this list).
  • (30:48) Google Search is now using the latest version of Chromium to index the web, which means you can use the latest web platform features without worrying about having to implement hacks to let Google see your content.
  • (32:52) Web.dev is a new website that helps you keep up to date with new technologies and teaches best practices.
  • (33:27) ChromeOS got support for Linux (on selected devices) last year, which for example lets you run your website in a Linux container. Android Studio can be installed with one click. All Chromebooks launched this year will be Linux-ready out of the box.
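
The lazy loading item (27:00) really is a one-attribute change (width and height are just good practice so the page doesn’t jump while images load):

```html
<!-- Chrome defers fetching this image until the user scrolls near it. -->
<img src="photo.jpg" loading="lazy" width="640" height="360" alt="A photo">
```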
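
And for the Lighthouse budgets (28:29), limits are declared in a JSON file (commonly budget.json) that Lighthouse checks on every run; the thresholds below are made-up examples (sizes in KB, timings in ms):

```json
[
  {
    "resourceSizes": [
      { "resourceType": "script", "budget": 300 },
      { "resourceType": "image", "budget": 250 }
    ],
    "timings": [
      { "metric": "interactive", "budget": 5000 }
    ]
  }
]
```

In CI you’d then point the CLI at it with something like lighthouse https://example.com --budget-path=budget.json and fail the build when a budget is exceeded.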

Machine Learning and AI

  • (36:05) Last year Google announced ML Kit, which lets you get started quickly with common ML tasks. It includes Vision (landmark detection, image labeling, barcode scanning and face detection), Natural Language (language identification and smart replies) and custom models. Some demos of apps that use it at (36:42). (37:05) New features, available now through Firebase:
    • On-Device Translation API for 59 languages (see the sketch after this list).
    • Object Detection and Tracking API, which retailers can pair with the Google Cloud Product Search API to match detected objects against their own product SKUs.
  • (38:27) AutoML lets devs create models based on custom datasets without writing code. It was expanded with:
    • Tables, which takes tabular data as input and creates a prediction model in days rather than weeks.
    • Video Intelligence, which automatically classifies videos with custom labels.
  • (39:44) Cloud TPU Pods beta: a Pod consists of an array of TPUs deployed in Google Cloud that you can use to run a large variety of ML applications (image recognition, object detection, speech recognition, etc.) in an optimized and scalable way.
  • (41:00) TensorFlow 2.0 was recently released in alpha and a release candidate will arrive soon. It’s all about usability, with more intuitive APIs and less code needed.
    • TensorFlow.js lets you build, train and deploy models in the browser and on the Node.js platform.
    • TensorFlow Lite, which runs directly on iOS, Android and embedded systems, has been installed on more than 2B devices in the last 18 months. Demo at (42:45) showing real-time object tracking, used to teach you how to dance and to rate your moves by comparing them to those of a dancer in a video played by the app, all running on the iOS/Android GPU.
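
As a taste of the On-Device Translation API (37:05), here’s a minimal sketch using the Firebase ML Kit client API of the time (the language pair is arbitrary, and error handling is omitted):

```kotlin
// Configure an English -> Italian on-device translator.
val options = FirebaseTranslatorOptions.Builder()
    .setSourceLanguage(FirebaseTranslateLanguage.EN)
    .setTargetLanguage(FirebaseTranslateLanguage.IT)
    .build()
val translator = FirebaseNaturalLanguage.getInstance().getTranslator(options)

// The translation model is downloaded on demand, then runs fully on device.
translator.downloadModelIfNeeded().addOnSuccessListener {
    translator.translate("Where is the train station?")
        .addOnSuccessListener { translated -> Log.d("MLKit", translated) }
}
```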

(46:20) Mobile Development (apps and web)

  • (46:58) AutoML Vision Edge simplifies the workflow to create an image classification model that can run either in the cloud or on device. Demo at (47:46) of an app that identifies dog breeds.
  • (50:43) Firebase Performance Monitoring gives you insights into the startup time and network performance of your iOS and Android apps, and Google announced it’s now available in beta for the web as well, for free. You can see how long a page takes before it displays any content, or before it’s ready for interaction, and you get charts showing the distribution of these metrics across countries and network connections.

Bonus, during the wrap-up

  • (54:50) Flutter, the open source toolkit for building iOS and Android apps from a single codebase, now has a technical preview of Flutter for Web. The New York Times used it for their puzzle app: demo at (55:33) showing the app running on iOS, Android, Mac, Windows and the web.

Who am I and what do I do? I proudly work as a solutions architect in the Mobile Team @ ASOS.com (iOS app | Android app), and we’re always looking for skilled, friendly and talented developers that want to have an impact on how customers shop online. ASOS is the biggest online-only retailer in the UK and, let’s be real, the best tech and fashion company in the world. Some of the technologies we use are Swift for iOS, Kotlin for Android, React and Node on the web front-end, .NET and Azure on the back-end. If that sounds interesting to you and you happen to live in beautiful London (or are willing to move here — after all, it’s the best city in Europe, except for some in Italy!), do get in touch!
