The future is here: 5 things developers flipped for at Google I/O

ASHAR SHOAIB
CBC Digital Labs
5 min read · Aug 7, 2018

I had the chance to attend Google I/O for the first time this May, and it was awesome — packed with incredible presentations and mind-blowing demonstrations.

For those of you who don’t know, I/O is an annual developer conference where Google unveils new hardware and software, and announces updates to its existing products and services. As a Senior Android Developer on CBC’s Native Apps and Emerging Products team (a.k.a. Delta Force), attending I/O gave me and my colleague Dwija the opportunity to interact with thousands of other developers, get hands-on experience with new technologies, and meet Google engineers in person.

From left to right: Ashar Shoaib — Senior Developer CBC, Abhishek Sisodia — Mobile Developer Scotia Digital Factory, Dwija Patel — Senior Software Engineer CBC, Mausam Doshi — Android at Walmart eCommerce. (Ashar Shoaib/CBC)

Here are a few things from the conference that really caught my attention:

1. Google Duplex: The future is finally here!

The cool factor: I’ll finally have my own personal secretary.

Why I’m pumped: AI is further along than I thought.

Artificial intelligence will be able to drive your car! It will take away your job! It’ll tell you exactly what you’re craving! There’s a lot of noise about AI but nothing prepared me for Google Duplex — an experimental AI voice system that runs as an extension to Google Assistant.

When the AI uttered its first “Umm” and “Mmmhmm,” I knew we were in for a treat. Duplex carried on a natural phone conversation, booking a restaurant reservation with an employee who was a non-native English speaker. It was remarkable.

Some critics have suggested the realistic flow raises some ethical concerns. Should people be told that they’re not talking to a real human? Where do we draw the line? Important considerations for developers to ponder as we move deeper into this exciting new field.

2. John Legend to give Google Assistant all of him

The cool factor: John Legend will wake me up in the morning!

Why I’m pumped: I can finally speak to Assistant in my native tongue.

Google Assistant will be available in 30 languages across 80 countries, but that’s just the beginning. Six new voices are coming to Assistant, including singer John Legend’s.

But wait, it gets even better. Continued Conversation makes Google Assistant feel more natural: users no longer need to say a trigger phrase like “Hey Google” before every command.

3. Android 9 Pie — An Operating System tailored to your habits

The cool factor: Android Studio navigation editor = less coding & more fun.

Why I’m pumped: The new Jetpack libraries will significantly change how I architect projects.

With AI integrated at its core, Android 9 Pie (formerly known as Android P) leverages machine learning to improve battery life through two new features, Adaptive Battery and Adaptive Brightness, which optimize battery usage and screen brightness based on user behaviour. Another tool, Dashboard, is designed to give users a sense of digital well-being by providing a breakdown of their daily usage.

Google also released two new Android dev tools that I’m excited to use: Slices and Jetpack. With Slices, developers can expose widget-like components of their apps in the Google Search bar, so users can get content outside the native app experience.
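As a rough sketch of what that could look like, here’s a minimal Slice provider using the androidx Slice builders (which were still in alpha at the time). The `NewsSliceProvider` name and the `topHeadline()` helper are my own hypothetical stand-ins, not anything CBC has shipped:

```kotlin
import android.net.Uri
import androidx.slice.Slice
import androidx.slice.SliceProvider
import androidx.slice.builders.ListBuilder

// Hypothetical provider exposing a single headline row as a Slice.
// A real provider must also be declared in AndroidManifest.xml, and a
// displayable Slice generally needs a primary action attached as well.
class NewsSliceProvider : SliceProvider() {

    override fun onCreateSliceProvider(): Boolean = true

    override fun onBindSlice(sliceUri: Uri): Slice? {
        val context = context ?: return null
        return ListBuilder(context, sliceUri, ListBuilder.INFINITY)
            .addRow(
                ListBuilder.RowBuilder()
                    .setTitle("CBC News")
                    .setSubtitle(topHeadline()) // hypothetical helper
            )
            .build()
    }

    // Stand-in for whatever would fetch the latest headline.
    private fun topHeadline(): String = "Latest headline goes here"
}
```

The idea is that Search (or any Slice host) binds `sliceUri` and renders the row without ever launching the app.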

This is a Photoshop mockup of how I envision Android Slices being integrated with our current apps to expose CBC app content in the Google Search bar. (Ashar Shoaib/CBC)

Jetpack is a set of libraries, tools and architecture guidance for Android developers. It has a few features I’m excited about: binding data using observables, easier lifecycle management, an in-app navigation tool and a work manager for background jobs.
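To give a flavour of the work manager piece, here’s a minimal sketch against the androidx WorkManager API as announced (the API was early-stage at the time, and the `RefreshWorker` class and its job are my own invention):

```kotlin
import android.content.Context
import androidx.work.OneTimeWorkRequestBuilder
import androidx.work.WorkManager
import androidx.work.Worker
import androidx.work.WorkerParameters

// Hypothetical background job that refreshes cached content.
class RefreshWorker(context: Context, params: WorkerParameters) :
    Worker(context, params) {

    override fun doWork(): Result {
        // Real work (e.g. fetching fresh articles) would go here.
        return Result.success()
    }
}

// Enqueue the job once; WorkManager guarantees it runs even if the
// app process exits before the work completes.
fun scheduleRefresh(context: Context) {
    val request = OneTimeWorkRequestBuilder<RefreshWorker>().build()
    WorkManager.getInstance(context).enqueue(request)
}
```

The appeal is that WorkManager picks the right scheduling mechanism (JobScheduler, AlarmManager, etc.) for the device, so you write the job once.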

4. Looking for that perfect lamp? Google Lens can help

The cool factor: A wise fox will show me the way in the new Google Maps AR view!

Why I’m pumped: Google Maps will now use my phone camera to tell me about my surroundings.

Lens, announced in 2017, is an AI-powered image recognition tool. Google Maps is going to integrate Google Lens and leverage its computer vision technology. This new feature will help users navigate by identifying notable landmarks through an augmented reality view.

This cute animated guide will help you get where you want to go. (Google)

Lens is also launching two new features:

Smart Text Selection — Now you can use your camera to copy and paste text from printed pages and images.

An example of how Google uses Lens to select print text. (Ashar Shoaib/CBC)

Style Match — If you’re looking for a lamp you saw at a friend’s place, you can now point your camera at it to identify the item and find similar products.

An example of how Google Lens works. (Google)

5. Smart Compose — For those who hate writing emails

The cool factor: Writing your own emails is a thing of the past.

Why I’m pumped: I’ll never mess up an email ever again!

Gmail introduced Smart Compose, a new tool that utilizes machine learning to suggest phrases as you type. It speaks for itself (literally).
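Smart Compose itself is powered by neural language models, but the basic idea of completing a phrase from what you’ve typed so far can be sketched with a toy prefix matcher (the phrase list here is entirely made up):

```kotlin
// Toy illustration of phrase completion, loosely in the spirit of
// Smart Compose (which actually uses neural language models).
val commonPhrases = listOf(
    "Thank you for your time",
    "Thanks for the update",
    "Looking forward to hearing from you",
    "Please let me know if you have any questions"
)

// Return the rest of the first known phrase that starts with the
// typed prefix, or null if nothing matches.
fun suggestCompletion(typed: String): String? =
    commonPhrases
        .firstOrNull { it.startsWith(typed, ignoreCase = true) }
        ?.drop(typed.length)

fun main() {
    // The UI would render the suggestion as grey "ghost text" that the
    // user accepts with Tab.
    println("Thank you" + (suggestCompletion("Thank you") ?: ""))
}
```

The real system ranks candidate continuations by probability given the whole email so far; the prefix lookup above just shows the interaction pattern.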

An example of how Smart Compose suggests text for you. (Ashar Shoaib/CBC)

So, what’s next?

With such a large pool of amazing tools to choose from, I’m excited to tinker with some of these new technologies. Though Slices aren’t yet available to users, I’ve begun work on integrating them into our CBC News app, and I look forward to integrating Google Actions and Assistant in the near future.

I hope this article is a starting point for developers (and anyone interested) to share ideas about how we can adapt and use these new technologies to better serve our audience. Let’s build cool things together!

Check out some more pictures from my trip to I/O here: https://photos.app.goo.gl/R5Ay7ahtSx5oUenN9
