Google I/O 2018 — A Summary

Charles Samuel D'Monte
6 min read · May 9, 2018


It’s finally here, and the keynote last night gave us a sneak peek at everything Google has up its sleeve, at least for the next year.

As always, Google proudly showcased how it is trying to use Artificial Intelligence to the fullest.

Here’s a brief look at everything Google demoed at its Google I/O 2018 Keynote.

Healthcare

Google will be working with hospitals and medical institutions on a system that can warn doctors 24 to 48 hours before a patient needs immediate medical attention, hopefully giving doctors enough time to diagnose and treat the patient before it’s too late.

Accessibility

People who rely on subtitles have a hard time when speakers on screen interrupt each other without letting one another finish. Google’s Looking to Listen analyses both audio and visual cues to make subtitles more practical and easier to understand.

Google has beautifully applied machine learning in Gboard, its keyboard app, letting users type in Morse code while still getting predictions and suggestions in everyday language.

Compose Suggestions in Gmail

This one is a biggie! Google calls the feature Smart Compose: it uses AI to predict and suggest phrases as you type in Gmail. All Gmail users will get Smart Compose before the end of this month.

Google Photos

AI is to Google Photos what the Iron Man suit is to Tony Stark. AI scans photos and offers smart suggestions to the user.
Here’s a glimpse of what Google Photos will be able to do:

  • Recognize that a friend is present in a group of photos and offer to share those photos with them
  • Suggest automatic enhancements to photos, like brightness and color corrections
  • Automatically convert a photographed document to PDF, to save for later
  • One-tap black-and-white colorization

A more natural Google Assistant

Google is using AI to make Google Assistant’s voice more natural and human-like, and is adding 6 new voices, including that of the one and only John Legend.

“Hey Google” for every single question? Not anymore! You can now have a Continued Conversation (as the feature is aptly named) with the Google Assistant, since it can tell when you’re asking it a series of follow-up questions.

Multiple Actions lets you ask the Google Assistant more than one thing in the same request.

Pretty Please gently nudges children to say ‘please’ when making requests to the Google Assistant, and lovingly commends them when they do. Pretty Please will be available later this year.

Smart Displays

Smart Displays are just a fancy name for a Google Assistant with a face. Ask a Smart Display anything, and the Google Assistant will give you a visual response, say, a YouTube video or a cooking recipe. Smart Displays will start selling in June.

A more visual Google Assistant

Opening the Google Assistant and swiping up gives you a zoomed-out view of your schedule, filled with smart suggestions, reminders, to-dos, and lists. The Assistant’s responses have also been improved visually: full-screen responses, smart-appliance controls right within the conversation, and a cleaner, improved user interface. This revamp will be available soon on both iOS and Android.

Google Duplex

If this feature doesn’t impress you, nothing will. Suppose you ask the Google Assistant to book an appointment with a service that doesn’t have an online booking portal. The Assistant will automatically call the service, literally hold a voice conversation with whoever picks up, and book the appointment for you. And when I say “literally”, I mean it literally :)

This feature is also a big boon for services that don’t have clear opening and closing hours listed on Google or Google Maps and, as a result, receive tons of calls from people requesting that information. Google can simply call all those services and automatically update the information on their individual Google landing pages, saving them from answering a bazillion phone calls.

Google News

You can choose to see world headlines, or just the stories AI has automatically picked because it thinks you’d like to read them. Nothing more, nothing less.
Newscasts and Full Coverage make use of AI to understand the context and to bring together multiple articles on any particular topic, so you get to know the complete story, and not just what one article has to say.
Subscribe With Google allows you to subscribe to any paid content, through your Google account, and then access it anywhere.
All of these features are available starting today.

Android P

Android P now learns how you use your device and smartly adapts to your usage patterns, for example by allocating resources only to the apps you’re most likely to use next, placing suggestions on your home screen, or automatically adjusting your screen brightness based on your personal preferences.

Android P can display just a part (or a Slice, as Google calls it) of an app, anywhere in the operating system, wherever you need it.
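For developers, a Slice is served from a SliceProvider inside the app. Here’s a minimal Kotlin sketch of what that could look like using the AndroidX Slice builders; the URI path, row title, icon, and MainActivity below are placeholders I’ve assumed for illustration, not anything Google showed at the keynote, and the exact builder API may vary across library versions.

```kotlin
import android.app.PendingIntent
import android.content.Intent
import android.net.Uri
import androidx.core.graphics.drawable.IconCompat
import androidx.slice.Slice
import androidx.slice.SliceProvider
import androidx.slice.builders.ListBuilder
import androidx.slice.builders.SliceAction
import androidx.slice.builders.list
import androidx.slice.builders.row

// Sketch: a provider that serves one simple Slice at content://<authority>/hello.
class MySliceProvider : SliceProvider() {

    override fun onCreateSliceProvider(): Boolean = true

    override fun onBindSlice(sliceUri: Uri): Slice? =
        when (sliceUri.path) {
            "/hello" -> createHelloSlice(sliceUri) // hypothetical path
            else -> null
        }

    // A single-row Slice: a title plus a tap action that opens the app.
    private fun createHelloSlice(sliceUri: Uri): Slice {
        val ctx = requireNotNull(context)
        // MainActivity and the drawable are hypothetical app resources.
        val openApp = SliceAction.create(
            PendingIntent.getActivity(ctx, 0, Intent(ctx, MainActivity::class.java), 0),
            IconCompat.createWithResource(ctx, R.drawable.ic_launcher_foreground),
            ListBuilder.ICON_IMAGE,
            "Open app"
        )
        return list(ctx, sliceUri, ListBuilder.INFINITY) {
            row {
                setTitle("Hello from my app")
                setPrimaryAction(openApp)
            }
        }
    }
}
```

The provider also needs to be declared in the app manifest, and surfaces such as search can then request the Slice by its content URI.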

If you want to try out Android P, there’s a beta you can install, provided you have one of the supported Android phones.

Digital Wellbeing

The new Dashboard in Android will give you a comprehensive overview of the time you spend in each of your apps.

You can opt in to other small features that improve your wellbeing, like letting apps you spend too much time in nudge you to take a break, or grouping multiple notifications together instead of showing many individual ones.

Android P can grey out an app’s icon once you’ve used up the time limit you set for it, or fade the whole screen to greyscale at bedtime, whenever you need a break from your phone.

Shush is a cute feature that turns on Do Not Disturb mode when you simply place your phone face down.

ML Kit

ML Kit is a set of APIs available to developers for doing a lot of AI-related stuff on-device or in the cloud: image labeling, text recognition, face detection, barcode scanning, landmark detection, and smart replies.
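To give a flavour of what that looks like for a developer, here’s a minimal Kotlin sketch of on-device text recognition with the Firebase ML Kit SDK announced at I/O; the bitmap source and log tag are placeholders I’ve assumed, and the exact API may differ between SDK versions.

```kotlin
import android.graphics.Bitmap
import android.util.Log
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

// Sketch only: runs ML Kit's on-device text recognizer over a bitmap
// (e.g. a photo of a receipt) and logs whatever text it finds.
fun recognizeText(bitmap: Bitmap) {
    val image = FirebaseVisionImage.fromBitmap(bitmap)
    val recognizer = FirebaseVision.getInstance().onDeviceTextRecognizer

    recognizer.processImage(image)
        .addOnSuccessListener { result ->
            // Each block is a paragraph-like chunk of detected text.
            result.textBlocks.forEach { block ->
                Log.d("MLKitDemo", block.text)
            }
        }
        .addOnFailureListener { e ->
            Log.e("MLKitDemo", "Text recognition failed", e)
        }
}
```

The other APIs (face detection, barcode scanning, image labeling, and so on) follow the same general pattern: build a FirebaseVisionImage, hand it to the relevant detector, and read the results from the returned task.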

Google Maps

Walking is much easier and more fun with Google Maps powered by AI: the camera combines Street View and Maps to instantly show you which way to go, where you are, and what places are nearby.

Google Lens

Google Lens now lets you select text you see through your camera and simply copy-paste it, or Google it for more information.
Google Lens also processes everything it sees in real time, so you get instant information about anything you point your camera at.

Google Lens will soon be available on third-party devices, right in the camera app.

Self-driving cars

Waymo, Google’s sibling company, uses AI to perceive, predict, and make decisions. Even the disturbance caused by rough weather becomes negligible when machine learning takes over. All of this makes Waymo’s goal of running a driverless transportation service by the end of the year far more realistic and attainable.

Like every I/O, Google I/O 2018 is filled with goodies. Google’s use of AI in almost every possible way puts the company on stilts, letting it take much larger steps forward each time. For me, Smart Compose in Gmail and Google Duplex, where the Google Assistant talks over a voice call with a service representative, are definitely the cherries on the cake. There’s obviously a ton of stuff Google has packed into Android P that I wasn’t able to cover here. Now that Google has made the Android P Beta available for third-party devices as well, we’ll get our hands on Android P much, much sooner.

If you want to watch the full keynote, you can see it here.

Only have time for the highlights? Check this out!

I can’t wait to see what Apple has in store for us, given that WWDC 2018 is less than a month away!

I’ll see you guys again with the WWDC 2018 summary.

Thanks for reading!


Charles Samuel D'Monte

Book-lover, Tech-lover, Amateur Photographer, Always-looking-for-something-to-write-about-er