My Google I/O 2018 Favorites

Victoria Gonda
5 min read · May 9, 2018


This is my first time at Google I/O and I’m having an amazing time. I’m excited about all of the new things Google has to offer. Starting with the keynote, there were some great things discussed. An underlying theme is how Google is using machine learning and artificial intelligence to help in all areas of life, including mobility, communication, and information gathering. Here are some of the things that stuck out to me in the keynote that I’m excited to share.

Accessibility

Starting off with a topic I find very important, I appreciated the efforts they are making for accessibility. Additions to the keyboard that provide and improve support for Morse code make communication much easier for people with mobility impairments. They are also improving text predictions for these users, and using machine learning to find more ways to make their lives easier. I could go on all day about accessibility, so I’ll leave the rest for another post.

Assistant

Improvements are also being made to the Google Assistant. It is becoming more and more conversational. In the coming months, more voice options are being added, and machine learning is being used to make the voices sound more natural with less recording time in the studio. They also shared Continued Conversation: with it, you don’t have to say “Hey, Google” before each command in a conversation. Other improvements include being able to give multiple commands in the same sentence, or interaction, and better usability for more types of users.

Another thing that stuck out to me is the ability for the Assistant to complete phone calls for you. By asking the Assistant to schedule an appointment for you on a given day, it can complete a call in the background, using natural conversation to speak with the human on the other end of the line.

Visual Assistant

There is also a visual aspect being added to the Assistant. On dedicated visual assistant devices, it combines what we know from the Assistant with a rich visual experience. For example, you can view information about a recipe you’re cooking while you use the device in the kitchen.

There are visual improvements when you’re using the Assistant on a mobile device as well. When requesting information, the full visual space on the screen is used to supplement the audio response.

At other times, when visuals should be minimal, such as while driving, there are fewer visual cues to distract the driver. The Assistant is being added to Maps for navigation assistance like this.

Digital Well-being

Google also wants to help with our digital well-being. As useful as our devices are, they can also distract us from being fully present in our lives. The keynote introduced tools to help us be more aware of how much time we are spending on our devices, and to provide reminders to take a break when we’ve been on them for too long. One of the features I’m particularly excited about is an addition to Do Not Disturb that hides visual notifications in addition to silencing them.

News

For me it’s important to stay informed on current events, taking in information from multiple sources with the knowledge that everything has bias. Google is making this even easier with Google News. By collecting information from multiple trusted sources and bringing it together in one location, it is easier to stay informed.

Android P

It was exciting to see some of the new things coming in Android P. I’m interested in the simpler navigation, with a single soft key handling home, recent apps, and all apps. I think this design looks cleaner, and I’m curious about the usability implications, especially as people learn the gestures to use it.

Smart battery allocation that saves power by learning which apps you use the most, and smarter adaptive screen brightness, are also things I’m looking forward to.

From a developer standpoint, I’m really interested in using App Actions and Slices as entry points into our apps, as in the sketch below. By being able to surface your app from interactions in Google, I think we can all make navigation easier for our users and improve app engagement.
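To make that concrete, here is a minimal, hypothetical sketch of a SliceProvider that could back one of these entry points. It assumes the Jetpack Slices builders roughly as previewed at I/O; the exact builder signatures may differ between releases, and OrderCoffeeSliceProvider is an invented name for illustration.

```kotlin
import android.net.Uri
import androidx.slice.Slice
import androidx.slice.SliceProvider
import androidx.slice.builders.ListBuilder

// Hypothetical provider exposing a single-row Slice. Builder signatures
// are from the preview-era Jetpack Slices library and may change.
class OrderCoffeeSliceProvider : SliceProvider() {

    override fun onCreateSliceProvider(): Boolean = true

    override fun onBindSlice(sliceUri: Uri): Slice? {
        val context = context ?: return null
        // INFINITY means the Slice content does not expire.
        val listBuilder = ListBuilder(context, sliceUri, ListBuilder.INFINITY)
        val row = ListBuilder.RowBuilder(listBuilder)
            .setTitle("Order coffee")
            .setSubtitle("Your usual: flat white")
        return listBuilder.addRow(row).build()
    }
}
```

The provider is registered in the manifest like any ContentProvider, and surfaces such as search can then render the row without launching the full app.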

You can try out P Beta now at android.com/beta.

ML Kit

Google also introduced ML Kit. With it, you can take advantage of their APIs to add features such as image labeling, face detection, and smart reply to your app without needing to do the machine learning yourself. I’m excited about the possibilities this opens up.
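As a taste of what this looks like in code, here is a minimal sketch of on-device image labeling, assuming the launch-era ML Kit for Firebase SDK; class and property names may differ in later releases.

```kotlin
import android.graphics.Bitmap
import android.util.Log
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

// Labels a bitmap on-device; results arrive asynchronously as a Task.
fun labelImage(bitmap: Bitmap) {
    val image = FirebaseVisionImage.fromBitmap(bitmap)
    val detector = FirebaseVision.getInstance().visionLabelDetector

    detector.detectInImage(image)
        .addOnSuccessListener { labels ->
            // Each result pairs a label with a confidence score.
            labels.forEach { Log.d("MLKit", "${it.label}: ${it.confidence}") }
        }
        .addOnFailureListener { e -> Log.e("MLKit", "Labeling failed", e) }
}
```

Because this variant runs on-device, it works offline; ML Kit also offers cloud-backed versions of several APIs for higher accuracy.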

Maps

Maps is also being improved. Using satellite imagery, businesses and other places can be added in rapidly developing areas and on unnamed roads. There is also better ETA precision that takes into account the type of vehicle.

The feature I’m most excited to use is the way they’re integrating the camera. Often I use public transportation, get off at an unfamiliar stop, and am unsure which direction I need to head. By using the camera and analyzing your surroundings, it can tell you which direction you’re facing, with the map still at the bottom of the screen.

Waymo

There was also information shared about Waymo’s self-driving cars, and the self-driving ride sharing service they have been testing in Arizona. Deep learning is being used to make even better predictions, and safer “drivers.” The newest efforts include navigating in bad weather, such as snow, by removing the noise that precipitation creates in the sensor data.

This was only a partial list of the things shared in the keynote. I encourage you to watch the keynote, along with the other recorded talks, to join in the excitement.
