Google I/O 2018 Review: Hello AI Overlords

James Friedman
7 min read · May 10, 2018


Keynote Address

Over the course of the past 8 months, I’ve been hard at work maintaining my open source RMWC library, which is built around one of Google’s technologies. Along the way I got to connect with some members of the Material Components team, and they casually suggested I drop by Google I/O, Google’s annual developer conference, this year. Under normal circumstances I would’ve politely ignored that, but the timing sat conveniently between an existing trip to San Diego and a trip to Yellowstone National Park. Not only did I/O sound like a much better alternative to two cross-country flights, it’s also a well-known, well-attended promised land for lifelong geeks such as myself.

Through a first-timer’s eyes

If you’ve ever been to a conference for anything, you know the drill. There’s usually a keynote address to kick the whole thing off, followed by smaller topic-based sessions, breakouts, plenty of booths to visit, food, and swag. I/O followed the tried-and-true pattern, with the small twist that it was held outdoors at an amphitheater, surrounded by large temporary tents for the sessions and small geodesic domes focused on specific topics. Google launched into a press-friendly keynote showcasing the products and progress its teams have made over the past year, no doubt fueled by relentless deadlines and thousands of gallons of coffee and Soylent.

Geodesic Domes

While the keynote had some “oohs”, “ahhs”, and “finally!” moments, the real show-stealer was Google’s upgraded Assistant. She / he / it was able to call a hair salon and a restaurant on a person’s behalf and schedule a haircut and a dinner, respectively, with the human on the other end being none the wiser.

It’s like having a front-row seat to the end of the world.

After the first keynote, we were treated to a second one targeted specifically at developers. I haven’t gone back and counted, but I’m fairly certain the speakers said the words “machine learning” and “AI” somewhere between 1 and 5,000,000,000 times over the course of an hour. It’s quite clear where Google’s future is headed.

A mixed bag of sessions

Sessions at I/O were given on a wide range of tech-related topics by a variety of speakers from different backgrounds and fields. If you know me, I’m an excitable optimist and extreme generalist, so it was challenging for me to figure out exactly which things I might be interested in (the answer: everything). I heard sessions on AR / VR, machine learning, design theory, design application, PWAs, web components, and just about everything else you could imagine in between. A few standouts for me:

Session on the new reactive Polymer API

The better sessions tended to be the ones that covered high-level concepts and avoided getting into the weeds. Unfortunately, some of the sessions tried to cover too much. For instance, a session billed as a “High level intro to TensorFlow APIs” would have resonated better if it gave me an overview of all of the things TensorFlow can actually do, instead of trying to show me how to train a machine learning model in 45 minutes using concepts that, even though I’m familiar with them, I had trouble following at that pace.

One thing to note: I don’t think it’s actually worth attending I/O for the sessions themselves. They’re all streamed online, and most don’t offer much participation beyond a Q&A at the end.

Googlers steal the show

These guys and girls are the real reason to attend I/O. You’ll find employees from all across Google, experts dealing in the latest, greatest, and bleeding edge. A lot of them were tethered to their geodesic dome showcases, and you could tell that some were definitely getting tired of giving the same 3-minute tech demo and having the same conversation after two days. But outside of the rote demonstrations, these people all had an incredible amount of knowledge in their specific subject areas and were more than willing to go off script and nerd out. I work with machine learning, and I went in with some computer vision problems I’ve been stuck on for a while. Hearing the words “Yeah, that is a really hard problem to solve, have you thought of this?” from someone on the Cloud Vision team was the best validation I’ve had in a long time. I’m not crazy, this $*@& is hard.

There were also “Office Hours” where you could sit down one-on-one with a Googler on a specific topic and get direct feedback. I was able to do a Design Review with someone from the Material Design team, and even though I took her on a whirlwind tour of the things I’ve been working on, she offered thoughtful, helpful feedback and pointed me to some new parts of Material 2 that were in direct response to the pain points I was experiencing.

AI will eat the world

I would be remiss if I didn’t dedicate some human-generated words to this. Google is betting big on artificial intelligence, and as such, it is enabling an army of developers to bring it to the masses. A quick primer if you’re not familiar with AI: it’s better described as machine learning, a bunch of complex algorithms that can be taught to do things like handwriting recognition, image recognition, and natural language understanding.
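
To make that concrete, here’s a minimal sketch of the handwriting-recognition case using TensorFlow’s bundled Keras API. This is my own toy example, not anything shown at I/O: you feed the algorithm thousands of labeled digit images and it learns to classify new ones.

    # A minimal sketch, assuming TensorFlow with its bundled Keras API.
    # Train a classifier on the MNIST handwritten-digit dataset.
    import tensorflow as tf

    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),    # 28x28 image -> 784 numbers
        tf.keras.layers.Dense(128, activation="relu"),    # learned intermediate features
        tf.keras.layers.Dense(10, activation="softmax"),  # one probability per digit 0-9
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    model.fit(x_train, y_train, epochs=5)  # "teach" it with labeled examples
    model.evaluate(x_test, y_test)         # test it on digits it has never seen

The point isn’t the specific layers. It’s that the machine is never told what a “7” looks like; it infers that entirely from the labeled examples it’s fed, which is exactly why those examples matter so much.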

The problem is, it seems to be the go-to answer for everything right now, and its implications are far-reaching. We are enabling the intelligent machine to make certain decisions for us based on some set of data it learned from. Back in 2015 there were headlines like “Google’s racist AI labels black people as gorillas”. While it was technically true that it was doing this, racist wasn’t the right word: it was biased. You see, whoever trained the algorithm fed it lots of pictures of people and told the computer “this is a human”. Unfortunately, that set of images didn’t have enough ethnic representation, leading the computer to improperly label some groups of people as something other than human. What gets to me is that these algorithms were created by the best of the best in the field, and they still got it wrong. What does it look like when you give this power to the average, run-of-the-mill developer who uses it just because everybody else is and doesn’t take the time to teach the machine the “right” things? I know I’m taking a lot of liberty here, but Isaac Asimov’s “Three Laws of Robotics” break down pretty quickly when the machine doesn’t even identify someone as human.
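
That “teaching” step is where the bias creeps in, and catching it doesn’t require anything fancy. Here’s a toy sketch, again my own illustration with made-up subgroup names, of auditing a training set’s composition before training on it:

    # A toy sketch: check how a training set is spread across subgroups
    # before training. The labels and groups here are hypothetical.
    from collections import Counter

    training_examples = [
        ("human", "group_a"), ("human", "group_a"), ("human", "group_a"),
        ("human", "group_a"), ("human", "group_a"), ("human", "group_b"),
    ]

    counts = Counter(subgroup for _label, subgroup in training_examples)
    total = sum(counts.values())
    for subgroup, n in sorted(counts.items()):
        print(f"{subgroup}: {n}/{total} examples ({n / total:.0%})")
    # group_a: 5/6 examples (83%)
    # group_b: 1/6 examples (17%) <- the model barely sees this group,
    # so it will be far less reliable on it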

In the Q&A after the “Designing for AI” session, I was able to ask a question of Ryan Germick, the creator of the Google Assistant’s personality. After watching AI make a phone call and book a haircut, I asked whether he felt that an AI has a responsibility to identify itself as such. “I don’t know,” he remarked, “it’s definitely a good ethics question.” In the spirit of the artful dodge, though, I’m sure the AI would find a way of avoiding that particular query.

My first and last time

Google I/O 2018

To be honest, I think I had a bit of culture shock with the whole thing. It’s not every day that I’m surrounded by 7,000 of my fellow tech peers. I didn’t want to foot $350 a night for a hotel, so I stayed in an Airbnb hostel, which, I’ve come to find out, is what the valley calls a “Hacker House”. It might as well have been the house from the TV show Silicon Valley, complete with a shared workspace and all of the colorful characters.

While I had a blast during my two days at I/O (I had to leave a day early), I think this will be my first and last time. Still, I highly recommend that everyone who gets a chance to go do so at least once. It’s a hefty price tag, but the overall experience and the direct connection to Googlers are worth it.
