Buğra Oral
Published in Commencis
May 30, 2016 · 11 min read


Google I/O 16 — A Long Summary

Google I/O 2016 was held between the 18th and 20th of May at the Shoreline Amphitheatre in Mountain View, and we were lucky enough to witness what Google has to offer this year.

First of all, after 10 years Google moved I/O back to Mountain View, and the organization was not good. It was very hard to attend the sessions; only if you were determined enough could you get into the session you were interested in, after waiting an hour outside the stage under the burning sun. I must admit that organizing a conference with 7,000 attendees is tough, but it still could have been better.

Aside from organizational mishaps, the content delivered was promising. I must stress that at a worldwide conference with people attending from different disciplines, everybody has different expectations about content and the level of technical detail. Let’s not forget that this is mainly a developer conference. Given the nature of the attendees, the content was well balanced, although I would have appreciated more thorough explanations and more technical sessions.

As usual, Sundar Pichai, CEO of Google, guided us through the keynote, laying out the basis of their work and the underlying key elements.

Aside from the new products and platform updates, the emphasis was on Machine Learning (ML) and Natural Language Processing (NLP), which I personally think were underappreciated. Almost all of the interesting updates and new products utilize these developments. Everything from voice recognition to Google Fit, Google Assistant and Tango is possible because of the recent progress made in this area. One last thing to keep in mind before drilling down to products is the new Tensor Processing Unit, a processor designed specifically for ML tasks and TensorFlow (Google’s machine learning library). Sundar Pichai claimed the processing power was boosted by an order of magnitude, and this is how Google answers any type of semantic query within milliseconds. These processing units have been used for over a year for all ML tasks, including AlphaGo.

Let’s drill down into the new products and platform updates.

Google Assistant is kind of like the new Google Now, merging all of Google’s capabilities into a two-way dialog. It is everywhere, from Search to Android, Home and messaging. It can answer context-aware questions, which is a big challenge for any assistant system. The suggestions are tailored for the user, and it can also take actions like purchasing tickets or making reservations.

Google Home is the newly announced hardware product. It is basically Google Assistant in a vase, which you can put anywhere in your house to help you manage day-to-day tasks. It is something very close to the home assistant robots we saw in the movies in the 90s, except it can’t move. Yet?

The interesting point of Google Home is that it can manage devices like your TV, music player and potentially any device connected to your network. Last year at I/O, Weave and Brillo were announced as the ground floor for the Internet of Things. Google Home could potentially become the center of the IoT devices we are going to see at home.

Allo with Google Assistant

Allo is yet another messaging app, but it is intelligent, with Google Assistant built in. Smart replies powered by the Assistant can generate in-context answers, including responses to an image message. Suggestions and actions can be used at any point in the conversation, and they will be context aware too.

There are some concerns about privacy: there will be end-to-end encryption, but it is not enabled by default, kind of like incognito mode in Chrome. The statement is open to discussion.

All these suggestion features could become a big part of our lives, but we are going to see whether they will violate any ethical boundaries and try to manipulate us, as explained in Tristan Harris’s post.

Duo is a new video call app, much like Apple’s FaceTime. It is phone-number based and looks designed to rival FaceTime, which does not support Android, whereas Duo supports both platforms. This is a key point to steal the show when we consider Android’s usage over iOS worldwide. A cool differentiator feature is called Knock Knock, which is basically seeing the person calling you before answering. It could prove very useful for understanding why someone is calling you, their environment, their mood or anything else we can pick up from their stream beforehand.

For Allo & Duo the success criteria will be if they can replace or surpass the existing solutions.

Virtual Reality is happening.

Daydream is a new VR platform built on top of Android N. It will be the platform for us developers to work on to deliver VR apps. The grand challenge in bringing phones VR capabilities is something called motion-to-photon latency, which is, in a nutshell, how quickly what you see responds to your head movement; keeping it low (along with good resolution) is what lets us experience VR without lag or jumps. So they have drawn up a spec called Daydream Ready to ensure all devices work seamlessly. These devices include a phone, a headset and a controller. The blueprints have been handed out to Google’s partners and we will see these devices in fall 2016.

The game engines Unreal Engine and Unity have already started working on it: Unreal Engine is ready to go and Unity has a plugin to use alongside the existing IDE. A virtual version of the Play Store has already been developed.

Project Tango is the most exciting part of VR. The project started in 2012 and was announced around 2014. It is not new, but the session was a very big wow for every participant. They focus on perception of space and movement, including area learning, motion tracking (response to movement), depth perception, augmented reality and image processing. During the demo, they performed all these tasks with examples from indoor navigation, games and furniture replacement, and summoned dinosaurs from the past right onto the stage. In the not-so-distant future, these developments will shape the game and film industries and inspire new creative apps.

Firebase, Firebase, Firebase

Firebase was originally a real-time cloud database launched in 2012 and acquired by Google in 2014. Today, it is explained as “The tools and infrastructure you need to build better apps and grow successful businesses”. From providing a comprehensive backend service to analytics and push messaging, it is simply web services made easy. Google seems to be getting behind the product full force: around 30 sessions were presented for all audience groups, especially developers, product managers and marketers.

Out of the box, the platform provides web services, a great console, targeted push notifications, analytics, crash reporting, testing, authentication, app indexing, ads and many more features. It looks to be the golden sword of startups, games and freelancers. With all these promising aspects, we are going to see how well it will be adopted by large corporations.

They emphasize performance and scalability along with ease of use, and to prove their point there was a session in which they built a cross-platform messaging app (Android, iOS and web) in 40 minutes.
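
To get a feel for how little code that takes, here is a minimal sketch of the Realtime Database part of such a chat app. The “messages” path and the ChatMessage class are just illustrative names I picked, not something shown in the session.

```java
// Minimal sketch of a Firebase Realtime Database chat feed.
// The "messages" path and the ChatMessage class are hypothetical examples.
import com.google.firebase.database.ChildEventListener;
import com.google.firebase.database.DataSnapshot;
import com.google.firebase.database.DatabaseError;
import com.google.firebase.database.DatabaseReference;
import com.google.firebase.database.FirebaseDatabase;

public class ChatRepository {

    public static class ChatMessage {
        public String author;
        public String text;

        public ChatMessage() {} // empty constructor required by Firebase for deserialization

        public ChatMessage(String author, String text) {
            this.author = author;
            this.text = text;
        }
    }

    private final DatabaseReference messagesRef =
            FirebaseDatabase.getInstance().getReference("messages");

    // Push a new message; every connected client receives it in (near) real time.
    public void send(String author, String text) {
        messagesRef.push().setValue(new ChatMessage(author, text));
    }

    // Listen for messages added by any client.
    public void listen() {
        messagesRef.addChildEventListener(new ChildEventListener() {
            @Override
            public void onChildAdded(DataSnapshot snapshot, String previousChildName) {
                ChatMessage message = snapshot.getValue(ChatMessage.class);
                // update the UI here
            }

            @Override public void onChildChanged(DataSnapshot snapshot, String previousChildName) {}
            @Override public void onChildRemoved(DataSnapshot snapshot) {}
            @Override public void onChildMoved(DataSnapshot snapshot, String previousChildName) {}
            @Override public void onCancelled(DatabaseError error) {}
        });
    }
}
```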

Google Cloud Messaging has also been moved into Firebase and renamed Firebase Cloud Messaging.
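
On the client side, receiving a push through FCM boils down to extending a service; a rough sketch follows. The “campaign” data key is a hypothetical payload field of my own, and the service also needs the usual manifest declaration.

```java
// Minimal sketch of handling an incoming Firebase Cloud Messaging push.
// Declare this service in AndroidManifest.xml with the
// com.google.firebase.MESSAGING_EVENT intent filter.
import com.google.firebase.messaging.FirebaseMessagingService;
import com.google.firebase.messaging.RemoteMessage;

public class PushMessagingService extends FirebaseMessagingService {

    @Override
    public void onMessageReceived(RemoteMessage remoteMessage) {
        // Data payload sent from the Firebase console or your own server.
        if (remoteMessage.getData().size() > 0) {
            String campaign = remoteMessage.getData().get("campaign"); // hypothetical key
            // react to the payload, e.g. refresh content or show a notification
        }

        // Notification payload (shown automatically when the app is in the background).
        if (remoteMessage.getNotification() != null) {
            String body = remoteMessage.getNotification().getBody();
        }
    }
}
```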

Buzzfeed app being loaded as the user clicks the link

With Instant Apps, when a user navigates to a mobile web page we now have the ability to go directly to the mobile app, even if it is not installed on the device. This is possible thanks to deep linking and app modularization: the device downloads only the parts of the app it needs and runs them just in time, as any native app would. In the long run, we might see a big shift from mobile web to native apps under this influence.

The new version of search will be able to show us result pages as previews on the search results page, in a very fast fashion using Accelerated Mobile Pages (AMP), and we will be able to swipe through them before going to the actual page itself.

Alright, let’s talk mobile, ummm, I mean Android:

Android’s new version, N, was made available as a beta during the conference. We are already familiar with the new features, atypically announced a couple of months earlier. At I/O we learned the whys and the details.

There are huge differences in how apps run on N. Previously, apps were compiled during installation and the compiled code was stored on the device, which resulted in apps taking up huge amounts of space when installed and big pains when updating, because the code had to be recompiled (~20 minutes of update time). With N, compilation happens just when the code needs to be executed, and a hybrid runtime model (mixing interpretation, just-in-time and ahead-of-time compilation) keeps the compilation overhead at bay. In a nutshell, there will be a speed difference when running apps. They claim to have hit a sweet spot; we will see how much they improved with the final release.

The multi-window feature was the eye candy of the early release, and it continues to shine. Having this feature natively will be great, as we have already experienced it on other flavors of Android (Samsung has had it for a while now), and native apps in Chrome will be resizable as well.
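
For app developers, opting in is mostly a manifest flag plus reacting to the change at runtime. The sketch below assumes a hypothetical PlayerActivity; the activity would set android:resizeableActivity="true" in its manifest entry.

```java
// Sketch of reacting to N's multi-window mode.
import android.app.Activity;

public class PlayerActivity extends Activity {

    // New callback in API 24: fired when the activity enters or leaves split-screen.
    @Override
    public void onMultiWindowModeChanged(boolean isInMultiWindowMode) {
        super.onMultiWindowModeChanged(isInMultiWindowMode);
        if (isInMultiWindowMode) {
            // e.g. hide heavy chrome and keep only the essential content visible
        } else {
            // restore the full-screen layout
        }
        // Activity#isInMultiWindowMode() can also be queried at any time.
    }
}
```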

Wait, native apps in Chrome?

Yes, during a Chrome session they announced that Chromebooks will get the Play Store and will be able to download any Android app, as long as the developer permits it. The multi-window and resizing features come in handy at this point.

Play Store running on Chromebook

There is a detail many people have forgotten over the years: ARC Welder was announced a couple of years back to run Android apps in Chrome, whether your device runs Windows, Mac, Linux or Chrome OS. We checked this out a while back, and our apps at Monitise run almost without problems on our development machines. This means that we are going to see Android apps on our desktop machines in the not-so-distant future.

With Android Wear 2.0, apps can now be standalone wearable apps, and the devices gain phone-like capabilities over wireless carrier networks. Devices no longer require a phone to be nearby, which means you can leave your phone at home while going for a run and everything will be synced. There will also be a Play Store for wearables.

The new system comes with a brand new system UI, along with new input methods. We can now draw characters on the screen for writing, or use a keyboard.

A new Material Design guideline for wearables has been published, which every developer and designer should check out.

Watchface with data from different apps

The watch face system has also evolved to support data from multiple apps on the display, through the new Complications API. It is much like widgets, in a more slimmed-down fashion.
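
Based on the Wear 2.0 preview, a complication is fed by a provider service roughly like the one below. The step-count provider and its values are purely hypothetical, and the preview APIs may still change before release.

```java
// Rough sketch of a complication data provider from the Wear 2.0 preview.
// A watch face that supports complications asks providers like this for data.
import android.support.wearable.complications.ComplicationData;
import android.support.wearable.complications.ComplicationManager;
import android.support.wearable.complications.ComplicationProviderService;
import android.support.wearable.complications.ComplicationText;

public class StepCountProviderService extends ComplicationProviderService {

    @Override
    public void onComplicationUpdate(int complicationId, int type, ComplicationManager manager) {
        if (type == ComplicationData.TYPE_SHORT_TEXT) {
            ComplicationData data = new ComplicationData.Builder(ComplicationData.TYPE_SHORT_TEXT)
                    .setShortText(ComplicationText.plainText("5,300")) // hypothetical step count
                    .build();
            manager.updateComplicationData(complicationId, data);
        } else {
            // We only provide short text in this sketch.
            manager.noUpdateRequired(complicationId);
        }
    }
}
```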

Android Pay is also evolving and expanding. It is now available in the UK and will continue to expand to Australia and Singapore in the upcoming months. They have added simple checkout flows to both mobile web and apps: with a simple button, an authentication dialog pops up asking for a fingerprint and the transaction gets completed. Hopefully we will see more adoption and more countries.

Two great use cases were demonstrated in the session: we can now withdraw cash from Bank of America ATMs with a simple NFC tap, and pay for the Tube in London, again with a simple tap.

We are not ready to leave our credit cards at home just yet, because Pay requires special terminals, but companies like Walgreens now accept Android Pay.

Let’s explore what’s new in developer tools.

The Android Studio 2.2 preview is out. The most sensational feature for me is test recording. We can generate Espresso UI tests using the actual app: it creates test cases by tracking our clicks, gestures and inputs. Writing tests for Android has been a big problem for many years; hopefully this feature will solve a good part of it.
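
The output is plain Espresso code you can keep and edit, something along the lines of the sketch below. LoginActivity and the view ids are hypothetical placeholders for whatever screen you record against.

```java
// Sketch of the kind of Espresso test the new recorder generates from taps and typing.
import android.support.test.rule.ActivityTestRule;
import android.support.test.runner.AndroidJUnit4;

import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.RunWith;

import static android.support.test.espresso.Espresso.onView;
import static android.support.test.espresso.action.ViewActions.click;
import static android.support.test.espresso.action.ViewActions.closeSoftKeyboard;
import static android.support.test.espresso.action.ViewActions.typeText;
import static android.support.test.espresso.assertion.ViewAssertions.matches;
import static android.support.test.espresso.matcher.ViewMatchers.isDisplayed;
import static android.support.test.espresso.matcher.ViewMatchers.withId;
import static android.support.test.espresso.matcher.ViewMatchers.withText;

@RunWith(AndroidJUnit4.class)
public class LoginActivityTest {

    @Rule
    public ActivityTestRule<LoginActivity> activityRule =
            new ActivityTestRule<>(LoginActivity.class);

    @Test
    public void loggingInShowsWelcomeMessage() {
        // Each recorded interaction becomes an Espresso action like these.
        onView(withId(R.id.username)).perform(typeText("bugra"), closeSoftKeyboard());
        onView(withId(R.id.password)).perform(typeText("secret"), closeSoftKeyboard());
        onView(withId(R.id.login_button)).perform(click());

        // Assertions are added for what ended up on screen.
        onView(withText("Welcome")).check(matches(isDisplayed()));
    }
}
```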

Constraint Layout

There is a new layout called ConstraintLayout, which aims to flatten and simplify layout hierarchies and comes with a rich visual editor. While the visual editor looks promising for new Android learners, experienced Android developers probably won’t be using it too much. In addition, the graphical tool for organizing layouts has been greatly improved.

The tools team has built in an APK analyzer tool to explore your APK for optimizations and bad practices. A manifest analyzer has also been added, so that we can inspect the rules enforced by libraries and modules more clearly.

The Awareness API is introduced to make context-aware abilities public. Using this API, apps can now react to the user’s current situation and get an idea of what the user is up to, such as headphone state, nearby places, weather, beacons and so on. For example, with the Fence API an event can be fired when a contextual condition is met: “the user is working out and the headphones are plugged in” could be a good time for Spotify to play a workout playlist.
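
A minimal sketch of that exact example is below, using the Fence API from Google Play services; the fence key, broadcast action and registrar class are hypothetical names, and the GoogleApiClient is assumed to be connected with Awareness.API.

```java
// Sketch: fire a broadcast when the user is running and the headphones are plugged in.
import android.app.PendingIntent;
import android.content.Context;
import android.content.Intent;

import com.google.android.gms.awareness.Awareness;
import com.google.android.gms.awareness.fence.AwarenessFence;
import com.google.android.gms.awareness.fence.DetectedActivityFence;
import com.google.android.gms.awareness.fence.FenceUpdateRequest;
import com.google.android.gms.awareness.fence.HeadphoneFence;
import com.google.android.gms.awareness.state.HeadphoneState;
import com.google.android.gms.common.api.GoogleApiClient;

public class WorkoutFenceRegistrar {

    public void register(Context context, GoogleApiClient client) {
        // Contextual conditions, combined with AND.
        AwarenessFence workoutFence = AwarenessFence.and(
                DetectedActivityFence.during(DetectedActivityFence.RUNNING),
                HeadphoneFence.during(HeadphoneState.PLUGGED_IN));

        // Delivered to a BroadcastReceiver when the fence state changes.
        Intent intent = new Intent("com.example.FENCE_WORKOUT"); // hypothetical action
        PendingIntent pendingIntent =
                PendingIntent.getBroadcast(context, 0, intent, PendingIntent.FLAG_UPDATE_CURRENT);

        Awareness.FenceApi.updateFences(client,
                new FenceUpdateRequest.Builder()
                        .addFence("workout_fence", workoutFence, pendingIntent)
                        .build());
    }
}
```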

In addition to all these great products and APIs, Google’s ATAP team announced very intriguing and potentially ground-shaking new products, which were surprisingly not a big part of the conference.

Project Soli Controls

Project Soli is a sensor which tracks micro-motions using a technology similar to radar. This could be a game changer, because the only input device is the human hand. It’s expected to be used with wearables first and grow from there. Imagine scrolling through a list by rubbing your fingers together; it is mind-boggling. Developer kits will be released in 2017.

Project Ara is a modular phone in which you can remove and replace any component other than the CPU, RAM and speaker. The idea was introduced a while back under the name Phonebloks and gained public interest very quickly. A developer edition phone will be out in fall 2016, and the full release will come in 2017, creating many market opportunities for both consumers and component manufacturers.

Project Jacquard is wearable tech embedded in clothes: the fabric acts as a touchpad. A potential area of use is managing your phone while you are on the move. The first jacket will be released in partnership with Levi’s in 2017.

Project Abacus is a new way of authentication. The aim is to eliminate passwords, fingerprints and PINs. By collecting data in the background and combining the user’s typing, walking, speed and voice patterns, facial recognition and more, it creates a Trust Score. Based on this score, the system chooses whether or not to let a user in. Potentially this could replace two-factor authentication and could be used in banking apps as well. API tests will start in June 2016 with banks, and it will be available for developers by the end of 2016.

To summarize: this year at I/O, apart from the organizational problems and a few sunburns, we got a lot of updates and new products. There is no single wow product, but there are numerous good updates, promising new projects and continuous improvements.
