Google Wants Your Phone to See

At the Google I/O keynote today, there were a bunch of new product announcements, demos, and ideas. One of the big ideas winding through all of it is that your phone doesn’t just have a camera anymore: it can see.

Pulling context from family photos

This is a big shift, and it’s going to be part of security:

Just point your camera at wifi credentials to log in

And commerce:

Reviews of the shop you’re standing in front of

And travel:

Every sign is in a language you speak

It can even try to make you a better friend, by finding your friends in pictures and suggesting you send them those pictures:

Share with a single tap

You can even set Google Photos to automatically add to your library any pictures someone takes that include a particular set of people, like your kids. That’s why this guy is taking a picture with a cardboard cutout of his kids, to show the feature working in real time. He took this picture with ‘his kids’ and another picture without them, and only this one showed up in his wife’s shared library:

Seriously, this appeared to work.

It’s not just faces or words either. It’s data. This isn’t a phone number on a website. Calling those is easy. This is a phone number in a picture:

Your phone knows that’s a phone number, and it can call it
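The building blocks for this kind of trick are already within reach of any app. Here’s a minimal sketch, assuming an Android app using Google’s ML Kit on-device text recognition (an assumption for illustration, not necessarily what Google does under the hood): OCR the photo, look for anything shaped like a phone number, and hand it to the dialer.

```kotlin
import android.content.Context
import android.content.Intent
import android.graphics.Bitmap
import android.net.Uri
import android.util.Patterns
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.text.TextRecognition
import com.google.mlkit.vision.text.latin.TextRecognizerOptions

// Find the first thing that looks like a phone number in a photo and open the dialer with it.
fun dialNumberFoundIn(photo: Bitmap, context: Context) {
    val recognizer = TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS)
    recognizer.process(InputImage.fromBitmap(photo, 0))
        .addOnSuccessListener { recognized ->
            // recognized.text is the full OCR'd text of the image.
            val match = Patterns.PHONE.matcher(recognized.text)
            if (match.find()) {
                // ACTION_DIAL only pre-fills the dialer, so no call permission is needed
                // and the user still confirms the call.
                val dial = Intent(Intent.ACTION_DIAL, Uri.fromParts("tel", match.group(), null))
                context.startActivity(dial)
            }
        }
        .addOnFailureListener {
            // OCR can fail on blurry or low-light photos; a real app would surface this.
        }
}
```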

Google is first

Apple’s Photos app doesn’t do any of this. And Google is undercutting any chance Apple has to catch up, a month before WWDC, by releasing all of this for iOS too. The obvious thing here is that the more this gets used, the better it becomes. Going first is a huge advantage.

And of course, that call in the last example above is a strong signal for that business. It’s a signal nobody would have gotten a month ago from someone looking at the picture and typing in the number. Now, in theory, Google can connect the dots between the person taking that photo, the call, and an eventual transaction. The implications for bridging offline and online commerce are enormous.

What does this mean for me?

This shift affects both input and output, and both are important opportunities for those of us who make apps.

  1. If your app needs the user to enter data that they can find around them in the world, in the near future they’ll expect to be able to point their camera at it and capture that information for use in the app. This is a big UX shift, and we should be re-evaluating data entry from the ground up (see the sketch after this list).
  2. There are huge opportunities to seamlessly use this sort of information for in-app functionality. From translation to location to suggestions to ratings, things your phone can see should inform and improve your app’s capabilities.
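To make the first point concrete, here’s a minimal sketch, again assuming an Android app and ML Kit on-device text recognition: the user snaps a photo of the thing they would otherwise type (a code on a router, a receipt total, a confirmation number), and the app pre-fills the form field from it. The field and the expected-value pattern here are hypothetical placeholders for whatever your own app needs.

```kotlin
import android.graphics.Bitmap
import android.widget.EditText
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.text.TextRecognition
import com.google.mlkit.vision.text.latin.TextRecognizerOptions

// Hypothetical pattern for whatever this form expects, e.g. a booking or serial code.
private val EXPECTED_VALUE = Regex("[A-Z0-9]{6,10}")

// OCR a photo the user just took and pre-fill the form field instead of making them type.
fun prefillFromPhoto(photo: Bitmap, field: EditText) {
    val recognizer = TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS)
    recognizer.process(InputImage.fromBitmap(photo, 0))
        .addOnSuccessListener { recognized ->
            // Use the first token that matches what the field expects,
            // and leave the field editable so the user can fix OCR mistakes.
            EXPECTED_VALUE.find(recognized.text)?.let { match ->
                field.setText(match.value)
            }
        }
}
```

Keeping the field editable matters: OCR will sometimes be wrong, and the camera should feel like a shortcut, not a trap. The same recognized text could just as easily feed the second point, from translating a menu to looking up ratings for the shop in the frame.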

I can’t wait to see what tools Google makes available to make the most of this concept.