Google I/O: Building a more helpful Google for everyone
Google I/O is a name every developer in the world has heard. It is Google's annual mega developer conference, held this year on May 7–9 at the Shoreline Amphitheatre in Mountain View, CA.
I/O brings together developers from around the globe for talks, hands-on learning with Google experts, and a first look at Google's latest developer products, hardware, software, platforms, and innovations.
The mission behind I/O is to "organise the world's information and make it universally accessible and useful."
Not a developer? This article is still for you: even everyday life is enhanced by Google's products. Do you want to know how? Let's watch this video.
That was great, loved it! So let's learn more about Google I/O.
I/O was inaugurated in 2008 and is organized by Google's executive team under the slogan "Innovation in the Open". The first edition took place on May 28–29, 2008, at the Moscone West convention center in the heart of San Francisco. It was Google's first developer conference, at a time when the company was still mostly a search engine. Google showcased Android at Google I/O 2008, and it remains the company's biggest venture ever. Google also showcased Google App Engine, OpenSocial and Google Gears.
Google I/O 2009 and the editions that followed continued that trajectory. This year, in the two-hour keynote at Google I/O 2019, the company announced a ton of things it has been working on over the last year, with big announcements around Google Lens, Incognito mode in Google Maps, live captioning and transcribing, Duplex on the web, data privacy, augmented reality, the Google Pixel 3a and 3a XL, the Android Q beta and much more.
Earlier this year, Google also wrapped up Cloud Next in San Francisco, with about 30,000 attendees and millions more watching online on YouTube.
If you are curious, here is a table of contents.
1. Google Pixel 3a and 3a XL
2. Augmented reality in search
3. Google Lens
4. Duplex on the Web
5. Incognito mode in Google Maps
6. Android Q beta
7. Podcasts and Google News
8. Nest Hub and Nest Hub Max
9. Google's next-gen Assistant
10. Project Euphonia
11. Live Relay (Federated Learning)
*Google Pixel 3a and 3a XL*
Google announced budget variants of its Pixel 3 lineup, the Pixel 3a and Pixel 3a XL, priced at Rs 39,999 and Rs 44,999 respectively. The Pixel 3a has a 5.6-inch FHD+ OLED screen, while the Pixel 3a XL has a larger 6-inch OLED panel. Both phones run on a mid-range Qualcomm Snapdragon 670 SoC. Compared to the high-end Pixel 3 devices, the duo lacks the Pixel Visual Core, Daydream VR support, waterproofing and wireless charging.
But what the Pixel is really known for is its incredible camera. With its software optimizations, Google brings exclusive camera features and industry-leading image quality to these phones: they can take amazing photos in low light with Night Sight.
There are more interesting innovations on the way. One is AR in Google Maps on Pixel, which overlays directions on a view of the real world to show you where to turn next.
*Augmented Reality in Search*
Whether you are trying to learn something or teach it, seeing and visualizing is often what makes learning stick. Now you can view a 3D model of what you are studying right in front of you.
Through AR and computer vision, the camera in your hand is turning into a powerful visual tool. You can view a 3D model built by Visible Body right in Google Search, and you can also place it in your own space, right in front of you.
You can also search for dresses, shoes and more, view them as 3D models from different angles, and place them in your environment.
*Google Lens upgrades*
Google Lens is getting an upgrade and learning a few new tricks. Point Google Lens at a restaurant's menu and it will highlight the most popular items right on the menu; tap any item to see what the dish looks like. Point it at your receipt and it will automatically calculate things like tips and totals. With Lens you can also snap a photo of a sign in a foreign language and have it translated into your chosen language, with the translated text superimposed right on top of the original sign.
You can now access Google Lens through its own app as well as through the Assistant, Google Photos and the camera app on many Android devices. In effect, Google is starting to index the physical world.
*Duplex on the Web*
Duplex is an AI-powered calling tool meant to help small businesses (like restaurants and hair salons) field more phone calls, answering common questions and scheduling reservations or appointments.
Duplex, which is live in 44 states, is now expanding beyond phone calls and voice to the web. Let's say, for example, you get a calendar reminder about an upcoming trip. Today, when you book a reservation, you have to navigate through a number of pages and steps, filling out information and making selections along the way; it's time-consuming for you and loses customers for the business. Now you can simply ask Google to rent a car for your next trip. Duplex will open the car rental company's website and fill out all of the necessary information automatically on your behalf, and before submitting the request you can double-check the details. This is helpful to both users and small businesses.
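To make the idea concrete, here is a toy Python sketch of that "fill what you already know, let the user confirm the rest" flow. The profile and form fields below are made up for illustration; Duplex's actual browser automation is not public.

```python
# Toy sketch: map details already known about the user onto a booking form,
# and leave anything unknown for the user to confirm before submitting.
saved_profile = {
    "full_name": "A. Example",
    "email": "user@example.com",
    "pickup_date": "2019-06-14",   # e.g. pulled from the calendar reminder
    "pickup_location": "SFO",
}
form_fields = ["full_name", "email", "pickup_date", "pickup_location", "car_class"]

draft = {field: saved_profile.get(field, "") for field in form_fields}
needs_confirmation = [field for field, value in draft.items() if not value]

print("Pre-filled form:", draft)
print("Left for the user to confirm:", needs_confirmation)
```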
*Incognito Mode and Data Privacy*
Being able to browse the web without leaving the usual traces is valuable for a lot of reasons, and now you can do the same thing in Google Maps. Incognito mode, announced at Google I/O, prevents your movements and searches in Maps from being linked to your Google account.
It's easy to turn on and off: simply tap your profile picture in the top right, and Incognito mode is among the handful of options presented. Do the same thing to turn it off.
Just as with private modes in web browsers and YouTube, the protection this offers is limited but can be crucial. The feature will be rolling out to Maps users soon. On the data privacy front, Google is also using TensorFlow to significantly reduce phishing attacks in Gmail.
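Google hasn't shared details of the Gmail model, but to give a flavour of what a TensorFlow text classifier for this kind of task can look like, here is a minimal, hypothetical sketch trained on two toy emails. It is an illustration only, not Gmail's actual system.

```python
import tensorflow as tf

# Toy stand-in data; Gmail's real training data and model are private.
emails = [
    "verify your account immediately or it will be suspended",
    "agenda for tomorrow's team meeting attached",
]
labels = [1, 0]  # 1 = phishing, 0 = legitimate

# A tiny bag-of-words style classifier: vectorize the text, embed it,
# average the embeddings, and predict a phishing probability.
vectorize = tf.keras.layers.TextVectorization(max_tokens=10_000, output_sequence_length=64)
vectorize.adapt(emails)

model = tf.keras.Sequential([
    vectorize,
    tf.keras.layers.Embedding(10_000, 16),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(tf.constant(emails), tf.constant(labels, dtype=tf.float32), epochs=5, verbose=0)

print(model.predict(tf.constant(["click here to reset your password now"])))
```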
*Android Q beta*
Today there are around 2.5 billion active Android devices. Android Q brings a focus on security, privacy, and innovation.
One innovation is foldables: Android Q maximizes the screen according to the app you are using and the size of your display, which is very helpful for multitasking. Another is 5G, with faster speeds and lower latency. A lot of mobile content is also nowhere near as accessible as it should be, and Live Caption in Q takes any audio on the device and instantly turns it into text. I found it amazing and personally helpful: I first learned about live captions just a few minutes before my DSC Lead interview started on Google Meet. I switched them on with a single click and could read the words I couldn't hear clearly, with no noticeable latency, and the feature doesn't even need an internet connection. It made me feel one thing: Google really is awesome.
You also get smart replies, and suggested actions and emoji are just one tap away, which saves time. Android Q also brings a system-wide dark theme, which helps save battery. And there are new location controls: for each app, you can allow location access only while you are using the app, allow it always, or deny it.
You also get parental controls: you can review your child's history and the apps they are using, and limit screen time overall or for a particular app.
*Podcasts and Google News*
Users trust Google Search with billions of questions every day. Last year Google introduced a feature in Google News called Full Coverage, and now it is bringing it directly to Google Search. It uses machine learning to surface the most relevant news and updates from a wide range of sources, and you will also see a timeline of events related to your search right in Google Search.
As we all know, podcasts are a great source of information: they give you detailed coverage while saving you precious time. By indexing podcasts, Google lets you search for relevant episodes by title as well as content; you can tap to listen right there or save an episode for later.
*Nest Hub and Nest Hub Max*
The smart home of today is fragmented and frustrating, yet your home is the place that matters most to you, so Google is bringing its best smart gadgets into it.
The Google Home Hub is being rebranded as the “Nest Hub,” with the price dropping from $149 to $129.
Also, it’s getting a bigger brother: take the Nest Hub, bump the display up from 7″ to 10″ and add a camera — that’s the Nest Hub Max. The Nest Hub Max will tie into the Nest app, allowing it to function like any other Nest cam. The hardware switch on the back disables the camera/microphone electrically. That means you have your data privacy in your hands. It’ll cost $229 and ship this summer.
The most lovable thing is that a new Face Match feature on the Nest Hub Max will recognize your face and customize its responses. Face Match's facial recognition is processed locally with on-device machine learning, so the camera data never leaves the device.
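The keynote doesn't describe how Face Match works internally, but on-device recognition of this kind is commonly pictured as comparing a freshly computed face embedding against embeddings enrolled locally. The sketch below is purely illustrative: the random vectors, threshold and similarity measure are assumptions, not the Nest Hub Max's actual pipeline.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Embeddings enrolled during setup and stored only on the device (random toy vectors here).
enrolled = {"alex": np.random.rand(128), "sam": np.random.rand(128)}

# Embedding computed on-device for the face currently in front of the camera.
query = np.random.rand(128)

best_match = max(enrolled, key=lambda name: cosine_similarity(enrolled[name], query))
if cosine_similarity(enrolled[best_match], query) > 0.8:  # illustrative threshold
    print(f"Recognized {best_match}: show their calendar, reminders and messages")
else:
    print("No confident match: fall back to a generic response")
```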
*Google’s “next-gen” Assistant*
To process speech, Google builds complex algorithms involving multiple machine learning models: one maps incoming sound bites to phonetic units, another assembles those units into words, and a third predicts the likelihood of the word sequence. Together they are so complex that they need about 100GB of storage and a network connection; it would be like putting an entire Google data center in your pocket, which sounds like an intractable computer science problem. But further advances in deep learning have let Google shrink its voice recognition models from hundreds of gigabytes down to half a gigabyte, small enough to fit right on a phone. This removes the network dependency and its latency and gives users a much better experience.
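Here is a purely conceptual sketch of that three-model pipeline, with tiny toy functions standing in for the real neural networks (which are, of course, not public).

```python
# Conceptual stand-ins for the three models described above.
def acoustic_model(audio_frames):
    """Maps chunks of raw audio to phonetic units."""
    return ["HH", "EH", "L", "OW"]

def pronunciation_model(phonetic_units):
    """Assembles phonetic units into candidate words."""
    return ["hello"]

def language_model(words):
    """Scores how likely this word sequence is in real language."""
    return 0.97

frames = [0.02, -0.11, 0.34]          # pretend audio samples
units = acoustic_model(frames)
words = pronunciation_model(units)
confidence = language_model(words)
print(words, confidence)
```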
By storing the models locally, Google eliminates the latency of back-and-forth pings to the cloud, making conversations with the Assistant almost instantaneous. Because it runs on the device, it works even in airplane mode. Google demonstrated the new speed by firing off voice requests rapid-fire, with very little delay between commands (like "Call me a Lyft" or "Turn on my flashlight") and their resulting actions.
To improve the mobile driving experience, Google also introduced a new Assistant driving mode. Just put your phone in the car and say "Let's drive" to get a minimalist, at-a-glance dashboard view focused on what you might need behind the wheel, like directions to your usual spots and music controls.
You can also work across multiple apps and compose different kinds of operations at the same time with no latency, and you can now search for specific photos in Google Photos (like "my pictures with trees or animals").
*Project Euphonia*
Google is researching how it can adapt its AI voice algorithms to better understand users with speech impairments (such as those with ALS or who have had a stroke), custom tailoring its models to an individual user’s speech to better help them communicate.
“Researchers from Google AI are exploring the idea of personalized communication models that can better understand different types of speech, as well as how AI can help even those who cannot speak to communicate.”
Voice recognition technology doesn’t work today for people with speech impairments because no one has collected large enough data sets. That’s where Euphonia comes in.
In partnership with nonprofits like the ALS Therapy Development Institute and ALS Residence Initiative, Google is recording voices of people who have the neurodegenerative condition ALS.
But in order to get there, Google will need more training data, so people with slurred or otherwise impaired speech are requested to submit voice samples here.
*Live Relay (Federated Learning)*
Google has always wanted to do more for users with less data over time, so it is applying the same cutting-edge AI tools to enhance user privacy.
Federated Learning, a new approach to machine learning developed by Google, is one example: it allows Google's AI products to work better for everyone without collecting raw data from devices. Gboard already uses Federated Learning for next-word and emoji prediction. Google has also launched Live Transcribe and Live Relay, which let people who are deaf or hard of hearing hold a phone conversation with anyone. See how…
That was great too…
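For the curious, here is a minimal sketch of the federated averaging idea behind Federated Learning, assuming a simple linear model and synthetic per-device data. Gboard's real training setup is far more involved; this only shows how a server can improve a shared model by averaging updates while raw data stays on each device.

```python
import numpy as np

def local_update(weights, x, y, lr=0.1):
    """One gradient step on a single device's own data; the raw data stays on the device."""
    grad = 2 * x.T @ (x @ weights - y) / len(y)
    return weights - lr * grad

rng = np.random.default_rng(0)
true_w = np.array([0.5, -1.0, 2.0])

# Synthetic "devices", each holding its own small private dataset.
devices = []
for _ in range(5):
    x = rng.normal(size=(20, 3))
    y = x @ true_w + rng.normal(scale=0.1, size=20)
    devices.append((x, y))

global_weights = np.zeros(3)
for _ in range(50):                            # communication rounds
    updates = [local_update(global_weights, x, y) for x, y in devices]
    global_weights = np.mean(updates, axis=0)  # the server averages only model updates

print(global_weights)  # approaches true_w without any device sharing raw data
```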
Google's Vision:
“Our vision to build a more helpful Google for everyone can’t be realized without our amazing global developer community. Together, we’re working to give everyone the tools to increase their knowledge, success, health, and happiness.”
For recent tweets and more about the schedule, event info, agenda, Codelabs and a map of Mountain View, CA, you can download the Google I/O app and connect on Twitter or Facebook.
It's a bitter truth that not everyone gets the chance to go to California to attend this giant event live and feel the California vibes, but Google organizes I/O Extended in cities around the world through local Google Developer Groups (GDGs). Search for your local Google Developer Group and connect with them. If there isn't one in your city, you can apply to start the next chapter here.
Find out when and where I/O Extended is happening in your city and attend; you will spend a Sunday well, learning about awesome tech, earning some swag, and enjoying food, networking and motivation.
I study and live in Indore, where GDG Indore is planning to organize I/O Extended in June, so stay updated on social sites like Facebook, Twitter, Meetup and LinkedIn.
Connect with me on LinkedIn or Twitter, and do follow me on GitHub.
If you liked this article you can give it a clap; if you loved it you can endorse me on LinkedIn. And don't forget to share this knowledge with your peers and others.
Thank you, friends, for spending a few minutes of your life with me.
Regards,
Harshit Jain
Sources: TechCrunch, Google, Wikipedia, YouTube, Twitter