How Google is using AI to unite software with its new hardware

Clark Boyd
Towards Data Science
4 min read · Oct 6, 2017


Google has made a raft of eagerly anticipated hardware announcements, including a new Pixel smartphone, a smaller Google Home, a bigger Google Home speaker, a laptop and some earbuds.

It should be clear to even the most casual of observers that Google is getting serious about hardware. This marks a change in direction for a company that had initially relied on its open source Android software to influence the smartphone market.

The software approach led to too much fragmentation; it’s hard to tackle Apple head on without a self-contained ecosystem of products and services.

Hardware alone would be insufficient. Moore’s Law is slowing; CPUs are no longer improving at the rate they once were. New areas of growth need to be identified for a tech company to dominate the modern landscape.

Google is betting on its AI-powered technology to achieve exactly this: an area of competitive advantage that can unify its hardware and software.

The message was clear throughout the event: this is an area where Google knows it has stolen a march on the competition.

“In an AI-first world, computers should adapt to help people live their lives.” — Sundar Pichai, Google CEO

A recent study by Chinese scientists found that Google’s AI had an IQ more than twice as high as its Apple equivalent. It still lags behind the average person, but the gap is decreasing every year. The gap between Google and the rest, on the other hand, increases every year.

Google can now build AI products directly into its hardware, which offers consumers benefits in terms of both speed and privacy.

Tensor Processing Units

Most of the work will be done in the cloud, of course, which gives Google the opportunity to use its vast data sets to train and improve its models. The recent launch of Tensor Processing Units (TPUs), designed specifically to deliver machine learning at massive scale, has set Google up for the future in this regard.
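To make the hardware angle concrete: the arithmetic that dominates machine learning workloads, and that accelerators like TPUs are built to speed up, is large matrix multiplication. The sketch below shows the matmul-heavy core of a single neural-network layer in plain NumPy; the layer sizes and random weights are invented for illustration, not drawn from any Google system.

```python
import numpy as np

rng = np.random.default_rng(0)

# A single dense layer: the matrix multiply (x @ weights) is exactly the
# kind of operation specialised ML hardware is designed to accelerate.
def dense_forward(x, weights, bias):
    return np.maximum(x @ weights + bias, 0.0)  # ReLU activation

batch = rng.standard_normal((32, 256))     # 32 inputs, 256 features each
weights = rng.standard_normal((256, 128))  # layer parameters
bias = np.zeros(128)

activations = dense_forward(batch, weights, bias)
print(activations.shape)  # (32, 128)
```

A real network stacks many such layers, so the matrix multiplies multiply too, which is why purpose-built silicon pays off at scale.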

The Pixel phone is the flagship product and it will bring the AI-powered Assistant to the fore.

Google Assistant will communicate directly with Google’s servers, and it is an ever-present fixture across what is now a wide array of Google devices.

The phone’s camera is developed with AR and VR very much in mind, although it would be fair to say Google is more focused on the former in the short term.

“People on Pixel take twice as many pictures as iPhone users, and they’re storing around 23GB of images a year.” — Brian Rakowski, Google

Google Clips is an interesting development in this regard. Clips is an example of Google building AI directly into its hardware; in this case, hardware that will automatically identify and capture significant moments. However good the Pixel camera is (and it is said to be very, very good indeed), it is still reliant on a flawed human to make adequate use of it.

Clips removes that factor altogether and it syncs to both the Pixel phone and to the cloud-based Google Photos storage platform. Photos, of course, makes plentiful use of machine learning and AI to interpret pictures, removing imperfections and tying together thematically linked images.
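As an illustration of what “tying together thematically linked images” can look like under the hood, the sketch below clusters image embeddings with a minimal k-means loop. Everything here is an assumption for the example: the embeddings are synthetic vectors standing in for image features, and k-means is just one common technique, not a description of Google Photos’ actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical image embeddings: two loose "themes" in feature space.
beach_photos = rng.normal(loc=0.0, scale=0.1, size=(10, 8))
party_photos = rng.normal(loc=5.0, scale=0.1, size=(10, 8))
embeddings = np.vstack([beach_photos, party_photos])

def kmeans(points, k, iters=20):
    # Initialise centroids from the data, then alternate assign/update.
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        centroids = np.array(
            [points[labels == i].mean(axis=0) for i in range(k)]
        )
    return labels

labels = kmeans(embeddings, k=2)
print(labels)  # each photo assigned to one of two themes
```

In practice the embeddings would come from a trained vision model, which is where the machine learning does the heavy lifting; the grouping step itself can be this simple.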

Perhaps the most intriguing announcement was Pixel Buds, Google’s rival to Apple’s AirPods. These earbuds are not entirely wireless (they are joined to each other by a cord), but they do come with a range of AirPod-beating features.

Users can turn music up, down, or off by touch, and can also call Google Assistant by placing a finger on one of the Buds. This will be welcomed by people who find it a little stilted to use the “Ok, Google” prompt in public.

Again, Pixel Buds allow Google to use AI to bring together its hardware and software, even offering an instant translation service in 40 languages. This has been tested hands-on in a range of languages (including Swedish, which featured in the on-stage demo), and the early reports are very positive.

All of the above makes explicit Google’s business strategy.

Owning the hardware allows the search giant to integrate its artificial intelligence and machine learning systems directly into the products.

Consumers will be willing to part with their personal data because it provides such a direct benefit to them, through much-improved and personalized services.

All of that data is fuel for Google’s machine learning systems, which will only improve in accuracy over time as they learn from feedback. This will create new opportunities to monetize search, which will remain Google’s cash cow for some time to come.
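That feedback loop can be illustrated with a toy online learner: each user signal nudges the model’s weights a little, so accuracy improves as data accumulates. This is a hedged sketch with invented features and simulated clicks, not a description of any real Google system.

```python
import numpy as np

rng = np.random.default_rng(2)

weights = np.zeros(3)  # model starts knowing nothing

def predict(features):
    # Probability the user will respond positively (logistic model).
    return 1.0 / (1.0 + np.exp(-features @ weights))

def update(features, clicked, lr=0.5):
    # Gradient step on log loss: each piece of feedback refines the model.
    global weights
    weights += lr * (clicked - predict(features)) * features

# Simulated stream of (features, did-the-user-click) feedback.
true_w = np.array([2.0, -1.0, 0.5])
errors = []
for step in range(2000):
    x = rng.standard_normal(3)
    clicked = 1.0 if x @ true_w > 0 else 0.0
    errors.append(abs(clicked - round(predict(x))))
    update(x, clicked)

# Mistakes become rarer as feedback accumulates.
print(sum(errors[:200]), sum(errors[-200:]))
```

The point of the sketch is the compounding dynamic the article describes: more usage means more feedback, which means a better model, which attracts more usage.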

Of course, the full impact of this ecosystem is only really felt when people are immersed in it totally. A Pixel smartphone will be less capable if paired with Beats headphones and an Amazon Echo, for example.

Whether this bet will pay off remains to be seen, but Google is confident that its superior AI technologies will convince consumers to make the switch.
