Google will beat Apple at its own game with superior AI
Sergey Ross

Hardware, software, and AI combined: Google will predict our actions based on the massive amounts of data it collects and runs algorithms on. The software will learn and adapt continuously. During the October 4th keynote, Sundar Pichai gave an example of Google Calendar switching from a weekly to a daily view based on the day and time, and another example of how a doctor's appointment is treated completely differently from your daily commute reminder.
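
For concreteness, the behavior described in the keynote boils down to something like the sketch below. This is purely illustrative Python; the function name and the rules are my assumptions, not Google's implementation, which presumably replaces such hand-written rules with a learned model.

```python
# Hypothetical sketch of the adaptive-view idea from the keynote.
# The heuristics here are invented for illustration only.
from datetime import datetime

def pick_calendar_view(now: datetime) -> str:
    """Guess which view the user 'wants' from the day and time."""
    if now.weekday() >= 5:    # weekend: plan the week ahead
        return "weekly"
    if 8 <= now.hour < 18:    # working hours: focus on today
        return "daily"
    return "weekly"           # evenings: look at the coming week

print(pick_calendar_view(datetime.now()))
```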

If this is the selling point, it is not working. It does not look appealing at all. Quite the opposite: it sounds like the promise of a product that is annoying and difficult to use. Switching between the daily and weekly views is not a problem that requires solving. I have absolutely no trouble doing it myself and would actually get annoyed if something started doing it for me.

After all, people do not think linearly, as this pattern of 'learning and adapting continuously' suggests. The practical result would likely be that the machine learning picks up some subtle pattern you yourself are not conscious of and starts throwing it back at you. Think of a subtle, subconscious nervous tic that somebody points out to you all the time. And that is the better scenario. What if your usage pattern is mostly random, depending on a multitude of factors, ranging from mood changes to specific situations (e.g. your device being picked up by your partner or a kid)? How would machine learning account for those?
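
To illustrate that objection, here is a toy Python sketch (the data and the "model" are entirely made up): if the usage pattern is mostly random, even a learner that memorizes the most frequent choice predicts no better than a coin flip.

```python
# Toy illustration: when view choice is random noise, a naive adapter
# can only memorize noise. We fit a trivial majority-class "model" on
# simulated usage and evaluate it on held-out data.
import random

random.seed(1)
history = [random.choice(["daily", "weekly"]) for _ in range(1000)]
train, test = history[:800], history[800:]

# "Learn" the most frequent choice, the pattern a naive adapter latches onto.
prediction = max(set(train), key=train.count)
accuracy = sum(choice == prediction for choice in test) / len(test)
print(f"always predict {prediction!r}: {accuracy:.0%} accuracy")  # ~50%, chance level
```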

But that is just an isolated example. The larger picture is the same as it has been for at least half a decade now: Google does not get individual user experience. Granted, Google Search is convenient. But it is actually not that demanding in terms of user interface; compared to other products, it does not require that much in terms of design.

But when you think of all the other products, especially original Google developments, there is a rich history of miscalculation, misunderstanding, and bad design solutions. Remember iGoogle, Google+, the original Android prototypes (before the first iPhone was announced), and Google Wave. There are also examples where Google, for no clear reason, botched perfectly functional, simple, and user-friendly products: Google Talk inside Gmail mutating into Hangouts, and the evolution of Google Maps (widely used in the absence of viable alternatives, but its design and user-friendliness probably peaked around 2010–2012).

There is no doubt that Google is spearheading machine learning and could leverage it to improve many of its other products. But saying that it will use ML to improve the individual end-user experience (basically, design-related issues) just proves that Google does not understand how people interact with its products and what they like.

ML is fit for many purposes, but it is highly unlikely that improving user interfaces and experiences is one of them. Realistically, at this stage it is just mumbo-jumbo talk along the lines of: 'we do not really know what to do, but we will apply ML, and maybe some magic pattern will emerge, and then we will know'. It does not work like that: there has to be a clear road map, with ML as a tool for moving along it.

Ideally, such a road map may indeed be developed. But even then, ML-assisted solutions would be overkill. It would be like using a driverless car to draw pictures on the tarmac with burning tires. Obviously, you could do that, but it is hardly a good or efficient use of a driverless car.
