Google Cloud Next — Amsterdam: a world-class gathering of technology

Krishna Kanth
Beginner @ Data Science
6 min read · Jun 25, 2017

The grand Cloud Computing event organized by Google, one of the world’s most innovative companies, in Amsterdam on 21 June this year made for a great day of technological and computational wonders.

The day took off with an engaging keynote address from Google’s top management, who gave the audience glimpses of the company’s latest advances in the fields of Cloud Computing, Machine Learning and Artificial Intelligence, and shared some of the ideas behind Google’s most innovative work in data processing and analytics. At the heart of the event were the company’s enterprise services, above all G Suite, which brings together Google’s most widely used cloud applications to deliver a whole range of outstanding computational services tailored to industry.

I’m personally interested in Data Science and Machine Learning, so I stuck to the Breakout 2 track of the schedule, which hosted various Big Data and Machine Learning tech talks from Googlers. Apart from these, I also attended a talk by Julien Blanchez, Google’s Global Security & Compliance Strategist, about securing the company’s unimaginable volumes of data and the policies and processes involved in doing so. He explained at length how important it is for Google to reconcile the data laws of different parts of the world, such as the USA and the EU, and strike a balance among them to provide a unified data protection policy for all.

The whole event turned out to be an insightful look into the power of data and how its analytical study can solve real-world problems with a computer in an automated fashion. Making a machine capable of solving such problems on its own is what we call Machine Learning. Google is one of the world’s top players in the game of Artificial Intelligence, and this event gave us various reasons why.

Some of the company’s milestone achievements in the field of Machine Learning were demonstrated in the form of various APIs.

Data Loss Prevention API

The DLP API lets you understand and manage sensitive data. It provides fast, scalable classification and optional redaction for sensitive data elements like credit card numbers, names, social security numbers, passport numbers, US and selected international driver’s license numbers, phone numbers, and more. -Google
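To give an idea of what this looks like in practice, here is a minimal sketch of scanning a piece of text for sensitive elements with the google-cloud-dlp Python client. The project ID, the sample text and the chosen info types are placeholders of mine, not from the talk, and the client library has evolved since the event.

```python
# pip install google-cloud-dlp
from google.cloud import dlp_v2

PROJECT_ID = "my-project"  # placeholder, not a real project

dlp = dlp_v2.DlpServiceClient()

# Scan a snippet of text for a few built-in sensitive info types.
response = dlp.inspect_content(
    request={
        "parent": f"projects/{PROJECT_ID}",
        "inspect_config": {
            "info_types": [
                {"name": "PERSON_NAME"},
                {"name": "CREDIT_CARD_NUMBER"},
                {"name": "PHONE_NUMBER"},
            ],
            "include_quote": True,  # return the matched text itself
        },
        "item": {"value": "Hi, I am Jane Doe, call me on 070-123-4567."},
    }
)

# Each finding names the detected info type and the model's confidence.
for finding in response.result.findings:
    print(finding.info_type.name, finding.likelihood, finding.quote)
```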

Vision API

Google Cloud Vision API enables developers to understand the content of an image by encapsulating powerful machine learning models in an easy to use REST API. -Google
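As a rough illustration of the kind of call this exposes, here is a minimal sketch of label detection with the google-cloud-vision Python client; the image path is a placeholder.

```python
# pip install google-cloud-vision
from google.cloud import vision

client = vision.ImageAnnotatorClient()

# Load a local photo (placeholder path) and wrap it for the API.
with open("photo.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Ask the pre-trained model what it sees in the image.
response = client.label_detection(image=image)
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")
```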

Video Intelligence API

Google Cloud Video Intelligence API makes videos searchable, and discoverable, by extracting metadata with an easy to use REST API. You can now search every moment of every video file in your catalog and find every occurrence as well as its significance. -Google
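A minimal sketch of what extracting that metadata can look like, assuming the current google-cloud-videointelligence Python client and a video sitting in a placeholder Cloud Storage bucket:

```python
# pip install google-cloud-videointelligence
from google.cloud import videointelligence

client = videointelligence.VideoIntelligenceServiceClient()

# Annotation runs as a long-running operation; the URI is a placeholder.
operation = client.annotate_video(
    request={
        "input_uri": "gs://my-bucket/my-video.mp4",
        "features": [videointelligence.Feature.LABEL_DETECTION],
    }
)
result = operation.result(timeout=300)

# Print each label together with the video segment it was found in.
for annotation in result.annotation_results[0].segment_label_annotations:
    for segment in annotation.segments:
        start = segment.segment.start_time_offset.total_seconds()
        end = segment.segment.end_time_offset.total_seconds()
        print(f"{annotation.entity.description}: {start:.1f}s to {end:.1f}s")
```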

There was a demonstration of the Data Loss Prevention (DLP) API in which the presenter showed us the chat history of a Google user with a customer service agent who had asked her for personal details such as her full name and date of birth. Rather than providing only the requested information, she uploaded a copy of her passport into the chat, thereby handing over sensitive personal information. Customer service chats are usually stored on servers for training purposes, but this particular chat could not be stored: the user had provided an excess of personal and sensitive information, and the chat would have to be deleted to abide by privacy laws. That would mean data loss for the company.

So Google came up with an API that takes in data and strips it of all kinds of sensitive information before it is stored on the servers. To demonstrate this, the presenter scanned a passport and sent the image through a page backed by the DLP API. In the resulting picture, all of the holder’s private information was crossed out completely in red: the photo, social security number, address and other private data. The computer had recognized this information automatically, figured out by itself which parts of the uploaded image were illegal to store, and removed them from display, making the data safe to store without breaching any privacy laws. It was fantastic.
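The passport demo maps onto the API’s image redaction call. Here is a minimal sketch with the current google-cloud-dlp Python client; the file paths, project ID and info types are my placeholders, not the presenter’s actual setup.

```python
# pip install google-cloud-dlp
from google.cloud import dlp_v2

PROJECT_ID = "my-project"  # placeholder

dlp = dlp_v2.DlpServiceClient()

# Read the scanned document (placeholder path).
with open("passport_scan.png", "rb") as f:
    image_bytes = f.read()

info_types = [{"name": "PERSON_NAME"}, {"name": "DATE_OF_BIRTH"}]

response = dlp.redact_image(
    request={
        "parent": f"projects/{PROJECT_ID}",
        "inspect_config": {"info_types": info_types},
        # One redaction config per info type; paint the matches red,
        # like the crossed-out fields in the demo.
        "image_redaction_configs": [
            {"info_type": it, "redaction_color": {"red": 1.0}}
            for it in info_types
        ],
        # "type_" (with the underscore) is the field name in this client.
        "byte_item": {
            "type_": dlp_v2.ByteContentItem.BytesType.IMAGE_PNG,
            "data": image_bytes,
        },
    }
)

# The response carries the same image with sensitive regions painted over.
with open("passport_redacted.png", "wb") as f:
    f.write(response.redacted_image)
```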

I came across another demonstration of Machine Learning: a large touch-screen TV prompts the name of an object, and the user has to draw it on the screen with their hands within 20 seconds. While the user is drawing, the algorithms and APIs behind the application try to guess the name of the object. This is made possible by algorithms that have trained the machine to recognize objects by studying millions of drawings made by users all around the world. It is one of the best depictions of Machine Learning and Artificial Intelligence. When I stood in front of the screen, it prompted me to draw the Eiffel Tower, and as I was drawing, it guessed correctly by comparing my sketch with similar drawings of the Eiffel Tower from other users in its historical data. Take a look at the picture below to get a glimpse of this.

I’m deeply interested in working towards environmental sustainability, and I wanted to know how these new innovations could be put to use in tackling the biggest threat of our generation: Climate Change. So, after one of the talks about these APIs, I walked up to the presenter and asked him how these new innovations in Machine Learning and AI could be harnessed to fight issues such as Climate Change and Global Warming. He told me that these APIs are already in use in that direction and gave me an example. The Video Intelligence API is used with surveillance drones that record live video of solar power plants to monitor their operation. The recordings are analyzed by the API to identify solar panels that are faulty or damaged and therefore not contributing to the conversion of solar radiation into clean electricity. This information quickly flags the defects to the responsible technicians, helping them replace the panels as early as possible and keep clean energy generation going.
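I can only guess at the actual pipeline he described, but the building block would be the API’s object tracking feature. A minimal sketch with the google-cloud-videointelligence Python client, with a placeholder bucket and video name; the fault classification itself is not something the API does, so it is left to downstream logic here.

```python
# pip install google-cloud-videointelligence
from google.cloud import videointelligence

client = videointelligence.VideoIntelligenceServiceClient()

# Track objects across the frames of a drone recording (placeholder URI).
operation = client.annotate_video(
    request={
        "input_uri": "gs://plant-drones/flight_042.mp4",
        "features": [videointelligence.Feature.OBJECT_TRACKING],
    }
)
result = operation.result(timeout=600)

# Surface every tracked object with its confidence and timestamp, so a
# separate fault classifier (not part of this API) can decide what to flag.
for obj in result.annotation_results[0].object_annotations:
    t0 = obj.segment.start_time_offset.total_seconds()
    print(f"{obj.entity.description} (conf {obj.confidence:.2f}) at {t0:.1f}s")
```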

It felt awe-inspiring to get these glimpses into Artificial Intelligence.

With all of these experiences, my interest in Data Science and Analytics took on a new shape. By the end of the day, I felt revitalized and eager to try out new and creative things in those fields in the coming days. Stepping out of the event premises in the evening, I felt equipped with a better understanding of the radical innovation happening all around. It somehow soothed me, and made me anticipate a great deal of things ahead.
