Google Cloud Next London 2017

Google’s cloud platform division held a two-day conference in London on the 3rd and 4th of May, 2017. There were approximately fifty sessions across five breakout areas over the two days, ranging in topic from Big Data & Machine Learning to Mobility & Devices. As well as a full schedule of lectures on the various features that Google Cloud Platform (GCP) offers, attendees had the opportunity to speak with Googlers and some of Google’s partners who have made use of GCP.

Both days began with a keynote introduction from some of Google Cloud’s stalwart executives. Some had better stage presence than others, but all spoke passionately about their company and its successes. After the keynote ended, the difficult task of choosing which sessions to attend began, as they all merited an engaged, robust audience. Building your own Google Assistant chatbot with API.AI and Cloud Functions won the first round, and we soon became privy to the latest innovations for the Google Assistant and how we could leverage this technology, including how to design a Voice User Interface.

Among the other sessions from day one, I attended Security First for a Mobile First Strategy, Writing Infinitely Scalable and High Performance Apps with App Engine, and Winning an Oscar with Google Cloud. This last session, featuring our friends at MPC, proved incredibly interesting, particularly for someone in the VFX industry, as GCP presenter Jeff Kembler discussed machine learning alongside the use of virtual machines to handle significant VFX workloads.

Day two proved even more eventful than the first. The keynote introduced a discussion of GCP’s real-time rendering of complex VFX, using as an example a short film depicting two robots fighting within a house under construction. As the image was altered, we could see the newest frame being rendered instantly, something the Googlers explained would previously have taken several hours.

The fascinating discussions continued in our first session of day two: Deep Learning Jumpstart: Recurrent Neural Networks with TensorFlow. TensorFlow is Google’s open-source machine learning library, used by Google and many others to develop recurrent neural networks. As described by our illustrious Google presenter, a recurrent neural network differs from an ordinary neural network in that it concatenates the current state of the network to the input of the next iteration, thus creating a cycle and allowing for dynamic behaviour.
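The recurrence described above can be sketched in a few lines. The following is a minimal illustration in plain NumPy rather than TensorFlow itself, with made-up weights and sizes chosen purely for demonstration: at each step, the previous hidden state is concatenated with the new input, so the network’s output cycles back into its next computation.

```python
import numpy as np

# Illustrative sketch of a recurrent step: the previous hidden state is
# concatenated with the current input, so the state feeds back into the
# next iteration. Sizes and weights here are arbitrary demo values.
rng = np.random.default_rng(0)

input_size, hidden_size, seq_len = 3, 4, 5
W = rng.normal(size=(hidden_size, input_size + hidden_size)) * 0.1
b = np.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    """One recurrent step: concatenate input and previous state."""
    combined = np.concatenate([x_t, h_prev])
    return np.tanh(W @ combined + b)

h = np.zeros(hidden_size)                 # initial state
inputs = rng.normal(size=(seq_len, input_size))
for x_t in inputs:
    h = rnn_step(x_t, h)                  # state cycles back in

print(h.shape)  # the hidden state carries information across the sequence
```

TensorFlow’s own recurrent layers implement this same feedback loop; the point of the sketch is only the cycle from one iteration’s state to the next iteration’s input.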

At this juncture, we took a moment to speak with some of the businesses present at the conference, all of whom partner with Google and have implemented GCP in one way or another. We spoke with representatives from Intel, Ancoris, Pythian and Lush to get a better understanding of what they do and how they made use of GCP.

The sessions continued with Machine Learning at Scale: TensorFlow in the Cloud, Machine Learning APIs by Example and Transform Retail with Machine Learning: Find & Recommend Products. Anyone not fussed about machine learning before this string of talks would surely change their mind, as these sessions led one to believe that every facet of a software system could, in some way, make use of machine learning to build a more efficient application.

In terms of scale, Google does not disappoint: over forty-five hundred attendees graced the London Next conference. Googlers presented interesting topics and the accompanying GCP technology. Given that the conference was essentially a two-day marketing event geared towards enticing mid- to large-sized companies into purchasing licenses for GCP’s technologies, I found the discussions surprisingly accessible, even for someone who does not use GCP.