Google Cloud Next 2017 Amsterdam

Similar to the summits organized by ‘that other public cloud provider’, Google hosts an event intended to showcase what’s new in their cloud platform. On Wednesday, June 21st 2017, this took place in Amsterdam.

The event was located in De Kromhouthal in Amsterdam Noord. Getting there by public transport requires taking the ferry, which adds to the vibe of ‘a day out’. The venue is an old ship construction facility, and its industrial setting was a very good fit for the event. Beyond that, all aspects of the organization were very professional and top notch.

Adding to the value was the fact that our colleagues Johnny Mijnhout and Istvan Fonay gave a presentation on the use of Firebase in the NUsport app (Android/iOS).


Although the range of topics was broad, there was a clear focus on Kubernetes, G Suite, Big Data and Machine Learning. Not surprisingly, these are topics that are Google-specific or that Google excels at.


Stand-out topic for me was Kubernetes. It really takes Docker to the next level and I’m quite confident that it will find its way into many organisations. Mentioned multiple times as a very promising addition was Istio, which enhances Kubernetes by adding policy enforcement, reporting and a multitude of deployment strategies, including A/B testing, canary releases and rolling updates. Features you want, but that typically require quite some plumbing.
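The idea behind a canary release is simply weighted routing: most traffic goes to the stable version, a small slice to the new one. A toy sketch of that idea (this is not Istio itself; the version names and weights are illustrative):

```python
import random

def pick_version(weights, rng=random.random):
    """Pick a deployment version according to canary weights.

    weights: dict mapping version name -> fraction of traffic (should sum to 1.0).
    """
    r = rng()
    cumulative = 0.0
    for version, weight in weights.items():
        cumulative += weight
        if r < cumulative:
            return version
    return version  # fall through on floating-point rounding

# Route 90% of requests to the stable release, 10% to the canary.
canary_split = {"v1-stable": 0.9, "v2-canary": 0.1}
```

With Istio you declare such a split in configuration and the service mesh does the routing for you; the point of the sketch is only to show how little logic the concept itself needs, and how much plumbing it saves when the platform provides it.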

Machine learning

Throughout several presentations the capabilities of Google Cloud Machine Learning were showcased. A good example was the automated removal of Personally Identifiable Information (PII) from chat data. Other examples were Cloud Video Intelligence and the Cloud Vision API.
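To make the PII example concrete, here is a deliberately naive redaction sketch. Google’s service uses machine learning rather than patterns like these, and the patterns below are for demonstration only:

```python
import re

# Naive patterns for demonstration only; a real service handles far more
# PII types (names, addresses, IDs) and far messier input.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d -]{7,}\d"),
}

def redact(text):
    """Replace matched PII with a [TYPE] placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Mail me at jan@example.com or call +31 6 12345678"))
```

The gap between this sketch and chat data in the wild (misspellings, partial numbers, names in free text) is exactly where the machine learning approach earns its keep.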


A relational database that is ‘web scale’? One that defies the CAP theorem? That is Spanner. It sounds like magic, but it was explained rather well: the CAP in ‘CAP theorem’ stands for Consistency, Availability and Partition tolerance. The theorem states that you can’t have all three; you have to sacrifice one. So, if you ‘sacrifice’ partition tolerance, but have a network that is so fast and redundantly set up that in practice partitioning will never happen… then you can build Spanner. (Obviously, it’s not available for download.)


As a customer case of Firebase, the NUsport app was presented. It presents users with a liveblog for several sports categories, containing text, images, videos and score updates. This technology was later implemented in the apps (Android/iOS) and website as well, which have an active user base of well over a million.

The main driver for our NUsport app to use Firebase was the requirement for real-time updates combined with a short time to market. We can truly say Firebase allowed us to meet those requirements. Previously we had built similar functionality using Node.js and websockets but, especially considering sports data is ‘the same for everyone’, Firebase helps keep things simple and lets us focus on features first.

The NUsport app uses three parts of the Firebase platform: Realtime Database, Cloud Messaging and Remote Config. Using it at scale taught us a few lessons:

  • Bandwidth. Even though a timeline is small (<1 MB), and individual additions are a lot smaller than that, having hundreds of thousands of users adds up, so keep an eye on it. It helps to structure your data with limiting overhead in mind.
  • Sharding. For NUsport a single database offered sufficient capacity, but for the larger apps we needed to shard across multiple Firebase databases, because a single database could not handle the number of concurrent connections needed.
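The sharding approach boils down to mapping each client deterministically to one of several databases. A minimal sketch of that idea, assuming a stable client id; the shard URLs and shard count are made up for illustration and are not NUsport’s actual setup:

```python
import hashlib

# Illustrative shard URLs; real projects would use their own database instances.
SHARDS = [
    "https://liveblog-shard-0.firebaseio.com",
    "https://liveblog-shard-1.firebaseio.com",
    "https://liveblog-shard-2.firebaseio.com",
]

def database_for(client_id):
    """Map a client to a shard; the same id always lands on the same database."""
    digest = hashlib.sha256(client_id.encode("utf-8")).digest()
    index = int.from_bytes(digest[:4], "big") % len(SHARDS)
    return SHARDS[index]
```

Hashing rather than round-robin matters here: because sports data is ‘the same for everyone’, each shard holds a full copy of the timeline, and a client only needs to keep reconnecting to the same one.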


A very interesting event. Of course there is the ‘we want you to use our product’ aspect, which is why it’s free, but that’s fine. Beyond the commercial part there’s simply a lot of information to digest, including at a technical level. And discussing topics off-track with Googlers or people from the partner companies is inspiring and yields insights as well.

With the seemingly increasing effort Google puts into its cloud platform, I’m curious how the customer bases of Google and of ‘that other public cloud provider’ will compare next year.