The punctuated equilibrium in cloud?

My colleague swardley uses the punctuated equilibrium model in his Wardley Mapping framework to describe how we cycle through states of peace, war and wonder.

With regard to cloud, the big vendors have been fighting it out, and we've had the benefit of rapid innovation, with a series of higher-value disruptive technologies, ready for immediate integration, coming thick and fast. It got me wondering whether we're about to enter a new stage in the battle of the cloud giants, and whether there are any weak or strong signals to suggest this.

Google has caught my eye repeatedly of late. It is generally seen as strong in the consumer market but late to enterprise cloud, so what is its game plan?

In my previous blog I wrote about the optionality through portability provided by Kubernetes, the Docker orchestration tool that is mature relative to its competition. Out of this cloud war we have benefited not just from great tools; the architectures by which we use them have evolved too. swardley talks about the co-evolution of practice: traditional scale-up architectures for traditional infrastructure services, and scale-out architectures for modern cloud services. These modern architectures have driven the rise of CI/CD, because we can deploy at speed and replace small microservice applications with no downtime. Containers are the deployment method that makes all of this even easier, allowing you to work abstracted away from the infrastructure provider and let the orchestration tool do that work for you.
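To make that abstraction concrete, here is a minimal, illustrative Kubernetes Deployment manifest (the service name, image and replica count are placeholders I've invented for the sketch). Notice that nothing in it names a cloud provider; the orchestrator owns the mapping onto whatever infrastructure the cluster runs on.

```yaml
# Illustrative only: names, image and replica count are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service
spec:
  replicas: 3                        # the orchestrator keeps 3 copies running
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
    spec:
      containers:
      - name: example-service
        image: example/service:1.0   # placeholder container image
        ports:
        - containerPort: 8080
```

Rolling out `example/service:1.1` is then a one-line change, and the default rolling-update strategy replaces pods gradually, which is what makes the zero-downtime deployments described above routine.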

This is all incredibly valuable stuff, and yet Google open sourced Kubernetes. Couldn't it have made tons of money by licensing it? Probably, but would it have gained adoption quickly enough to become the major player in the container-management ecosystem? Probably not.

swardley calls this 'context-specific gameplay', and indeed open-sourcing code is a specific play we've observed being used again and again, the most obvious example being Linux and its quest (in various flavours) to unseat Windows as the de facto operating system.

By accelerating cloud portability through the maturity of container orchestration, Google has given us the choice to land our containers on our preferred platform. IaaS, generally being that platform, is mostly undifferentiated technically, especially when you look at the requirements for containers. So Google created a granular pricing model better suited to the elastic and ephemeral nature of modern microservice applications, dropped its prices, and became the cheapest place to run exactly the type of containerised applications Kubernetes was designed to orchestrate.
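A quick sketch of why billing granularity matters for ephemeral workloads. The rate, runtime and billing rules below are invented for illustration, not anyone's actual rate card; the point is purely the mechanics of rounding usage up to a billing increment.

```python
import math

# Illustrative only: the $/hour rate, job length and billing rules are
# made-up numbers to show billing-increment mechanics, not real prices.

def billed_cost(runtime_seconds, rate_per_hour, increment_seconds, minimum_seconds=0):
    """Cost of one run when usage is rounded up to a billing increment."""
    billable = max(runtime_seconds, minimum_seconds)
    increments = math.ceil(billable / increment_seconds)
    return increments * increment_seconds / 3600 * rate_per_hour

RATE = 0.10  # hypothetical $/hour for a small instance

# A 90-second containerised batch job, run 1,000 times a day:
hourly     = 1000 * billed_cost(90, RATE, increment_seconds=3600)
per_minute = 1000 * billed_cost(90, RATE, increment_seconds=60, minimum_seconds=600)
per_second = 1000 * billed_cost(90, RATE, increment_seconds=1)

print(f"hour-rounded:                ${hourly:.2f}/day")      # $100.00
print(f"minute-rounded (10-min min): ${per_minute:.2f}/day")  # $16.67
print(f"per-second:                  ${per_second:.2f}/day")  # $2.50
```

The shorter and more elastic the workload, the more the billing increment dominates the bill, which is why a granular model is such a good fit for container-shaped usage.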

So Google now has a seat at the cloud table, and Kubernetes was the strong play by which it remains relevant. But it hasn't stopped there. Using the same technique, it open sourced TensorFlow. Other deep-learning frameworks are available, and yet TensorFlow is generally seen as the most mature. Google has used the open-source ecosystem to quickly commoditise its use and make it mainstream. Sounds familiar. But to what end?

Just this week, Google announced the availability of the TPU v2. This specialised chip offers significant performance improvements over the previous TPU version, but more importantly, over GPUs and CPUs. Oh, and they are only available in the Google cloud.

In simple terms, if you want to do deep learning, which requires large labelled data sets and serious processing capability to train your models, then Google has just made itself the place to do it. And if you've been in this domain for a while, there's a high chance you've been using TensorFlow as your framework, so you can immediately start to benefit from the TPU chip architecture.
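To see why training is so compute-hungry, here is a toy sketch of what "training" actually means, in pure Python with no framework: repeated passes over labelled data, computing gradients and nudging weights. Real deep-learning models do exactly this shape of work with millions of parameters, which is the dense arithmetic a TPU is built to accelerate. The data and learning rate here are invented for the toy.

```python
# Toy illustration: fit y = w*x + b to labelled data by gradient descent.
# The data encodes the known relationship y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(10)]

w, b = 0.0, 0.0   # model parameters, starting from scratch
lr = 0.01         # learning rate (chosen for the toy)

for epoch in range(2000):            # many passes over the data...
    grad_w = grad_b = 0.0
    for x, y in data:                # ...each pass touching every example
        err = (w * x + b) - y        # prediction error
        grad_w += 2 * err * x        # d(err^2)/dw
        grad_b += 2 * err            # d(err^2)/db
    w -= lr * grad_w / len(data)     # gradient-descent update
    b -= lr * grad_b / len(data)

print(round(w, 2), round(b, 2))      # converges towards w=2, b=1
```

Scale the inner loop up to millions of examples and millions of parameters and the bottleneck is raw multiply-accumulate throughput, precisely what the TPU sells.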

Earlier this week I retweeted a blog from Adrian Colyer observing that, for the most part, public cloud IaaS usage is still very much in the domain of lift and shift, as opposed to transforming apps into modern architectures. Now, you could easily dispute the data: it's a small sample from a single cloud provider, in a single region. But it is at least useful for spotting a trend, i.e. there are still relatively few modern-architected applications out in the wild.

With Google seen, by revenue, as the distant third-place runner in cloud, but with the majority of customers still using public IaaS for lift and shift, it has a good head start on being the cloud of choice for your containerised applications. We're yet to see any volume of these hit the cloud, but if and when we do, it could mean a significant revenue swing in Google's favour. And Google certainly hasn't put all its eggs in one basket: with its maturity in hardware design and software frameworks in the deep-learning space, we could see those workloads start to pile up on Google Cloud pretty quickly.

I'll be watching this space to see what sorts of gameplay the other vendors start using to ensure Google hasn't already monopolised the future cloud space. As ever, your comments or insights are much appreciated.