A look back at the State of the Cloud, and a few new predictions for 2015
I’m pleased to present a contributed post from guest author and Structure 2015 speaker Adrian Cockcroft of Battery Ventures. Before he presents an update in just over a week on the first day of the show, Nov. 18, he took a look back at his predictions from last year’s State of the Cloud presentation at Structure 2014. Everything that follows was written by him.
In June 2014 I presented “Cloud Trends,” a talk on the state of the cloud computing industry, at the Gigaom Structure conference. At the time I tried to show the current state of the cloud computing ecosystem as well as provide my predictions for how the industry would evolve over the next few years.
Now, a year and some months later, I’m revisiting those points to see if they stand up to scrutiny. And as cloud computing continues to accelerate the pace of innovation, I also have a few new predictions to share.
Public Cloud Hits the Mainstream
While innovative technology companies have been using cloud computing to support their IT infrastructure needs for years, I predicted that 2014 would be the year of mass public cloud adoption among enterprises; these enterprises, many of which are in more traditional industries such as financial services, would get over their caution and start serious transitions to the public cloud. In October 2014, Gartner’s cloud-computing analyst Lydia Leong tweeted a strong confirmation of this trend.
I also predicted that public-cloud services Amazon AWS and Microsoft Azure would be the safe bets for enterprises making the transition, and that Microsoft would continue to extend support for open-source projects such as Linux alongside the company’s own proprietary software stack. Leong’s 2014 Gartner Magic Quadrant for cloud computing confirmed this, along with the astonishing revelation that AWS had increased its dominance of the public-cloud market dramatically. AWS went from being about five times larger in terms of deployed capacity than all its competitors combined in 2014 (~83% share) to ten times larger in 2015 (~91% share). While Microsoft is in a distant second place, it is still well ahead of Google and the rest.
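The share figures and capacity multiples above are consistent with each other, which is easy to check: a vendor holding share s of deployed capacity is s / (1 − s) times the size of all its competitors combined. A quick sketch of that arithmetic:

```python
# Check the capacity multiples quoted above: a vendor with share s of
# deployed capacity is s / (1 - s) times the size of everyone else combined.
def multiple(share):
    return share / (1 - share)

print(round(multiple(0.83), 1))  # 2014: roughly 5x the rest combined
print(round(multiple(0.91), 1))  # 2015: roughly 10x the rest combined
```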
There is also a global ‘land grab’ taking place as public-cloud vendors launch more regions and set up shop in more countries. For governments and companies that care about the jurisdiction of their data, a local region helps to validate cloud computing as a viable option. It also boosts adoption more than you would expect. Last year AWS had just launched its 10th region in China and was rumored to be launching in Germany, and it did eventually launch in Frankfurt. Conversations while visiting Germany before and after this occurred confirmed that the “local launch” boosted acceptance and usage of the public cloud among local companies.
Amazon is scheduled to launch its next AWS region in India in 2016. In the meantime Microsoft has taken the lead in regional deployments, taking its own region count from 15 to 17, driven largely by the requirements of its customer base for SaaS applications like Office 365. In contrast, Google isn’t playing this game at all — their data centers are located in low cost locations, such as Eastern Asia, Western Europe and the Central United States. The company has not added Google Cloud regions in any more of its data centers in the last year. Google has however changed its zoning and regional model to be similar to that of AWS, with three co-located zones per region; previously the company only had two Central US locations that were too far apart for applications that depend on low latency communications.
In my talk at Structure, I called out Digital Ocean as a high growth cloud-infrastructure supplier to watch, and the company has continued its growth over the last year. According to Netcraft, a web-hosting market-share-analysis company, Digital Ocean has risen from the 8th-largest provider of website-hosting services in May 2014 to the 2nd-largest today. The company has also more than doubled its hosted-sites count. Just this month, Digital Ocean announced it received $83M in new venture funding and that the company currently operates around 10,000 servers.
The Year of Docker
In the platform as a service (PaaS) space, under the headline “Docker all the things,” I talked about how PaaS-provider Cloud Foundry was leading the industry with its market share, but that Docker, an open platform for developers to build, ship, and run distributed applications, was disrupting the entire PaaS stack. Docker is leading this disruption by disaggregating the PaaS layer and effectively commoditizing it into “somewhere to run Docker containers.” Since then, Docker and its ecosystem have continued to grow at a rapid pace. In September 2014 PaaS providers CoreOS and Cloud Foundry tried to pull back from Docker and create an alternative container platform due to concerns that Docker was adding features that competed with their own products. In June 2015 Docker was able to address the companies’ concerns and gather the leading PaaS providers back around a single container-runtime standard; Docker donated its own implementation, called runC, to a new standards body operated by the Linux Foundation.
In the battle for the datacenter, last year VMware appeared to be threatened by lower cost and more highly automated alternatives from Docker and Openstack. I also highlighted Mesos, a cluster manager that simplifies the complexity of running tasks on a shared pool of servers, as a more scalable and developer-oriented threat to Openstack. VMware has responded by moving to include management of Docker in its toolset and released its own Docker-optimized Linux distribution, called Photon. The company has also built a way to create virtual machines (VMs) in seconds, rather than minutes, and promotes the safety of wrapping containers in a VM, while minimizing overhead compared to other VM technologies. With VMware’s large and established ecosystem, this legitimizes Docker’s technology and will speed up its enterprise adoption. VMware is effectively gambling that by disrupting itself now the company will remain relevant in the long term.
Open-source cloud-computing software Openstack has continued to mature and has become the default environment for many new datacenter deployments, although at a smaller scale than expected. I previously predicted that the Openstack ecosystem would be co-opted by large, enterprise vendors, such as Cisco, HP, IBM and Oracle. Over the last year some of the Openstack and related cloud-based startups have struggled to find a market and have then been bought. Cloudscaling was acquired by EMC, Metacloud and Piston Cloud were bought by Cisco, Nebula’s team was hired by Oracle, Bluebox was purchased by IBM, Eucalyptus was bought by HP, and eNovance and Inktank were both acquired by Red Hat. The “last man standing” award goes to Mirantis, a cloud-computing-services company that has created a successful business by helping organizations get their Openstack installations to work properly. From a technology point of view Openstack has some challenges: it was late to add developer-oriented functionality, and the Neutron software-defined networking project has had problems scaling.
For scalable datacenter deployments Mesos appears to be the current winner. The rival Kubernetes technology from Google is interesting for smaller installations, in which it appears to be a useful layer that sits on Openstack as a Docker runtime. Mesos, however, has production installations at Twitter, AirBnB and others, showing that it can be used to create very large private cloud environments. Mesosphere, the company that delivers Mesos, took its datacenter operating system (DCOS) implementation from beta to production status earlier in 2015.
Docker support in public-cloud environments has also matured, with AWS producing a Mesos-like EC2 Container Service (ECS) and Azure supporting Linux-based Docker as well as creating a similar Windows-based container technology; Google has Google Container Engine, which is based on Kubernetes, and Digital Ocean is working with Mesosphere.
Moving up the stack to the application layer, I pointed out last year that software as a service (SaaS) was being adopted as the standard delivery model for applications, that enterprises were adopting it, and that SaaS was enabling rapid global coverage for vendors, since vendors would no longer need on-site installation and support staff in every market. The total dollar value of SaaS is actually much larger than that of infrastructure cloud. The large SaaS vendors such as Salesforce and Workday continue to do well, venture capitalists continue to invest strongly in SaaS providers, and some of the large, enterprise vendors are undergoing rapid migrations to SaaS. A large proportion of the capacity of Azure is actually there to support the many Microsoft SaaS products, including Office 365. IBM has integrated its acquisition of Softlayer and is using it to support the company’s SaaS offerings. Oracle and HP are building out cloud offerings to support their SaaS application businesses. This lets these enterprise vendors claim a large amount of “cloud revenue” even though their deployed IaaS capacity is tiny compared to AWS.
My final comment from 2014 was to note the rise of Google’s Go programming language. Developer adoption of Go has continued and now almost everything new and interesting is being written in Go. The Docker ecosystem uses Go for almost everything; the same goes for CoreOS and Hashicorp’s suite of tools as well as PaaS offerings from Cloud Foundry and Apcera, and many of the newer SaaS products.
Looking forward, I see Docker maturing to become a standard production tool and expect it to have many enterprise deployments by the end of 2015. This would reflect a very high rate of adoption for Docker containers, since technologies usually take several years to diffuse from companies that are considered early adopters into traditional enterprises. The high rate of adoption will be fueled by demand from business users, as organizations aim to speed up delivery of innovative new cloud services (and from the developers tasked with creating those services), as well as by the broad supply of Docker support from vendors across the public cloud, private cloud and datacenter infrastructure markets.
The challenge for Docker is to manage its ecosystem carefully as it grows, and rapidly add the features needed to support production deployments. So far Docker has managed to head off a split in the ecosystem and recently announced a plug-in architecture that lets other companies easily replace, add or extend core functionality.
Looking at the future of cloud technology, in addition to the existing trends mentioned above, a new trend is emerging. The capabilities of AWS, Azure and Google Cloud are now a superset of commonly used datacenter capabilities, not a subset as they used to be. In effect, if you want to create a state-of-the-art datacenter today, you can reliably create it in a public cloud, at scale, with very high functionality, in far-flung parts of the world, immediately. The downside is that it’s hard for IT professionals to keep up with all the new capabilities provided by the public-cloud vendors; as a result, events like AWS re:Invent, Amazon AWS’ annual user conference, have a rapidly growing training and certification component. Digital Ocean seems to be the exception here, as the company appears to make a virtue of its ease of use and lack of features.
And finally, for me, the most interesting new technology in the last year is AWS Lambda. It is a highly secure, event-driven computing model that creates a new container to process each event. Events can be generated whenever data changes, to create a daisy-chain effect that implements a business process. AWS charges for Lambda by the tenth of a second for the time the container is running, and allows for a million requests each month for free.
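To make the model concrete, here is a minimal sketch of a Lambda-style handler in Python. The event shape loosely follows an S3 “object created” notification, but the keys and the transform step are hypothetical; a real function would use an AWS SDK to write its result somewhere, and that write is what fires the next function in the daisy chain.

```python
# Sketch of the Lambda event-driven model: one fresh container per event,
# billed in 100 ms (tenth-of-a-second) increments of execution time.
# The event shape and field names here are illustrative, not an exact API.

def handler(event, context=None):
    """Process one data-change event and emit follow-on records."""
    results = []
    for record in event.get("Records", []):
        key = record["s3"]["object"]["key"]
        # Transform step: here we just tag the object as processed.
        # Writing this result out (e.g. to another bucket or stream)
        # would trigger the next function in the chain.
        results.append({"source_key": key, "status": "processed"})
    return results

# Local simulation of a data-change notification.
event = {"Records": [{"s3": {"object": {"key": "uploads/report.csv"}}}]}
print(handler(event))
```

Because each invocation handles one event and holds no state between events, the chain of functions effectively becomes the business process itself.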
If you think about how hackers compromise IT systems by breaking into them, it is clear that a service that only exists for a fraction of a second is an interesting hardening technique. Along with the AWS identity and access management (IAM) features and secure key management services, every interaction and item of data can be controlled, encrypted and audited. I think that eventually this model will become a best practice to protect the most critical data, and as data centers keep getting hacked, more people will realize that state-of-the-art, highly secure systems should be built using cloud technologies.
Structure 2015 takes place November 18th and 19th at the Julia Morgan Ballroom in downtown San Francisco. Adrian will be giving his presentation early in the day on the 18th, so make sure you get your tickets here.