The Cloud Wars of 2017

Simone Brunozzi
Dec 19, 2016 · 11 min read

Ten months ago I published The Cloud Wars of 2016, which racked up more than 50,000 views on Medium (a LOT of traffic for a niche topic like this one) and countless comments and shares. It also prompted people to reach out directly for advice: private equity funds, tech people, and employees at the big cloud players.

This is a short summary of that post:
Most companies pretend to have built a “cloud”, but they haven’t. The true innovation is Cloud Computing: automated, API-driven, on-demand, pay-as-you-go infrastructure, with on-demand software on top of it.
Amazon Web Services (AWS), the market leader, needs to expand into private IT through an appliance.
Microsoft is gaining share and pushing Azure through existing enterprise deals.
Google is waking up and gaining momentum in the public cloud space. It should do more to win customers and to create the right environment for developers to stick around.
Conclusions: Cloud Computing is changing, money can be made in Enterprise IT, large companies will suffer.
AWS will dominate. Microsoft and Google might be relevant if they stay focused.

We are now in December 2016, and it’s time for a “refresh” post.

A Cloud war?

First of all, why call it a “war”? I called it right a year ago:
Because the lock-in effects of these solutions are so powerful that, once you have moved to one of their “Clouds”, it’s going to be very hard to move elsewhere.
You end up with less bargaining power towards your sole “drug dealer”, which means you have less leverage to demand discounts when trying to negotiate a better deal.

Let’s see what I got right — starting with AWS

Let’s start with the market leader.
As I foresaw a year ago, AWS is indeed trying to penetrate private IT with an appliance, starting with AWS Snowmobile (and, to a much lesser extent, AWS Snowball).

Credit: Jeff Barr (Hi Jeff!)

What’s AWS Snowmobile? It’s a Data truck, which allows you to move up to 100 PB (yes, with a “P” — which equates to about 100,000 TB) of data between two Data Centers, at the speed of about 1TB per second (wow!).
AWS states that “Snowmobile is a ruggedized, tamper-resistant shipping container 45 feet long, 9.6 feet high, and 8 feet wide. It is water-resistant, climate-controlled, and can be parked in a covered or uncovered area adjacent to your existing data center.”
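To put those numbers in perspective, here’s a rough back-of-the-envelope sketch (my own arithmetic; the network link speeds below are just illustrative assumptions, while the ~1 Tb/s figure is the Snowmobile fill rate mentioned above) comparing how long it would take to move 100 PB over the wire versus into the truck:

```python
# Back-of-the-envelope: how long does it take to move 100 PB?
# The link speeds below are illustrative assumptions, except the ~1 Tb/s
# Snowmobile fill rate mentioned above.

HUNDRED_PB_IN_BITS = 100 * 10**15 * 8  # 100 petabytes expressed in bits

def transfer_days(link_gbps):
    """Days needed to push 100 PB over a link of the given speed (in Gb/s)."""
    seconds = HUNDRED_PB_IN_BITS / (link_gbps * 10**9)
    return seconds / 86_400

print(f"10 Gb/s internet link  : {transfer_days(10):>6,.0f} days")    # ~926 days (~2.5 years)
print(f"100 Gb/s dedicated link: {transfer_days(100):>6,.0f} days")   # ~93 days
print(f"Snowmobile at ~1 Tb/s  : {transfer_days(1_000):>6,.0f} days") # ~9 days to fill
```

In other words, even a dedicated 100 Gb/s link running saturated around the clock would need roughly three months, which is why trucking the bits starts to make sense at this scale.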

Well… But why do you need THIS specific Data truck? Couldn’t you just do it with a lot of storage devices? Maybe… But AWS put a lot of engineering effort into this, and it shows. Existing solutions are simply not built for very large scale operations, or they are just too expensive.
AWS got a few things right, from the focus on security to the tight integration with other AWS services (although I think Glacier is an inferior product compared to Google Nearline).
At the same time, only a few customers are ready to take advantage of this. “Change management”, and migrating and updating old IT, is going to be more challenging than simply moving things into a mobile Data Center.
And yet, this is a strong first move.

AWS is playing a smart game. Those petabytes of “old” Data, entangled in “old” Data Centers until now, can suddenly become “AWS-ready”. Nothing now prevents customers from consuming AWS services for these datasets; and nothing prevents them from ditching an old Data Center and moving EVERYTHING to AWS.
Maybe these containers will sit next to “old” Data Centers only for a few days and, once loaded, head back to AWS. Maybe they’ll stay there for months and “extend” a customer’s Data Center until the customer is ready to make the move.

This is NOT the BIG “appliance” move I mentioned in my post a year ago, but it’s a strong step in that direction.

AWS in a Box

Building an “AWS in a box” requires much more than this, however, which is why it won’t happen anytime soon.
It requires AWS to rethink/redesign their hypervisor layer (they use a heavily modified version of Xen), the network plane, the security model, and many other components… Which takes time, and it’s risky.
They would also need to focus on high performance, because that’s the type of infrastructure they would be competing with.
Try to run Oracle on AWS… And then try to run it on specialized hardware: the difference is night and day, although you pay dearly for that status quo.

As much as I like AWS Snowmobile, in order to win the majority of private IT spending, AWS needs to bring the “iron” to its customers, and not the other way around.


Banks, financial institutions, and similar organizations are (for the most part) not using AWS today because it doesn’t fit their model. They want complete control, and they want to keep using most of their Data Centers to keep delivering on their compliance, regulatory, and data residency needs, and so on.
I still think that AWS will build an “AWS in a box” eventually… Unless Google beats them at this game. But before we dig into Google…

What about Oracle?

This is what I wrote ten months ago:
If your “Cloud ERP” allowed me to scale my capacity on demand and pay only for what I use, and if security patches and updates were automatically applied to my systems without breaking anything, Oracle would be out of business and a lot of customers would be happy.
Oracle doesn’t do that, because it would kill its main business, which still sells licenses year over year.

And in fact, their recently announced “Cloud” offering (at Oracle OpenWorld 2016, the best oxymoron ever invented) is NOT a real Cloud offering. A real one would kill their business. Oracle is not bold enough… And maybe they don’t need to be. It’s easier for them to defend the fort than to fight the enemy on its preferred ground.
I don’t expect Oracle to make any significant moves in 2017.
They’re a force to be reckoned with; they OWN their customers, and their lock-in is legendary.
They can still squeeze their customers, force them to swallow Oracle’s own version of a cloud, and buy some time.

Microsoft

A year ago I said:
Microsoft will simply try to beef up their offering, and sell the “hybrid cloud” story to as many enterprise companies as possible.

And this is exactly what has happened (and yes, it was easy to predict).
What will Microsoft do now?
They’ll make sure that Azure Stack becomes solid and reliable, keep investing in educating customers, feed the partner ecosystem with great deals and discounts, and try to gain market share.
Azure is not doing too well on the hiring side of things… As an example, Brendan Burns spent only enough time to sip some espressos before leaving to start a Kubernetes-related startup (correction: Brendan Burns is still at Microsoft; my source was wrong!). Not being able to retain top-level talent is a bad sign.
Compare that with AWS’ ability to hire Adrian Cockcroft. He’s so good that he doesn’t even need an introduction.

Adrian Cockcroft (image credit)

If a year ago the public cloud market share was 90% AWS / 9% Azure / 1% GCP (Google Cloud Platform), a year from now it might be 87% / 10% / 3%. Not a big change… But change takes time.
And comparing AWS and Azure today is still mostly a clickbait thing. AWS is far superior. Microsoft will need to address it. Like NOW. Like FOR REAL.

The 20 Billion dollar wall for AWS

AWS is currently grossing about $13B a year. By approximately August 2018, based on some back-of-the-napkin calculations (whose details I won’t share with you), they will hit $20B; a quick sketch of what that trajectory implies follows the list below.
Their growth will start to show evident signs of slowdown, because:
1) They are quickly saturating their primary market;
2) They are not building services that can attack new markets (or segments) quickly enough.
3) Container adoption will make Azure/GCP offerings more attractive to customers, especially in 2018 and beyond.
4) Microsoft, and especially Google, are aggressively attacking them on pricing.
5) Despite winning Adrian, AWS is losing talent left and right, primarily to Google.
6) Their ecosystem is scared; every six months, several successful startups are “killed” by a new service launched by AWS, and some of them saw AWS as a partner right up until the killing blow. This means that Microsoft, in particular, will be able to attract these startups and use them to push Azure into enterprise accounts more and more.
7) I won’t comment on the recent AWS/VMware partnership (disclosure: I was employed at VMware until early 2016; I also don’t have any non-public information about that deal), but let me just say that, in my personal view, it is not as strong a partnership as it looks on paper. Yaron Haviv, one of the most technologically savvy CTOs I’ve ever met, has an opinion about it.
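Coming back to the $13B-to-$20B trajectory above: here is a rough, illustrative sketch of what that projection implies about growth rates (my own arithmetic, not the napkin math mentioned earlier; the “recent growth” figure below is an approximation):

```python
# What annualized growth does "$13B now, $20B by ~August 2018" imply?
# Illustrative arithmetic only; the ~55% figure below is a rough
# approximation of AWS's recent year-over-year growth, not an exact number.

current_run_rate = 13.0   # $B per year, roughly where AWS is today
target_run_rate = 20.0    # $B per year, the ~August 2018 projection
months_away = 20          # December 2016 -> August 2018

implied_annual_growth = (target_run_rate / current_run_rate) ** (12 / months_away) - 1
print(f"Implied annualized growth: {implied_annual_growth:.0%}")  # ~29%

recent_yoy_growth = 0.55  # rough approximation of AWS's recent growth
print(f"Recent year-over-year growth (approx.): {recent_yoy_growth:.0%}")
# Hitting $20B "only" by mid-2018 already assumes growth roughly halves.
```

In other words, the projection itself already bakes in a significant slowdown, which is exactly the point of the list above.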

Google / GCP

Last year I was a bit bullish on Google, and… Oops, let me call them GCP, praying that they will eventually drop the acronym and pick a proper name… And perhaps even spin out and become a separate Alphabet company, like Waymo just did.
As I was saying, I am bullish on GCP because they are acquiring talent from AWS, they have superior technology (particularly their backbone and their long-term experience with containers), a lot of traction with Kubernetes… And a huge desire to become relevant.
They are building a strong team, with focus on enterprise customers. The leadership team seems committed to GCP for the very long term.
Kubernetes is their Wayne Gretzky move: “I skate to where the puck is going to be, not where it has been.” Linux containers will become prevalent, and GCP wants Kubernetes to (continue to) be the King of Orchestration.
2017 will be their “test” year: if they get a few more great “logo” customers (like Snapchat or Evernote, or possibly a large financial institution and/or an oil/gas/manufacturing giant; I would bet on Dropbox and (more) Apple if I were them), then it will be only a matter of time before their technical superiority starts to pay off in the market.
If I were an investor, I would be paying a lot of attention to GCP, as it will affect how AWS and Azure perform in the long term.

Problems still to be solved

In 2016, the main problem with Cloud was… Complexity. In 2017, it will still be the same. And in 2018.

What do I mean by complexity?

  1. It’s hard to plan properly when what you’re doing is mostly catching up with new technology and paying the interest on your technical debt. Public Clouds simply offer too many choices.
    Public Cloud providers should rethink how their customers consume their services and operate their cloud. Right now it still feels like using a bunch of individual web services, loosely coupled together, almost as if different companies had developed them. Too often, especially when troubleshooting, it’s hard to figure out how to solve the problem with the tools provided. It’s also hard to get the same telemetry and control that companies are accustomed to in private IT environments.
  2. It’s hard to train or retrain your workforce (that’s why I helped start, and invested in, Cloud Academy, for example), especially when the pace of evolution of technical services is accelerating. This will remain a big issue for years to come. Large customers also don’t like to “adopt a single religion” when it comes to Cloud, but right now it’s particularly difficult to learn “how to build a distributed system” without learning the details of a specific Cloud platform; most of those specifics don’t easily transfer to another Cloud platform, and that’s a problem.
  3. On Linux Containers, let me say that AWS’ Blox failed to excite me. It seems a timid attempt to undermine Kubernetes’ growing success, and it will take much more than that (for one, there’s a Reddit discussion about it that is worth reading). I suspect that Adrian Cockcroft will play a huge part in how AWS’ open source efforts play out, particularly in relation to Linux Containers. And yet, for most companies, it is still unclear how Container technology will apply to their existing IT. I doubt that Mesosphere has the right combination of skills and tools to succeed there. HashiCorp seems a much more interesting choice, if they can bring integration to the huge set of individual tools they’ve launched so far, and assuming they can find a business model that works while open sourcing so much of their IP (Intellectual Property).
  4. Open source has an issue. It’s obvious that Cloud providers can monetize Open Source tools by selling you the infrastructure (e.g. Amazon RDS) and/or enhanced capabilities (e.g. Amazon Aurora), but in return they don’t contribute enough to the projects themselves. That’s unfair, and it’s going to diminish the quality of open source tools in the long term. The same Yaron Haviv I mentioned earlier has something to say about this too.
  5. Last, but not least, it’s clear that most companies will need to operate multiple environments (private IT; private Cloud; public Cloud A; public Cloud B) and multiple technologies across the board (VMs and Containers; Linux and Windows).
    This is also why (small shameless plug here) a year ago I joined MosaixSoft: we are building a system that essentially helps you handle the complexity of multiple environments, using a novel approach. </shameless plug>
    Why do companies NEED multiple environments? Well, they have no other choice. They cannot abandon their “old” IT fast enough, but they HAVE to adopt Public Cloud for serious projects. The result is simply that for the next few years you will keep hearing about “Hybrid Cloud”, under a few different definitions. Remember when AWS would never use the term “Hybrid Cloud”? Well, times are changing.

Conclusions

I’ve observed the big Cloud players throughout 2016, and it has been really interesting. Public Cloud technology has clearly reached a level of maturity that makes it a very compelling “additional” option for large IT environments.

A year from now, I expect AWS’ current domination to start being questioned; I expect Microsoft to get most of the attention; and I expect GCP to start warming up the engine to become a relevant player in 2018 and beyond, possibly with the ability to challenge AWS’ penetration of greenfield IT and startups more seriously than Microsoft.
I also expect a bloodbath. One, possibly two, very large, successful, publicly traded IT companies will start suffering a lot, both from the success of AWS and from the effect that AWS has on existing incumbents.
This might be the subject of another blog post soon.

Retirement coming for some traditional IT companies (image credit)

[Disclaimer: I worked at Amazon Web Services from 2008 to 2014, and at VMware from 2014 to 2016. The opinions expressed above are mine alone; they do not represent my past or current employers’ views, nor do I share any sensitive and/or confidential information obtained during my time at these companies.]
