The Top 4 Public Cloud Myths

Jai Menon
Published in Jai Menon's Blog
Jan 2, 2017 · 8 min read

The public cloud has been a disruptive force in our industry. More and more workloads are moving to the public cloud because of its many benefits. Current conventional wisdom is that the public cloud will ultimately be the place most workloads will run, and there is no need to look at alternatives.

While the public cloud has been and continues to be an amazing success story, not everything you hear about it is true. Here are 4 commonly held misconceptions.

1. It is easy to run workloads on the public cloud. “I want it to be as easy to use as AWS” is a common refrain. However, public cloud complexity is rising, and the claim that the public cloud is “easy to use” becomes less true with each passing day as the large public cloud providers (AWS, Microsoft, Google) add services and capabilities at a mind-boggling rate (roughly 3 new ones per day for AWS). It is hard to maintain a superior user experience in the midst of such explosive growth in the number of offered services.

It remains true that getting started and building apps on public clouds is easier than the alternatives. However, if you are running a production application and want to use the cloud optimally, leverage its many features, and ensure you are getting the best performance at the best price, that is getting harder. As an example, AWS offers 9 different types of compute services, EC2 being one of them. EC2 alone offers 47 instance types to choose from for your app (with different CPU, memory, storage and networking options). There are also 8 different storage service choices (EBS, instance storage, S3, Glacier and 4 others), and EBS itself has 4 volume types (2 SSD, 2 HDD). Some volume types charge by GB used; some also charge by IOPS consumed. You cannot mix and match instance types and volume types willy-nilly; only some volume types go with each instance type. All these choices make it hard for users to be sure they are picking optimally for their application's needs, and if the wrong choice is made, correcting it can be disruptive.
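To make the pricing interplay concrete, here is a minimal sketch, in Python, of how monthly charges add up differently for the two SSD volume families. The per-GB and per-IOPS rates are illustrative placeholders roughly in line with published us-east-1 pricing at the time of writing, not authoritative numbers; real prices vary by region and change over time.

```python
# Rough monthly EBS cost estimator. Rates below are illustrative placeholders,
# not official AWS prices; check the current AWS price list before relying on them.

GP2_PER_GB_MONTH = 0.10      # general-purpose SSD: charged per GB provisioned
IO1_PER_GB_MONTH = 0.125     # provisioned-IOPS SSD: charged per GB provisioned...
IO1_PER_IOPS_MONTH = 0.065   # ...plus a separate charge per provisioned IOPS

def monthly_cost(volume_type: str, size_gb: int, provisioned_iops: int = 0) -> float:
    """Approximate monthly charge for a single EBS volume."""
    if volume_type == "gp2":
        return size_gb * GP2_PER_GB_MONTH
    if volume_type == "io1":
        return size_gb * IO1_PER_GB_MONTH + provisioned_iops * IO1_PER_IOPS_MONTH
    raise ValueError(f"unhandled volume type: {volume_type}")

# Two volumes of the same size, very different bills:
print(monthly_cost("gp2", 500))        # ~$50/month
print(monthly_cost("io1", 500, 5000))  # ~$387/month
```

The exact dollar amounts are not the point; the point is that two volumes of the same size can differ in cost by almost an order of magnitude depending on the type and the IOPS you provision, which is exactly the kind of choice that is easy to get wrong.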

Controlling usage costs on public clouds requires monitoring tools like AWS CloudWatch and best practices such as getting rid of unattached EBS volumes after the original application goes away, deleting older snapshots on a regular basis, releasing disassociated IP addresses, upgrading to newer instance generations, rightsizing instances and volumes, starting and stopping instances on a schedule, and using reserved instances for 80% or more of your needs.
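As an illustration of what that vigilance can look like in practice, the sketch below uses boto3, the AWS SDK for Python, to report (not delete) unattached EBS volumes and unassociated Elastic IPs in a single region; it assumes AWS credentials are already configured and is a starting point rather than a complete cost-hygiene tool.

```python
# Minimal cost-hygiene audit with boto3: report unattached EBS volumes and
# unassociated Elastic IPs. Read-only; review before cleaning anything up.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Volumes in the "available" state are not attached to any instance,
# yet their provisioned capacity is still being billed.
volumes = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]
)["Volumes"]
for vol in volumes:
    print(f"unattached volume {vol['VolumeId']}: {vol['Size']} GB {vol['VolumeType']}")

# Elastic IPs without an AssociationId are allocated but not in use,
# which also incurs a charge.
for addr in ec2.describe_addresses()["Addresses"]:
    if "AssociationId" not in addr:
        print(f"unassociated Elastic IP: {addr['PublicIp']}")
```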

In addition to compute and storage services, AWS also offers 6 database services, 5 migration services, 5 networking services, 6 developer tools, 9 management tools, 9 security services, 9 analytics services, 4 AI services, 6 mobile services, 3 app services, 4 messaging products, 2 business productivity apps and 3 IoT offerings as of this writing. And, as I said before, they are relentless in adding new services, at a rate of roughly 1,000 per year, or 3 per day, making it hard for users to keep up with the constant change. The same points apply, to a lesser extent, to the other public clouds. All of this is making public clouds more complex. The 2016 Future of Cloud Computing Survey indicates that cloud complexity has risen as an inhibitor compared to earlier surveys, and that sales through indirect channels and VARs are up 3X from last year as customers increasingly go through intermediaries to help them navigate their journey to the public cloud.

2. Public cloud is cheap. Comparisons of public cloud TCO to the TCO of on-premises infrastructure (on-prem for short) are complicated and depend on many factors that are customer- and use-case-specific. The type of infrastructure assumed for the on-prem deployment significantly affects cost. Modern on-prem infrastructures such as hyperconverged systems (Nutanix, SimpliVity) or emerging on-prem app-centric, composable cloud platforms that offer usage-based pricing and application services (Cloudistics) have significantly lower capex and opex than traditional 3-tier infrastructures with separate SANs, and will compare more favorably. For the public cloud, the types of instances used, the types of EBS volumes used (provisioned-IOPS volumes are expensive), and whether one is using reserved instances (which can be up to 75% cheaper) all determine how the cost equation plays out. Newer serverless architectures (Lambda, in AWS) can cut costs further (by as much as 60%).
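To show how much the purchasing model alone can swing the equation, here is a small illustrative Python calculation comparing on-demand and reserved pricing for one always-on VM over three years. The hourly rates are hypothetical placeholders chosen to reflect the "up to 75% cheaper" case, not quoted AWS prices.

```python
# Illustrative 3-year cost of one always-on VM under two purchasing models.
# Hourly rates are hypothetical; the point is the relative gap, not the totals.

HOURS_PER_YEAR = 24 * 365
YEARS = 3

on_demand_rate = 0.20   # $/hour, hypothetical on-demand price
reserved_rate = 0.05    # $/hour effective, hypothetical 3-year reservation

on_demand_total = on_demand_rate * HOURS_PER_YEAR * YEARS
reserved_total = reserved_rate * HOURS_PER_YEAR * YEARS

print(f"on-demand over 3 years: ${on_demand_total:,.0f}")  # ~$5,256
print(f"reserved over 3 years:  ${reserved_total:,.0f}")   # ~$1,314
```

For steady, predictable workloads, that gap is why the reserved-instance discipline mentioned above matters so much; for spiky workloads, the picture looks quite different.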

I have been involved in several of these comparisons between public clouds and on-premises infrastructure for different customers, and making sure the comparisons are apples to apples is hard, particularly when trying to properly account for operational costs. Nevertheless, in my experience, I have not found the public cloud to be cheap (even when it is cheaper) compared to modern hyperconverged or superconverged cloud platforms, except when the number of VMs required is small. Furthermore, without careful monitoring and constant vigilance, it is easy to rack up significant additional and often unaccounted-for costs from unattached EBS volumes, stranded snapshots, unused assets, older instance generations, and overprovisioned instances and volumes.

Five years ago, lower cost was cited as one of the top reasons for going to the cloud; now agility and scalability are the top 2 reasons customers cite (see the 2016 North Bridge Future of Cloud Computing survey).

3. Everything will eventually go to the public cloud. There is a sense in our industry that all the momentum is one way: new workloads are created directly for the public cloud, and legacy workloads migrate from on-premises infrastructure to it. If one follows this logic, it is simply a matter of time before all workloads run in the public cloud.

Current estimates are that anywhere from 10% to 25% of all workloads run on public clouds today. Based on publicly available material, VMware (the most pessimistic) thinks 50% of workloads will be in the public cloud by 2030, Huawei thinks 85% will be there by 2025, and Lenovo thinks 50% will be there by 2020 and 85% by 2025. In summary, everyone agrees there is rapid movement to the public cloud, and most believe that, eventually, practically all workloads will be running on public clouds.

However, consider the following points.
(a) The emerging workloads of the future will need significant real-time processing at the edge (edge computing), not in the cloud (cloud computing). Self-driving cars and drones cannot afford to send image data to the cloud and back for processing; it would be too slow and consume too much core/backhaul bandwidth. If the 13,000 taxicabs in NYC were all self-driving, at 1 Gbit/sec per car they would generate 13 Tbits per second, or about 26 exabytes of data per year (see the back-of-the-envelope calculation after this list), way too much to be sent to the cloud for processing. There are already more MIPS at the edge than in the cloud today, and I expect more MIPS to be added at the edge than in the cloud in the years ahead.

(b) On-prem infrastructures have improved significantly in the last few years with the rise of agile and economical hyperconverged and superconverged systems; previous comparisons of the public cloud to expensive and inflexible 3-tier siloed infrastructures need to be revisited. Even newer on-prem, app-centric, scale-out cloud platforms that support public-cloud-like usage-based pricing and higher-level application services are emerging, providing a public cloud experience on-premises. Some of them also eliminate the need for the customer to own and manage the infrastructure. Gartner says the on-premises hyperconverged systems market grew 79% in 2016, and estimates from multiple analyst firms are that it will grow at anywhere from a 45% to 60% CAGR between 2016 and 2022. It appears that attractive, public-cloud-like, on-premises options will be available for those that do not wish to move to the public cloud, or for those that want to move back from it. This is particularly true for the 75% of workloads in the world that have fairly predictable IT resource requirements (the remaining 25% are spiky and unpredictable and are clearly better suited to the public cloud).

(c) The industry is seeing some fraction (small: 10–15%) of customers moving back to on-prem infrastructure after giving the public cloud a try, sometimes called “unclouding”. They are coming back for reasons of cost (Dropbox), performance (Instagram: Jay Parikh of Facebook, which acquired Instagram, said at GigaOM 2015 that they needed only 1 on-prem server for every 3 servers on AWS and that upload times improved 80% once they moved on-prem) and governance/control (issues that were not important when a company was small, but became important as it grew).

(d) Some customers will never move to the cloud, for reasons such as personally sensitive data (Workday said this at GigaOM 2015), too much data to move to the public cloud (a single MRI image can run anywhere from 5 to 25 GB per subject, including both unprocessed and preprocessed data), high performance needs, data sovereignty, regulations, perceived loss of security, legacy applications and fear of lock-in (the 2016 North Bridge Future of Cloud Computing survey shows lock-in rising as an inhibitor to public cloud adoption).

(e) For customers that do not want to deploy on-prem infrastructures, other alternatives such as co-location and cloud hosting are seeing steady growth.
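As promised in point (a), here is a back-of-the-envelope version of the self-driving-taxi calculation in Python. The 12-hours-per-day duty cycle is my own assumption, chosen because it lands near the 26 exabytes-per-year figure quoted above; at 24/7 operation the total roughly doubles.

```python
# Back-of-the-envelope data volume for a fully self-driving NYC taxi fleet.
# The 12 hours/day of active driving is an assumed duty cycle, not a measured figure.

CARS = 13_000
BITS_PER_SEC_PER_CAR = 1e9     # 1 Gbit/sec of sensor data per car
HOURS_ACTIVE_PER_DAY = 12      # assumed duty cycle
DAYS_PER_YEAR = 365

fleet_bytes_per_sec = CARS * BITS_PER_SEC_PER_CAR / 8
bytes_per_year = fleet_bytes_per_sec * HOURS_ACTIVE_PER_DAY * 3600 * DAYS_PER_YEAR

print(f"{fleet_bytes_per_sec * 8 / 1e12:.0f} Tbit/sec across the fleet")  # 13 Tbit/sec
print(f"{bytes_per_year / 1e18:.1f} exabytes per year")                   # ~25.6 EB
```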

For these reasons, particularly the first two, I don’t see more than 50% of workloads ever running in the public cloud.

4. Long term, there will be no more than 5–10 public cloud providers. The prevailing wisdom is that ultimately there will be a relatively small number of big public cloud providers who will kill off everyone else, the few remaining ones being the likes of AWS, Microsoft, Google and maybe a few others.

I think many people ignore SaaS providers when they say this, because it is clearly not true in the SaaS market. Many people do not even list Salesforce when they talk about the big public cloud providers, yet, by revenue, it is larger than anyone else except AWS. So what they really mean is that there will be very few IaaS vendors long term.

However, even for IaaS, I believe there will likely still be hundreds of providers able to compete successfully with AWS, Azure and Google Cloud Platform and carve out a niche for themselves, for the following 4 reasons:

a. They can be simpler than AWS and the other big cloud providers by focusing on a basic set of services. As AWS and others relentlessly introduce new services, these smaller competitors can compete by providing a superior user experience. We are already seeing this with DigitalOcean and Linode, among others.

b. They can have better domain expertise in a particular vertical, such as healthcare, or in a particular application area.

c. They can be regional, support the data sovereignty requirements of many companies, and play to customers' fears of storing their data with US- or China-based companies whose governments might have access to it.

d. They can leverage modern scale-out application cloud platforms and not have to develop their own special hardware and infrastructure software in the way the Big 3 have had to do.

In summary, the public cloud has many benefits and getting started and building apps on the public cloud continues to be easier than many alternatives. However, not every workload belongs in the public cloud. For many workloads, other alternatives such as edge computing or modern, on-prem, app-centric cloud platforms may be better. I personally don’t believe we will ever see more than 50% of workloads running in the public cloud.

Jai Menon

IBM Fellow Emeritus, Former IBM CTO, Former Dell CTO in Systems. Forbes Technology Council. Chief Scientist @Cloudistics. Technologist, Futurist, Advisor.