5 Essentials for Service Providers to Win in the Cloud Market

Kamesh Pemmaraju
Published in Cloudel · Mar 27, 2017 · 6 min read

By various estimates, 50% of all customer workloads will move to the public cloud in the next 5 years. For service providers, this shift in the market has extraordinary business and competitive implications. The big question is how they can position themselves to compete and win against the "big three" public IaaS companies: Amazon Web Services, Microsoft Azure, and Google Cloud.

Playing a price war game is clearly a losing choice. So, what does it take to successfully compete and win in the cloud market?

Two simple answers:

1. Get the same operational efficiencies and automation as public clouds
2. Provide the best of both private and public clouds to customers

Let us look at 5 essential ingredients that service providers need to be successful. All of these fall into one of the two requirements stated above.

Lower Infrastructure Costs by Removing Silos

The public cloud market has been fiercely competitive over the past few years and has been subject to frequent price wars. But most of these price reductions have simply been a consequence of Moore's law: the relentless year-over-year decline of compute and storage prices. However, public cloud vendors did not pass all of those savings on to their customers, and so enjoyed fat margins.

In fact, service providers can achieve the same cloud economics as the public cloud vendors and can provide cheaper offerings (compared to public clouds) to their customers while still enjoying a healthy margin themselves.

On the hardware side, here is a simple comparison to consider: the cost of a physical server you can buy versus what the public cloud vendors charge. A high-end server with 512 GB of RAM and 6 TB of SSD storage can be had these days for less than $10,000, a cost typically amortized over 3 to 5 years. A VM with a similar configuration on a public cloud will cost more than $1,000 per month. So the cost of the hardware, even including electricity, cooling, and rack space, can be recovered within a year.
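The arithmetic behind this claim can be sketched in a few lines. The overhead figure below is an assumption for illustration; only the $10,000 server price and the $1,000/month VM price come from the comparison above.

```python
# Back-of-the-envelope payback calculation for the buy-vs-rent comparison.
# The monthly overhead figure is an assumption, not a quoted price.
server_cost = 10_000        # one-time hardware purchase (USD), from the text
monthly_overhead = 150      # electricity, cooling, rack space (assumed, USD/month)
cloud_vm_monthly = 1_000    # comparable public-cloud VM (USD/month), from the text

# Each month on owned hardware saves the VM rental minus the overhead.
monthly_savings = cloud_vm_monthly - monthly_overhead
payback_months = server_cost / monthly_savings
print(f"Payback period: {payback_months:.1f} months")  # prints "Payback period: 11.8 months"
```

Even with a much higher overhead assumption, the payback period stays well inside a typical 3-to-5-year amortization window.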

One of the biggest drivers of the cost of managing and maintaining the infrastructure stack is investment in hardware or solutions that create silos. Service providers can further lower their costs and remove those silos by using hyper-converged infrastructure based on industry-standard servers. They can keep costs under control with scale-out cloud designs that make it easy to start small, grow with demand, and stay right-sized for customer needs.

Lower Your OpEx: Automate Operations, Monitoring, and Patching

Service providers need to have complete visibility and control of their entire stack from the infrastructure up to applications. They need intelligent software to monitor the hardware and software stack, manage large-scale clusters, and automatically handle routine — but time-consuming and complex — operations such as failure handling, patching, security updates, and software upgrades.

Running a service with traditional sysadmin teams who execute these activities manually becomes expensive as customer demand and the number of services and projects grow, especially if the teams operate in server, storage, networking, and security silos. Furthermore, the more load the system carries, the more people you need. That approach is neither scalable nor cost-efficient.

Service providers should look for solutions that provide cloud-based monitoring and advanced analytics: tools that dramatically reduce the need for experts in each part of the infrastructure, scale linearly as the operation grows, and cut operational complexity by as much as 90%. This will help service providers improve their margins and better manage customer SLAs.

Optimize Resource Management

The better service providers can optimize resource usage and capacity against current and future customer demand, the better the availability and performance they can deliver to their customers.

A lot of this comes down to capacity planning, utilization monitoring, right-sizing of workloads, demand forecasting, and detecting zombie VMs and unused resources.
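One of the practices listed above, detecting zombie VMs, can be reduced to a simple rule: flag VMs whose utilization has stayed negligible over an observation window. A minimal sketch, with illustrative sample data and an assumed threshold:

```python
# Minimal zombie-VM detection sketch: flag VMs whose average CPU use
# stayed below a threshold over the window. Data and threshold are
# illustrative assumptions, not from any real monitoring system.
vm_metrics = {
    # VM name -> daily average CPU utilization (%) over two weeks
    "web-01":  [45, 52, 48, 60, 55, 49, 51, 47, 58, 50, 53, 46, 44, 57],
    "build-7": [2, 1, 0, 1, 2, 1, 0, 0, 1, 2, 1, 0, 1, 1],
    "db-02":   [30, 28, 35, 31, 29, 33, 27, 32, 30, 34, 29, 31, 28, 30],
}

CPU_THRESHOLD = 5.0  # percent; below this average, a VM counts as idle (assumed)

def find_zombies(metrics, threshold=CPU_THRESHOLD):
    """Return names of VMs whose mean CPU stayed under the threshold."""
    return [name for name, samples in metrics.items()
            if sum(samples) / len(samples) < threshold]

print(find_zombies(vm_metrics))  # prints ['build-7']
```

A production system would also look at network and disk I/O before reclaiming a VM, but the principle is the same: sustained low utilization is reclaimable capacity.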

Demand forecasting and capacity planning amount to ensuring there is sufficient capacity and redundancy to serve projected future demand at the required availability. Capacity planning should account for organic growth, which stems from natural service adoption and usage by customers. Intelligent predictive analytics and machine learning can greatly help here with accurate forecasting, alerting, and providing lead time for acquiring additional capacity.
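In its simplest form, such forecasting can be a trend fit: project utilization forward and estimate when it will hit a planning limit, which is exactly the lead time mentioned above. A minimal sketch with made-up utilization numbers (real systems would use far richer models):

```python
# Toy capacity forecast: least-squares linear trend over monthly
# utilization, extrapolated to an assumed planning limit.
def linear_fit(ys):
    """Least-squares slope and intercept for y over x = 0, 1, 2, ..."""
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Monthly cluster utilization (%) showing steady organic growth (illustrative).
utilization = [52, 55, 57, 61, 63, 66, 70, 72]

slope, intercept = linear_fit(utilization)
capacity_limit = 90  # plan expansion before utilization reaches 90% (assumed)
months_until_full = (capacity_limit - intercept) / slope
print(f"Projected to reach {capacity_limit}% in ~{months_until_full:.0f} months")
```

The forecast itself is trivial; the value is in wiring it to alerting so that new hardware is ordered months before capacity runs out rather than after.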

Better insight into how the infrastructure is performing can also help fine-tune the performance of end-user workloads. For example, an intelligent system monitoring a workload's storage performance might recommend using SSDs instead of spindles to increase IOPS and improve workload responsiveness.
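The SSD recommendation described above can be expressed as a simple rule: when per-spindle IOPS approach what spinning disks can deliver, or latency breaches the service objective, suggest SSD-backed storage. The thresholds below are assumptions for illustration:

```python
# Illustrative storage-tier recommendation rule. Both thresholds are
# assumed round numbers, not measurements from any specific hardware.
HDD_IOPS_CEILING = 200   # rough random-IOPS limit per spindle (assumed)
LATENCY_SLO_MS = 10.0    # acceptable average I/O latency in ms (assumed)

def recommend_storage(avg_iops, avg_latency_ms, spindle_count):
    """Return 'ssd' when the workload is pushing spindles to their limit."""
    per_spindle = avg_iops / spindle_count
    if per_spindle > 0.8 * HDD_IOPS_CEILING or avg_latency_ms > LATENCY_SLO_MS:
        return "ssd"
    return "hdd"

# 1800 IOPS across 10 spindles = 180 IOPS each, near the ceiling,
# and latency is over the SLO, so the rule recommends SSDs.
print(recommend_storage(avg_iops=1800, avg_latency_ms=14.2, spindle_count=10))
```

This is the kind of low-effort heuristic a monitoring system can apply fleet-wide; the point is that the recommendation comes from observed data rather than guesswork.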

Integrate with Public Clouds for Flexibility and Choice

At the end of the day, there are some use cases where customers should be using a public cloud, for example bursting to absorb a short-term spike in workload. Service providers need to adopt a platform that allows them to burst out to the public cloud by providing seamless migration to and from it. This will give customers peace of mind that they are not locked into one type of cloud solution and have the flexibility to choose either, based on the use case.

Differentiate, Differentiate, Differentiate

To stay relevant in this space, service providers must differentiate themselves. Whether it is ease of use, a specific vertical, or a focus on developers, they must find a way to stay focused and stand apart.

One way to differentiate is to help companies build software applications through easy-to-use self-service interfaces, with APIs and CLIs that abstract the underlying complexity of the infrastructure so developers can get their jobs done faster. Developers should be provided with pre-built application templates for popular software bundles such as databases, middleware, and messaging services, and should be able to deploy these applications with a single click to improve productivity.
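The template idea can be sketched as a small catalog that resolves a name into a full deployment spec. Every name, field, and value below is hypothetical; this is not any real platform's API, just an illustration of what "single-click" bundles:

```python
# Hypothetical one-click template catalog: each entry bundles the image
# and sizing a developer would otherwise assemble by hand.
TEMPLATES = {
    "postgres": {"image": "postgres:14", "cpu": 2, "ram_gb": 8, "disk_gb": 100},
    "rabbitmq": {"image": "rabbitmq:3", "cpu": 1, "ram_gb": 4, "disk_gb": 20},
}

def deploy(template_name, project):
    """Resolve a template into the VM spec a provisioner would launch."""
    spec = dict(TEMPLATES[template_name])  # copy so the catalog stays unchanged
    spec["project"] = project
    spec["name"] = f"{project}-{template_name}"
    return spec

print(deploy("postgres", "team-a"))
```

The "single click" is whatever front end calls `deploy`; the differentiation is that the developer never sees the CPU, RAM, and disk decisions baked into the template.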

ZeroStack as a Hosting Platform

ZeroStack focuses on using intelligent software and Artificial Intelligence technologies to create self-driving clouds that are easier to host, deploy, and use.

For the first time, hosting companies, MSPs, and regular cloud providers can run a cloud without having to deal with all the operational complexity and error-prone tasks such as failure detection and handling, patching, and upgrades. ZeroStack's smart software can quickly convert commodity hardware into a self-driving cloud to lower costs and improve margins.

To learn more, join Ajay Gulati, CEO and co-founder of ZeroStack, at HostingCon to see a demo of ZeroStack's smart platform.

Read more about the ZeroStack hosting platform benefits at this link: https://www.zerostack.com/use-cases/hosting/
