Drivers for Cloud Migration

Tushar Agarwal
GlobalLogic Cloud and DevOps Blog
6 min read · Jul 5, 2018

There can be many reasons why an enterprise might look at cloud adoption. These range from business drivers such as digital transformation to operational goals such as cost reduction. Based on our experience with enterprise migrations, we have found that these drivers fall into the categories described in the following sections. Note that while these are broad categories, an enterprise can have more than one cloud migration driver.

Hardware End-of-Life

Traditionally, enterprises running their own data centers design, size and procure hardware for specific applications based on a Bill of Materials (BoM). This includes compute (servers, blades), storage (Direct Attached Storage, SAN), networking (switches, firewalls) and other specialized equipment. Purchased hardware typically has a lifespan of 3–5 years. Intel and other chip makers have followed a tick-tock cycle of roughly 18 months (tick = a process technology shrink, tock = a new microarchitecture), which results in hardware vendors typically releasing new generations of their hardware every 3 years.

Companies will usually retire and refresh the hardware with the latest generation to take advantage of the performance and efficiency gains. Such hardware refreshes usually have to be benchmarked and sized in each iteration, and each refresh requires a significant CAPEX investment.

On the other hand, the cloud cost model is purely OPEX based: customers simply pay for what they use and can release resources that are no longer needed. Migrating applications whose hardware is due to be retired is therefore an attractive proposition. It saves enterprises significant CAPEX outlays and allows them to evolve toward a more beneficial OPEX model.

Conversely, if an enterprise has recently made significant CAPEX investments in hardware, the value of migrating those applications to the cloud may be low for the time being. In such cases, enterprises can look at the cloud for capacity expansion instead.

Cost Savings

Besides the cost savings that can be achieved by moving from a CAPEX to an OPEX model (described in the previous section), there are opportunities for significant cost optimization in day-to-day operations. When hardware is purchased, it comes with a fixed capacity (compute, storage, networking). Usually, enterprises will size and procure hardware to deal with peak loads. The downside is that outside of those peaks, the hardware is not fully utilized and thus does not deliver its full ROI.

Enterprises can use private cloud for capacity management at the application level, but at the data center level, the peak load problem still remains, along with potentially unused capacity.

Public cloud providers like AWS, Azure and Google, on the other hand, offer pay-as-you-go models with per-minute or (more recently) per-second billing. You only pay for what you need and release the resources when you no longer need them, with no upfront investment.

Moreover, as application load changes, customers can resize (or rather, rightsize) resources within minutes to match the load. All public cloud providers expose APIs for resource management and operational insights, which means rightsizing operations such as scaling in, out, up and down can be automated based on system triggers. Concepts like autoscaling allow applications to add capacity automatically when load increases and release resources when load decreases.
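As an illustration, here is a minimal sketch, assuming Python with the AWS boto3 SDK, of attaching a target-tracking scaling policy to an existing Auto Scaling group; the group and policy names are hypothetical:

```python
import boto3

# Minimal sketch: attach a target-tracking scaling policy to an existing
# Auto Scaling group so that capacity follows CPU load automatically.
autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-tier-asg",      # hypothetical ASG name
    PolicyName="cpu-target-tracking",         # hypothetical policy name
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        # Scale out/in so average CPU across the group stays near 50%.
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```

With a policy like this in place, the group adds instances when average CPU utilization climbs above the target and removes them as it falls, without operator intervention.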

Further, cloud providers offer significant discounts for sustained use and long-term commitments (instance reservations, sustained use discounts, etc.). So how do they manage their excess capacity? Some cloud providers have created marketplaces for that excess capacity in the form of spot or preemptible instances. These are available at significant discounts, but can be reclaimed when the provider needs the capacity back. Many customers find them useful for applications with dynamic loads.
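As a sketch of how spot capacity is consumed in practice, assuming Python with boto3 against AWS EC2 (the AMI ID is a placeholder), a spot instance can be requested through the regular launch call:

```python
import boto3

# Minimal sketch: launch a Spot instance via the standard run_instances
# call by specifying the spot market option.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",          # placeholder AMI ID
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            # One-time request; the instance may be reclaimed with a short
            # interruption notice when the provider needs capacity back.
            "SpotInstanceType": "one-time",
            "InstanceInterruptionBehavior": "terminate",
        },
    },
)
print(response["Instances"][0]["InstanceId"])
```

The trade-off is that the instance can be interrupted at short notice, so workloads running on spot capacity need to tolerate interruption.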

Time to “Go-Live”

In today’s dynamic business environment, cost is only one aspect of the hardware game; the other crucial aspect is time. In a traditional enterprise IT environment running in a private data center, once a BoM is created and an order is placed, it can take anywhere from six weeks to several months for the hardware to arrive. Once it does, data center staff need to rack and cable it (assuming power and cooling were already planned for during design). This adds significant lead time before the application can go live. If an application is being built in-house, the hardware procurement cycle also has to align with the release plan: if the application is delayed, the hardware sits idle, and if hardware procurement is delayed, there is business impact from the application not going live as expected.

In the cloud, this problem goes away, as resources can be provisioned within minutes. Hardware architecture and sizing can change and evolve to keep pace with application needs. Applications can be continuously tested and benchmarked against the underlying infrastructure using DevOps best practices to ensure that both are optimized, before and after go-live.
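For example, here is a minimal sketch, assuming Python with boto3 and a placeholder EC2 instance ID, of resizing an existing instance to a larger type as sizing needs evolve:

```python
import boto3

# Minimal sketch: resize an existing EC2 instance to a larger type.
# The instance must be stopped before its type can be changed.
ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"           # placeholder instance ID

ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "m5.xlarge"},      # move to a larger type
)

ec2.start_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
```

The equivalent change in a physical data center would mean procuring and racking new hardware; in the cloud it is an API call and a few minutes of downtime.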

Data Center Contract Expiration

Not all enterprises manage their own data centers. Some choose to host and run their workloads in third-party colocation data centers (colos) that provide rack space, redundant power, cooling, and network connectivity. In such cases, enterprises enter into multi-year contracts with the colos that are renewed periodically. Colo costs are a non-trivial part of an enterprise's overall IT budget.

Some enterprises start exploring the cloud as an alternative to renting colo space. In such cases, migrations can be aligned with the colo contract's expiration. Depending on the hardware footprint and design complexity, enterprises may start planning such migrations months to a year in advance.

Capacity Expansion

These days, businesses (especially online businesses) can see a surge in demand due to events like viral marketing campaigns. For businesses running their infrastructure in a data center, the need to expand capacity temporarily during such peaks can make the cloud an attractive option.

On the other hand, enterprises running their own data centers may need to expand capacity permanently. Instead of building out or renting more data center capacity, the enterprise may choose to move new applications to the cloud while running existing applications on-premises until the hardware reaches end-of-life. This is usually part of a larger cloud migration strategy toward a hybrid or full cloud model.

Digital Transformation

Many enterprises starting a digital transformation journey are modernizing their legacy applications and deployment processes to take advantage of modern technologies such as NoSQL, containers and microservices. For such architectures, the cloud's flexibility and scalability are often a natural fit. Enterprises usually take one of three paths to the cloud in this journey:

  • Migrate underlying infrastructure to cloud and then incrementally modernize the application
  • Modernize the application on the current infrastructure, prepare the application for the cloud, and then migrate
  • Do fresh development directly on the cloud

Partner or Compliance Driven Requirements

Some enterprises choose to migrate to the cloud because their business partners or customers are already leveraging the cloud and expect their partners to scale up to meet increasing, and sometimes viral, growth in customer demand. This is often the case in the retail, finance and logistics industries.

Contemporary cloud service providers (CSPs) are often referred to as hyperscalers; they operate at a far larger scale than most enterprises. Because they have to support a wide variety of workloads, they implement the security controls needed to conform to multiple security standards (see the AWS Cloud Compliance page). This makes it attractive for enterprises in compliance-driven industries to offload part of their infrastructure security to a cloud provider while focusing on application, data and operational security (see the Data Residency with AWS paper). These CSPs also implement security across their infrastructure more cost-effectively than enterprises can typically do in-house.

CSPs also provide important capabilities such as DDoS protection and web application firewalls (WAF) as managed services, which is very attractive for enterprises compared to investing in their own security software and infrastructure.
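As a hedged sketch of what consuming such a managed service can look like, assuming Python with boto3 and AWS WAFv2 (the web ACL name is hypothetical), an AWS-managed rule set can be attached to a web ACL in a few API calls rather than deploying dedicated appliances:

```python
import boto3

# Minimal sketch: create a regional web ACL that applies an AWS-managed
# common rule set, instead of running self-managed WAF appliances.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

response = wafv2.create_web_acl(
    Name="example-web-acl",                   # hypothetical name
    Scope="REGIONAL",                         # protects ALB/API Gateway resources
    DefaultAction={"Allow": {}},              # allow unless a rule blocks
    Rules=[
        {
            "Name": "aws-common-rules",
            "Priority": 0,
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesCommonRuleSet",
                }
            },
            "OverrideAction": {"None": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "aws-common-rules",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "example-web-acl",
    },
)
print(response["Summary"]["ARN"])
```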

This blog is part of our ongoing cloud series. To find out how GlobalLogic can help in your cloud migration journey, please reach out to us at cloud@globallogic.com.
