Migrating Legacy Applications to the Cloud

Introduction:
We have been on a quest to build better systems. On-demand virtualization has led to the rise of cloud computing over the last decade or so. Cloud computing is a powerful, transformational change with real and substantial benefits such as reduced infrastructure cost, elasticity, scalability, and better reliability. Cloud-native applications are designed to exploit these advantages.

That said, there is a large volume of legacy applications in existence today. These systems have been running for decades, and enterprises have used them to serve critical business needs. They are hosted on-premises, in the organization’s datacenter(s). Typically, it is a challenge to keep these legacy applications running because they use outdated technologies. There are several reasons why it makes sense to upgrade these systems and migrate them to the cloud. Some of them are:

  • Support for Modern Technologies: As mentioned earlier, these systems tend to use outdated technologies that are unlikely to be supported in the near future, so there is no guarantee that they can be operated reliably going forward. Security is another major concern, as these legacy systems are unlikely to receive security updates. Moreover, it is hard to find skilled people versed in the underlying technologies these applications use.
  • Elasticity support: Consider a scenario where the load on a system spikes during certain periods (e.g., the Thanksgiving holidays for most e-commerce sites) but is otherwise moderate. This forces the organization to keep buffer capacity that sits idle for most of the year yet still costs money to maintain. Capacity planning is a challenge on most on-premises systems, including legacy systems. The cloud’s “pay-as-you-go” model eliminates this problem (a back-of-the-envelope comparison follows this list).
  • Geographic scalability: Further, if an organization wants to expand into new geographies, it is very cumbersome to do so with these legacy systems: it involves huge capital costs to build the physical infrastructure required to set up and run them. Moving to the cloud converts capital costs into operational costs and eliminates the need to maintain expensive datacenters, equipment, and the manpower to support them. New virtualized infrastructure can be provisioned on demand in datacenters across the globe in a few clicks.
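
To make the elasticity point concrete, here is a minimal back-of-the-envelope sketch in Python. All prices, server counts, and spike durations below are hypothetical and exist only to illustrate the shape of the comparison.

    # Back-of-the-envelope cost comparison: peak-provisioned on-premises capacity
    # vs. pay-as-you-go cloud capacity. All numbers below are hypothetical.

    HOURS_PER_YEAR = 24 * 365

    peak_servers = 100          # capacity needed only during holiday spikes
    baseline_servers = 20       # capacity needed for the rest of the year
    spike_hours = 30 * 24       # roughly one month of peak load per year

    on_prem_cost_per_server_hour = 0.50   # amortized hardware + datacenter + staff
    cloud_cost_per_server_hour = 0.60     # on-demand price, slightly higher per hour

    # On-premises: you pay for peak capacity all year round.
    on_prem_total = peak_servers * HOURS_PER_YEAR * on_prem_cost_per_server_hour

    # Pay-as-you-go: baseline capacity most of the year, peak only during spikes.
    cloud_total = (
        baseline_servers * (HOURS_PER_YEAR - spike_hours) * cloud_cost_per_server_hour
        + peak_servers * spike_hours * cloud_cost_per_server_hour
    )

    print(f"On-premises (peak-provisioned): ${on_prem_total:,.0f}/year")
    print(f"Cloud (pay-as-you-go):          ${cloud_total:,.0f}/year")

Even with a higher per-hour price in this toy example, paying only for the capacity actually used comes out far cheaper than provisioning for the peak all year round.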

While it may seem that the best path forward is to migrate these legacy applications to newer platforms, it is far easier said than done. Modern enterprises have large and complex IT infrastructure, and it is very risky, not to mention expensive, to replace a fully working system with a new one. Often these enterprises defer the task of migration until the system stops functioning. This is generally a bad strategy: by then the need for a replacement is urgent, while the new system is years away from being operational.

The approach that works best in practice is planning followed by a slow, gradual transition to the cloud. This article discusses the details of this approach. Note that everything discussed here is intended to serve as guiding principles only and should not be interpreted as hard-and-fast rules. Each legacy system is different, so the migration process needs to be tailored to the specific needs of the organization and the unique challenges and constraints it faces.

Overview of Migration Process:
We have long taken advantage of advances in technology to migrate our systems to more capable platforms. The fundamental process involves understanding the benefits of the new technology, assessing the gaps in the existing system, planning, and migrating. Modern enterprises have large and complex IT systems with many interconnected parts, and the prospect of moving such an application to a newer platform can be intimidating because of the sheer magnitude of change involved.

Migration is a complex and scary undertaking, but it needs to be done. The right approach is to chip away small parts of the system and move them to the new platform one at a time. We can conveniently divide the migration process into three distinct phases:

  • Discovery & Planning
  • Application Migration
  • Verification & Operation

The migration follows an “Agile” model as opposed to a “Waterfall” model. The entire migration should be divided into small cycles of work, like sprints in agile methodology, and all three phases should be repeated in each cycle. Note that in contrast to agile development, where a sprint typically lasts a few weeks, in legacy migration each cycle generally lasts a few months.

Vendor Evaluation & Selection:
Before we talk about migrating our legacy application, we need to decide where we will migrate to. There is a wide choice of cloud vendors in the market today, the dominant ones being Amazon AWS, Microsoft Azure, Google Cloud Platform, and IBM Bluemix.

An enterprise should choose the vendor that makes the most business sense for it. For example, if the organization is already running SharePoint, Exchange Server, etc., Azure probably makes the most business sense.

Regardless of the choice of vendor, it is always a good idea to avoid vendor lock-in. There are various ways to do so. Modern architectural patterns like microservices can be used, with each microservice packaged into a Docker container that can be easily moved across vendors. For applications that are going to be rearchitected, a layered design should be used where each layer has a well-documented interface. All platform-specific features should be kept in a separate layer, away from the business logic. This has the additional benefit that if the application later needs to be migrated to a new platform, only that layer needs to be re-implemented and the rest of the application continues to work.
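
As an illustration of such a layered design, here is a minimal Python sketch in which the business logic depends only on an abstract storage interface, while the vendor-specific details live in a separate, replaceable layer. The class and method names are hypothetical and not taken from any particular SDK.

    # Minimal sketch of a layered design that isolates platform-specific code.
    # Only the storage layer changes when switching cloud vendors.

    from abc import ABC, abstractmethod


    class BlobStore(ABC):
        """Platform-agnostic interface used by the business logic."""

        @abstractmethod
        def put(self, key: str, data: bytes) -> None: ...

        @abstractmethod
        def get(self, key: str) -> bytes: ...


    class InMemoryBlobStore(BlobStore):
        """Stand-in implementation; a real one would wrap a vendor SDK."""

        def __init__(self):
            self._data = {}  # key -> bytes

        def put(self, key: str, data: bytes) -> None:
            self._data[key] = data

        def get(self, key: str) -> bytes:
            return self._data[key]


    class InvoiceService:
        """Business logic: knows nothing about which cloud it runs on."""

        def __init__(self, store: BlobStore):
            self._store = store

        def archive(self, invoice_id: str, contents: bytes) -> None:
            self._store.put(f"invoices/{invoice_id}", contents)

Swapping vendors then means re-implementing only the storage layer; InvoiceService and the rest of the business logic remain untouched.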

An open-source container-orchestration system like Kubernetes, which most public cloud vendors support, can be used to automate the deployment, scaling, and management of containerized applications.
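
As a minimal sketch of what this looks like in practice, the snippet below uses the official Kubernetes Python client to create a small Deployment. The image name, namespace, and replica count are placeholders for illustration only.

    # Minimal sketch: creating a Kubernetes Deployment with the official
    # Python client (pip install kubernetes). Image, namespace, and replica
    # count are placeholders.

    from kubernetes import client, config


    def create_deployment(name: str, image: str, replicas: int = 3) -> None:
        # Uses your local ~/.kube/config; code running inside a cluster would
        # call config.load_incluster_config() instead.
        config.load_kube_config()

        container = client.V1Container(
            name=name,
            image=image,
            ports=[client.V1ContainerPort(container_port=8080)],
        )
        template = client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": name}),
            spec=client.V1PodSpec(containers=[container]),
        )
        spec = client.V1DeploymentSpec(
            replicas=replicas,
            selector=client.V1LabelSelector(match_labels={"app": name}),
            template=template,
        )
        deployment = client.V1Deployment(
            api_version="apps/v1",
            kind="Deployment",
            metadata=client.V1ObjectMeta(name=name),
            spec=spec,
        )

        apps_v1 = client.AppsV1Api()
        apps_v1.create_namespaced_deployment(namespace="default", body=deployment)


    if __name__ == "__main__":
        create_deployment("legacy-billing", "registry.example.com/legacy-billing:1.0")

Because the same Deployment definition works on any conformant Kubernetes cluster, the containerized application can be moved between vendors with little change.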

Discovery & Planning:
The first phase of the migration process is to understand the IT portfolio and the interdependencies within it. This can be a cumbersome task, as documentation is scattered and some systems might not be documented at all. Because these systems are very old, the people who knew them well have often long since left the organization, and there are plenty of “I didn’t know we had this too!” moments.

As migration is best done in an agile manner, the overall goal is to identify smaller subsystems that can be migrated in isolation. This might sound trivial, but it often is not, as there are strong interdependencies between IT components. Most IT systems are quite rigid: removing one component can cause breakdowns in unexpected parts of the system. Unfortunately, there is no easy solution to this problem, but it nevertheless needs to be solved.

Generally, the best way forward is to identify the components that are simplest or least risky to migrate, as these are easy to accomplish and generate positive reinforcement, or “quick wins”. As the migration continues, the organization builds confidence and accumulates the necessary learnings, and eventually the more complex components can be tackled. By the end of the discovery & planning phase, we should have identified the subsystems to migrate one by one.
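
One way to make this concrete is to inventory the components and their dependencies, then rank them by how entangled and risky they are. The sketch below uses made-up component names, dependencies, and risk scores, and a deliberately naive scoring rule, purely to illustrate the idea.

    # Naive sketch: rank components for migration order by counting dependencies.
    # Component names, dependencies, and risk scores are made up for illustration.

    from collections import defaultdict

    # component -> components it depends on
    dependencies = {
        "reporting":        ["billing", "crm"],
        "billing":          ["mainframe-ledger"],
        "crm":              [],
        "intranet":         [],
        "mainframe-ledger": [],
    }

    # How many other components depend on each component (fan-in).
    dependents = defaultdict(int)
    for component, deps in dependencies.items():
        for dep in deps:
            dependents[dep] += 1

    # Subjective business risk of touching each component (1 = low, 5 = high).
    risk = {"reporting": 2, "billing": 4, "crm": 3, "intranet": 1, "mainframe-ledger": 5}

    def migration_score(component: str) -> int:
        """Lower score = better 'quick win': few dependencies, few dependents, low risk."""
        return len(dependencies[component]) + dependents[component] + risk[component]

    for component in sorted(dependencies, key=migration_score):
        print(f"{component:18s} score={migration_score(component)}")

In this toy example the standalone intranet comes out first while the heavily depended-upon mainframe ledger comes last, which matches the “quick wins first” intuition.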

Application Migration:
We discuss the five most common strategies for migrating an application to the cloud. They are:

  • Lift & Shift: This strategy involves moving an application to the cloud without any modifications. For example, an application running on a physical server can be moved to a virtual server in the cloud pretty much as it is. The complexity of an application is a key factor in deciding whether it should be lifted and shifted or re-architected. Commercial off-the-shelf applications and apps with easily defined patterns are often good candidates for Lift & Shift.
  • Replatforming: Replatforming involves moving an application from its existing platform to a new one while preserving its existing functionality; not every application needs the full benefit of being cloud native. A common example is moving from WebLogic (which requires an expensive license) to Apache Tomcat, an open-source equivalent. Packaging applications running on physical hardware into containers is another good example in this category.
  • Outsource to external vendor: There are plenty of companies that provide SaaS platforms for common, routine functions, e.g., Workday for HR and Salesforce.com for CRM. It may be worth moving such systems to these external vendors, which eliminates the need to run them yourself.
  • Refactoring/Rearchitecting: There are times when you need to re-implement parts (or the whole) of an application from scratch. Common reasons include frameworks or languages that are no longer supported or are very hard to maintain. For example, many applications in finance and insurance use COBOL, a business-oriented language from 1959 that very few people know today. It is best to replace these applications with cloud-native applications running on a modern platform. As mentioned above, cloud-native applications should follow a layered design, with platform-specific features implemented in their own layer for easy portability.
  • Retire: In legacy systems, we often find components that have existed for a long time but serve no real purpose. This can happen for several reasons, such as new systems being deployed without the old ones being decommissioned, or use cases that existed 20 years ago but are no longer needed today. Either way, these systems take resources and money to run and should be retired to free up those resources.
    Any migration uses a combination of the above strategies applied to different parts of the system. Again, it must be highlighted that there is no general strategy that can be applied everywhere; the strategy has to be chosen on a case-by-case basis.
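
As a rough illustration of this case-by-case decision, the sketch below encodes a few of the rules of thumb from the list above as a simple function. The attributes and rules are hypothetical; a real assessment involves far more judgment than any such script can capture.

    # Rough sketch: mapping an application's attributes to one of the five
    # strategies. The attributes and rules are hypothetical rules of thumb.

    from dataclasses import dataclass


    @dataclass
    class AppProfile:
        still_needed: bool          # does anyone actually use it?
        saas_equivalent: bool       # does a SaaS product (CRM, HR, ...) cover it?
        unsupported_stack: bool     # e.g. COBOL, unmaintained frameworks
        off_the_shelf: bool         # commercial, well-understood package
        needs_cloud_native: bool    # requires elasticity or heavy rework to scale


    def choose_strategy(app: AppProfile) -> str:
        if not app.still_needed:
            return "Retire"
        if app.saas_equivalent:
            return "Outsource to external vendor"
        if app.unsupported_stack or app.needs_cloud_native:
            return "Refactor/Rearchitect"
        if app.off_the_shelf:
            return "Lift & Shift"
        return "Replatform"


    if __name__ == "__main__":
        legacy_hr = AppProfile(True, True, False, False, False)
        print(choose_strategy(legacy_hr))   # -> Outsource to external vendor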

Verification & Operation:
Once we have the application migrated (and tested with sample data), the next step is to put it into operation. Before we do that, we need to take care of data migration.

A system consists of an application and its data. We have talked about application migration; let’s now focus on data migration. There are two parts to data migration, both of which can run in parallel. The first part is migrating the old or archival data. This involves provisioning a cloud data store and writing custom tools that copy the data from the on-premises database to the cloud data store.
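
A minimal sketch of such a custom tool is shown below. It reads rows in batches through a generic DB-API connection and hands them to a hypothetical cloud_table.insert_batch writer; in practice that writer would wrap whichever cloud data store SDK you have chosen.

    # Minimal sketch of a one-off archival data migration tool.
    # `source_conn` is any DB-API 2.0 connection (e.g. pyodbc, psycopg2);
    # `cloud_table` is a hypothetical wrapper around your cloud store's SDK.

    BATCH_SIZE = 1000


    def migrate_table(source_conn, cloud_table, table_name: str) -> int:
        """Copy all rows of `table_name` to the cloud store in batches."""
        cursor = source_conn.cursor()
        cursor.execute(f"SELECT * FROM {table_name}")   # table_name must be trusted
        columns = [desc[0] for desc in cursor.description]

        migrated = 0
        while True:
            rows = cursor.fetchmany(BATCH_SIZE)
            if not rows:
                break
            records = [dict(zip(columns, row)) for row in rows]
            cloud_table.insert_batch(records)           # hypothetical bulk-write call
            migrated += len(records)
        return migrated

A real tool would also handle schema mapping, retries, and verification of row counts on both sides, but the batched read-transform-write loop is the core of it.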

The second part is migrating live data. It turns out that migrating data in a live system is like changing the tires on a car running on a highway. In some systems, an event bus is used for communication between components; the easiest approach in that case is to have a service that subscribes to the event bus and writes the events it receives to the cloud data store. In the absence of an event bus, a custom solution that does the same needs to be developed.
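
The sketch below shows the shape of such a mirroring service. The event_bus.subscribe iterator and the cloud_table writer are hypothetical stand-ins; the details depend entirely on which message broker and data store you actually use.

    # Minimal sketch of a service that mirrors live events into the cloud store.
    # `event_bus` and `cloud_table` are hypothetical stand-ins for your actual
    # message broker client (Kafka, RabbitMQ, ...) and cloud data store wrapper.

    import json
    import logging

    logger = logging.getLogger("live-mirror")


    def mirror_events(event_bus, cloud_table, topic: str) -> None:
        """Consume events from the bus and write each one to the cloud store."""
        for raw_event in event_bus.subscribe(topic):    # hypothetical blocking iterator
            try:
                event = json.loads(raw_event)
                cloud_table.insert_batch([event])       # hypothetical bulk-write call
            except Exception:
                # A real implementation would retry or dead-letter the event;
                # here we just log and keep the mirror running.
                logger.exception("failed to mirror event from topic %s", topic)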

Once the data is migrated, we can start to deploy the application. Deployment is a very large topic, so we will only cover it at a high level. Typically, we do a staged deployment, where we gradually roll out the service to a small fraction of users rather than releasing it to everyone at once. This has the benefit that if something fails, it will not affect all users. Moreover, we keep the legacy system in operation, in case we need to revert to it if something fails in the new application.
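
One simple way to implement such a staged rollout at the routing layer is to hash each user ID and send only a configurable percentage of users to the new service, as in the sketch below. The service URLs and the rollout percentage are placeholders.

    # Minimal sketch of a staged (canary-style) rollout decision.
    # Hashing the user ID keeps each user consistently on the same backend,
    # and ROLLOUT_PERCENT can be raised gradually as confidence grows.

    import hashlib

    ROLLOUT_PERCENT = 5          # start by sending 5% of users to the new system
    LEGACY_URL = "https://legacy.internal.example.com"
    CLOUD_URL = "https://app.cloud.example.com"


    def backend_for(user_id: str) -> str:
        """Deterministically route a user to the legacy or the migrated service."""
        bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
        return CLOUD_URL if bucket < ROLLOUT_PERCENT else LEGACY_URL


    if __name__ == "__main__":
        for uid in ("alice", "bob", "carol"):
            print(uid, "->", backend_for(uid))

If problems appear, setting ROLLOUT_PERCENT back to zero effectively reverts everyone to the legacy system, which is one reason the legacy system stays in operation during the rollout.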

Even after the deployment is complete, the old legacy systems are kept in operation for a while, in case more issues are found later and we need to roll back.

Conclusion:
In conclusion, enterprises today rely on many mission-critical systems that are hosted on-premises and use legacy technologies. It is a challenge to keep these systems running because the technologies are outdated and hiring people who know them is incredibly hard.
Unfortunately, migrating these systems is itself a challenge. Often enterprises defer the task of migration until the system stops functioning. This is risky: once the system stops functioning, the need for a replacement becomes urgent while the new system is years away from operation.

The way to deal with this problem is a slow, gradual transition to the cloud, started well ahead of time. This article has discussed strategies that can be used to migrate legacy systems to the cloud using this approach.

If you need help migrating your legacy applications to the cloud, please feel free to contact me. My details are on my webpage: www.nikhilbarthwal.com