Migrating a Monolith to Google Kubernetes Engine (GKE) — Migration Process

Get Cooking in Cloud

Priyanka Vergadia
Google Cloud - Community
7 min read · Feb 17, 2020


Authors: Priyanka Vergadia, Carter Morgan

Introduction

Get Cooking in Cloud is a blog and video series to help enterprises and developers build business solutions on Google Cloud. In this third miniseries, we cover migrating a monolith to Google Kubernetes Engine (GKE). Migrating a monolith to microservices can be intimidating. Once you decide to take it on, what should you consider? Keep reading…

In these articles, we will take you through the entire journey of migrating a monolith to microservices: the migration process, what to migrate first, the different stages of migration, and how to deal with data migration. Our inspiration for these articles is this solutions article. We will top it all off with a real customer story that walks through these steps in a real-world application.

Here are all the articles in this miniseries for you to check out.

  1. Migrating a Monolith to GKE: An Overview
  2. Migrating a monolith to GKE: Migration Process (this article)
  3. Migrating a monolith to GKE: Migrate in stages
  4. Migrating a monolith to GKE: What to migrate first?
  5. Migrating a monolith to GKE: Data migration
  6. Migrating a monolith to GKE: Customer Story

In this article, we will introduce you to the migration process. So, read on!

What you’ll learn

  • Why microservices?
  • Where to start the migration
  • Migration process

Prerequisites

  • Basic concepts and constructs of Google Cloud so you can recognize the names of the products.
  • Check out the introduction of Get Cooking in Cloud series.

Check out the video

Why microservices?

In the previous article we laid out a list of reasons why moving a monolith to microservices makes sense. Let’s review some of those:

  • A microservices-based architecture is beneficial because its loosely coupled components can be independently tested and deployed. In theory, the smaller and simpler a component is, the easier it is to maintain and deploy.
  • Each independent service can be implemented in a different language and framework, so you can use the right tool for the particular job you’re doing.
  • Each component can be managed by a different team, reducing dependencies between teams. This clear boundary between services also lets teams design for failure more easily: it becomes simpler to determine what to do if a service is down.

But does this mean microservices are perfect? No. Here is why: with a monolithic platform, every logical component can speak to every other component, pretty much by default, because they are all part of the same whole. With microservices, even though each individual component is simpler and easier to manage, how those services communicate and behave together as a system is more complicated, and possibly slower due to the added network latency.

Still, many, including Google, which has been doing this for many years, believe that microservices are a big win overall for most organizations. But if you don’t design your microservices correctly, you may end up with a distributed monolith that is even worse than the monolith you had in the first place. And the migration itself can take months.


Where to start?

So we know it is not going to be simple to migrate a monolithic application to microservices. To help simplify the process, let’s map the path from a monolithic, on-premises application to an application built with microservices and fully hosted on Google Cloud. Defining the starting and end states helps pave the path for the migration and lay out the migration process.

Note: If you have already migrated your monolithic application as-is to the cloud, the same steps we discuss here can be applied to create a microservices architecture.

Starting state

  • For most e-commerce platforms, the starting point is a monolithic application on-premises.
  • Your platform probably has load balancers to handle incoming requests from the web.
  • Those requests get routed to your application servers.
  • The application servers process each request using components like a cache, a database, and search.
  • Additionally, your application servers may send requests to other backend systems.

This is a generalized application flow; it can’t fully represent your current system, but it should be similar enough in principle that you can migrate it toward our target architecture.

End state

Now, let’s talk about the end state and what we are trying to achieve. We want the same functionality as in the monolith, except that instead of running a monolith, we break it up into individual microservices running in portable, deployable units of code called containers, on Google Kubernetes Engine (GKE), a platform that handles scaling, hosting, and deploying containers.

End state or desired state

In this architecture, each microservice runs in its own container and makes calls to the backend systems through a secure network connection. We still route internet traffic through a load balancer, but now traffic is routed into separate microservices instead of into the monolithic application.

This lets us update individual microservices while only minimally affecting other services. In our target architecture, the microservices may also interact with a number of other Google Cloud products; Cloud Storage, Cloud SQL, and Cloud Pub/Sub are common tools for e-commerce applications.
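To make the “individual microservices in containers” idea concrete, here is a minimal sketch of one such service: a single-purpose HTTP server (a hypothetical “cart” service) that would run in its own container behind the load balancer. The service name, paths, and port are illustrative assumptions, not part of any prescribed design.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class CartHandler(BaseHTTPRequestHandler):
    """One single-purpose service; other microservices (orders,
    search, ...) would be separate programs in separate containers."""

    def do_GET(self):
        if self.path == "/healthz":
            # Liveness-probe endpoint so the orchestrator can check the container.
            self._reply(200, b"ok", "text/plain")
        else:
            body = json.dumps({"service": "cart", "items": []}).encode()
            self._reply(200, body, "application/json")

    def _reply(self, status, body, content_type):
        self.send_response(status)
        self.send_header("Content-Type", content_type)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # silence per-request logging in this sketch

if __name__ == "__main__":
    # Containers conventionally listen on a single port; in GKE, a
    # Service and the load balancer route traffic to it.
    HTTPServer(("0.0.0.0", 8080), CartHandler).serve_forever()
```

Because each service is this small and self-contained, it can be built, deployed, and rolled back independently of the others.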

Migration Process

Now that we have the end state and the start state defined, how do we make the migration happen?

Communication between services on Google Cloud and the monolith on-premises

One of the most important decisions you must make early in such a migration is how to handle communication between the new microservices hosted on GKE and your legacy systems on-premises. Since we’re migrating in stages, there will be long periods when components of your platform live both in GKE and on-premises.

There are two main solutions, and they can co-exist:

Apigee between legacy services on-premises and new microservices in GKE
  • API-based connection: In this type of connection, you use an API management solution such as Apigee as a proxy between the two environments. This gives you precise control over what portions of your legacy systems you expose and how you expose them. It also lets you seamlessly refactor the implementation of an API (moving from a legacy service to a microservice) without impacting the consumers of the API.
Cloud VPN between legacy services on-premises and new microservices in GKE
  • Private network connection: In a solution based on private connectivity, you connect your Google Cloud and on-premises environments using a private network connection. The microservices communicate with your legacy systems over this connection. You can set up IPsec-based VPN tunnels with Cloud VPN. For higher bandwidth, high availability, and low-latency needs, Cloud Interconnect is a better option.

Now, let’s compare and contrast the two communication options.

  • Compared to private connectivity, an API-based solution is implemented by the application teams and requires greater integration with the legacy application from the get-go. So it is harder to set up, but it provides more management options in the long run.
  • On the other hand, a solution based on Cloud VPN or Cloud Interconnect is implemented by a networking team and initially requires less integration with the legacy application. But it does not provide any added value in the long term.

Now that we know more about these communication options, we can combine the two approaches and use them to our benefit. For example, we may use Apigee only for the public APIs — in front of the microservices and legacy systems, but not between them — and we may use Cloud VPN or Cloud Interconnect for the communication between the microservices and legacy systems. This option combines the benefits of the two approaches, but is more complex to manage.
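One way to keep this choice flexible in application code is to treat the legacy system’s address as configuration rather than hard-coding it: the same microservice code then works whether the call goes through an Apigee-managed API proxy or straight to a private address over Cloud VPN or Cloud Interconnect. The sketch below assumes a hypothetical `LEGACY_BASE_URL` environment variable; all names and URLs are illustrative.

```python
import os
from urllib.parse import urljoin

def legacy_url(path: str) -> str:
    """Build the URL for a call into the legacy on-premises system.

    The base URL comes from configuration, so switching between an
    API proxy and a private network route needs no code change:
      LEGACY_BASE_URL=https://api.example.com/legacy/  (Apigee proxy)
      LEGACY_BASE_URL=http://10.0.0.12:8080/           (via Cloud VPN)
    """
    base = os.environ.get("LEGACY_BASE_URL", "http://10.0.0.12:8080/")
    return urljoin(base, path.lstrip("/"))
```

With this in place, migrating a legacy endpoint behind the proxy, or moving it into GKE entirely, is a configuration change for its consumers rather than a code change.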

Another choice you will have to make when migrating is which Google Cloud technologies to use for databases. For example, do you want to use Cloud Bigtable to store data, or do you want to keep using some of your current technologies, because they’re still relevant to you or because the cost of moving them is too high? You should think about these choices as part of the migration process. Of course, the answer will depend very specifically on your scenario.
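One pattern that keeps this database decision reversible is to put data access behind a small interface, so the storage backend (the legacy database today, perhaps Cloud SQL or Cloud Bigtable later) can be swapped without touching the services that use it. The interface and names below are a hypothetical sketch, not a prescribed design.

```python
from abc import ABC, abstractmethod
from typing import Optional

class OrderStore(ABC):
    """Narrow interface the microservices code against; swap the
    implementation when (and if) the data moves."""

    @abstractmethod
    def get(self, order_id: str) -> Optional[dict]: ...

    @abstractmethod
    def put(self, order_id: str, order: dict) -> None: ...

class InMemoryOrderStore(OrderStore):
    """Stand-in backend for the sketch; a class backed by the legacy
    database, Cloud SQL, or Cloud Bigtable would implement the same
    two methods."""

    def __init__(self):
        self._rows = {}

    def get(self, order_id):
        return self._rows.get(order_id)

    def put(self, order_id, order):
        self._rows[order_id] = order
```

Because consumers only see `OrderStore`, the data migration can happen on its own schedule, one backend at a time.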

Conclusion

The main point: migrating a monolith to microservices on GKE is a complex process that won’t happen overnight. Planning ahead (How will my services communicate? How will I manage data? What should I migrate first?) helps make sure the process goes smoothly.


If you’re looking to migrate your existing monolithic platform to the cloud, you now have a small taste of the migration process. Stay tuned for more articles in the Get Cooking in Cloud series.

