Considerations when migrating applications from OpenShift to AWS EKS

Oscar Salazar
Globant
8 min read · Mar 29, 2023

When companies look to save money, IT is often one of the departments affected, particularly if the savings are substantial. In my case, this occurred when management decided to discontinue support for some platforms that had not been used for an extended period, leading to the closure of our datacenter infrastructure. One of these platforms was OpenShift, Red Hat's platform that leverages Kubernetes to build, deploy, and manage containerized applications using a microservices architecture. OpenShift housed numerous applications that required urgent migration within a limited timeframe. The primary objective was to transfer all applications from OpenShift Container Platform to EKS while maintaining their original behavior, including autoscaling, container allocation for each microservice, and sensitive data storage, all within a short period of time. In other words, the focus was solely on migrating the applications, with no additional alterations.

My purpose with this article is to share some key considerations to keep in mind when migrating applications from OpenShift to AWS EKS.

Prerequisites

  • An existing OpenShift platform with the applications to be migrated
  • An AWS account to create the EKS cluster
  • A Jenkins instance, where all the CI/CD pipelines for the applications are hosted
  • Git repositories to host the application code and the YAML files used to deploy the containers. The company works with GitLab

Considerations at platform level

When doing this type of migration, you need to define what infrastructure or platform you want to use next. Questions arise like "What are we going to move?" and "How big must the new 'home' be?". These questions must be resolved to migrate your applications smoothly and without issues. Now, let's get started!

Where are you planning to move the applications?

When migrating applications from OpenShift, the first thing to keep in mind is where you are planning to move them. This company had been using AWS as its main cloud provider for a long time and decided to migrate all the applications running on OpenShift to AWS EKS. The aim was to move from a fully supported and managed platform (inside their datacenter) with its own license and support contract, to a cloud environment using IaaS, PaaS, and all of their benefits.

AWS EKS is a managed Kubernetes service for hosting and managing containerized applications in AWS. Applications can be deployed using YAML files that define their desired behavior, and deployment can be automated with third-party tools like Jenkins, Helm, FluxCD, or ArgoCD.

How should existing environments be managed?

As OpenShift was installed on-premises, all the environments for each application coexisted in the same OpenShift instance, isolated only by namespaces. Each environment was given a different namespace in OpenShift; here is an example:
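The original example is not reproduced here, but the single-cluster layout can be illustrated like this (hypothetical application and namespace names):

```
NAMESPACE     ENVIRONMENT
myapp-dev     Development
myapp-qa      QA
myapp-prod    Production
```

All three namespaces lived in the same OpenShift instance, so an outage in the cluster affected every environment at once.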

The main goal was to isolate the environments from each other. To achieve this, an EKS cluster was created for each existing environment, which also improves availability when outages occur.

How will you create the EKS clusters and all the needed infrastructure?


In this project, CloudFormation was used to define all the resources needed by the EKS cluster (including VPCs, storage, etc.) as Infrastructure as Code (IaC). IaC keeps the infrastructure immutable and fully controlled, avoiding configuration drift and the need to create resources manually.

A repository was created to host all the CloudFormation templates for the creation of the EKS cluster and all required resources.
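As a minimal sketch of what such a template can look like (resource names are hypothetical, and the IAM role and subnets are assumed to be defined elsewhere in the same stack):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal EKS cluster sketch for one environment

Resources:
  DevEksCluster:
    Type: AWS::EKS::Cluster
    Properties:
      Name: dev-cluster                      # hypothetical cluster name
      Version: '1.24'                        # pick the Kubernetes version you need
      RoleArn: !GetAtt EksClusterRole.Arn    # IAM role defined elsewhere in the stack
      ResourcesVpcConfig:
        SubnetIds:
          - !Ref PrivateSubnetA              # subnets defined elsewhere in the stack
          - !Ref PrivateSubnetB
```

One stack per environment keeps each cluster reproducible and makes it easy to tear an environment down without touching the others.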

Infrastructure and resource sizing

Sizing the resources is one of the most important tasks in a migration. There is a fine line: cross it and you will be billed for unused capacity, so you should also keep in mind the expected growth of the applications. To avoid excess billing, it's highly recommended to use auto-scaling rules. Sizing the VPC CIDR block range correctly is one example: a mistake here can force you to recreate the whole cluster once it is already in use.
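One way to codify those auto-scaling rules per microservice is a HorizontalPodAutoscaler; a minimal sketch (the application name and thresholds are illustrative, not taken from the project):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp                 # hypothetical microservice name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2              # floor: keep the service available
  maxReplicas: 10             # ceiling: cap the spend
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```

Combined with a node-level autoscaler, this lets the cluster start small and grow with demand instead of being pre-sized for peak load.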

Take the time to get the sizing right, and share it with your team. You may need several meetings to align all the teams and gather the information needed to determine how big the cluster (or clusters) must be.

Considerations for applications

Now that we have reviewed the platform-level considerations, let's focus on what you have to do to migrate your applications.

How should I start?

The first thing to do is focus on just one application! Now, the question is: which one? Start with a simple application: choose one with few or no dependencies, such as DB access, configuration data provided by another application, or APIs. Focus only on getting that application built and deployed in EKS, and make sure the containers are in a healthy state. Once you have migrated it, you will realize that you can repeat the same steps for a large portion of the remaining applications.

Translate the YAML files from OpenShift to Kubernetes

When migrating from one containerized platform to another, it's common to create the containers and define their behavior using YAML files. Both OpenShift and EKS use this configuration format, but there are some differences between equivalent configuration objects, so you need to "translate" the objects from OpenShift into something readable by EKS. Here is an example of a DeploymentConfig object for OpenShift and how it must be rewritten for EKS as a Deployment object:
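A minimal, hypothetical side-by-side (the application name and image are illustrative; the actual project manifests were larger):

```yaml
# OpenShift (DeploymentConfig)
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: myapp               # hypothetical application name
spec:
  replicas: 2
  selector:                 # a DeploymentConfig takes a plain label map
    app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.0
  triggers:                 # OpenShift-only field, dropped for EKS
    - type: ConfigChange
```

```yaml
# Kubernetes / EKS (Deployment)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:            # a Deployment requires the matchLabels form
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.0
```

Beyond `apiVersion` and `kind`, note the structural changes: the selector format differs, and DeploymentConfig-only fields such as `triggers` have no Deployment equivalent.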

Identify the parameters in each file that you need to replace, rename, or remove. For example, the apiVersion and kind values must be replaced to make the deployment file understandable to Kubernetes.

If you want to see the files used in the previous diff, go here for the OpenShift manifest and here for the EKS one.

Keep in mind that, depending on the number of applications to be migrated, there could be many files to translate. It's highly recommended to create a script that makes the required changes and adjustments.
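A minimal sketch of such a script, assuming the simplest case where only the apiVersion/kind swap is needed; structural differences (the selector format, DeploymentConfig-only fields like triggers) would still need manual review or a dedicated tool:

```shell
#!/bin/sh
# Translate an exported OpenShift DeploymentConfig manifest (on stdin)
# into a Kubernetes-flavored manifest (on stdout). Only the apiVersion
# and kind lines are rewritten; everything else passes through unchanged.
translate_manifest() {
  sed \
    -e 's|^apiVersion: apps.openshift.io/v1$|apiVersion: apps/v1|' \
    -e 's|^kind: DeploymentConfig$|kind: Deployment|'
}

# Usage: translate_manifest < openshift/myapp.yaml > eks/myapp.yaml
```

Looping this over every exported manifest (`for f in openshift/*.yaml; do …; done`) turns a repetitive, error-prone task into a repeatable one.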

Please note that some resources required to deploy into OpenShift are not needed to deploy into EKS.

Secrets, are there any secrets?


You need to define how secrets, certificates, and all other sensitive data will be managed in EKS.

It's common for applications to interact with each other, and depending on the security implemented, you will need certificates, tokens, and passwords. Remember that the primary objective was to "keep everything as similar as possible", so the tokens and certificates used in communication between applications and web pages were stored in Kubernetes Secrets.
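A minimal sketch of such a Secret (hypothetical names and a placeholder value; `stringData` lets you write the plain value and have Kubernetes base64-encode it on creation):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: myapp-api-token     # hypothetical secret name
type: Opaque
stringData:
  token: "replace-me"       # placeholder; the real token is injected at deploy time
```

The application then consumes the secret as an environment variable or a mounted file, exactly as it did on OpenShift.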

For deploying applications in EKS, the credentials for the repositories were stored in AWS Secrets Manager. Planning how you will store and access sensitive data gives you an idea of what infrastructure you must create and support.

Pipelines, pipelines!

Now that you have the first application deployed in EKS, it's time to automate the process with Continuous Integration (CI). The main goal here was to build the application in the same way Jenkins had built it.

You need to know the location of the repositories and container registries, and consider that you can use existing tools to control and automate the CI process. In this case, since the SCM tool was GitLab, there was no need to manage extra credentials for CI, and GitLab CI/CD was the tool chosen for this task.
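A hedged sketch of what such a pipeline can look like in `.gitlab-ci.yml`, assuming Docker-in-Docker builds and GitLab's predefined registry variables (the stage and job names are illustrative, not the project's actual pipeline):

```yaml
stages:
  - build

build-image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind          # Docker-in-Docker for image builds
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```

Because `CI_REGISTRY_USER` and `CI_REGISTRY_PASSWORD` are predefined by GitLab for its own registry, no extra credentials need to be provisioned for the CI side; delivery is handled separately.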

Continuous Delivery, any idea?

The goal here is to automate Continuous Delivery: deploying the application into the EKS cluster by automatically synchronizing configurations stored in a Git repository with the clusters. With OpenShift, Continuous Delivery was done by Jenkins, but the company decided to shut down all Jenkins instances and adopt new technologies for CI/CD. GitLab CI was the first option, but since it does not have a native tool for Continuous Delivery, and FluxCD had already been implemented in other projects, FluxCD was chosen.

With FluxCD, there are many ways to define how you want to deploy the application based on rules and policies. In this migration, two were used in conjunction.

Monitor repository path

FluxCD can monitor a defined path inside a repository: if it detects any change under that path, it redeploys the application with the new changes. To configure this feature, you can use the following commands:

First, create the source git:

flux create source git <name_of_your_source> --url=<repository_URL> --branch=<branch_to_monitor> --namespace=<target_namespace_to_deploy>

Then, create the “kustomization” pointing at the source created in the first command. Note that the --path parameter is where you define the path to be monitored.

flux create kustomization <name_of_your_kustomization> --source=<name_of_source_created> --path=<repository_path_to_monitor> --prune=true --interval=5m --namespace=<target_namespace_to_deploy>

Monitor Docker image tags

This policy lets FluxCD monitor the images in a container registry based on their tags; you define which tags will be deployed, for example with a semver range. To create this kind of policy, follow these steps:

Create the image repository:

flux create image repository <name_of_your_image_repository> --image=<container_registry_name> --interval=5m --namespace=<target_namespace_to_deploy>

Create an image policy pointing at the image repository created in the first command. The --select-semver flag tells FluxCD which image tags it should monitor. In this example, the monitored tags are greater than or equal to 1.0:

flux create image policy <name_of_your_image_policy> --image-ref=<name_of_image_repository_created> --select-semver='>=1.0' --namespace=<target_namespace_to_deploy>

It's recommended to define more than one rule or policy for each application repository.

Conclusions

Properly sizing the new infrastructure is crucial when migrating applications. It’s essential to take the time to make accurate estimates; otherwise, you may face unnecessary rework that could have been avoided with a solid resource estimate.

Translating YAML files from OpenShift to EKS is a critical task to successfully deploy applications. Defining a strategy for controlling application deployment in EKS is also necessary. While there are various methods for achieving this, we used FluxCD as our tool of choice. However, it’s important to note that this is not the only option available for managing deployments.
