Navigating the Cloud: A Journey of Migrating Our Application to AWS Kubernetes (EKS) — Part 1

Alamsyah Ho
Published in Reflex Media
Jan 23, 2024

In the dynamic world of technology, businesses continuously seek ways to modernize applications, focusing on improved scalability, efficiency, and overall performance.

This narrative explores our migration journey at Reflex Media, shedding light on the challenges and triumphs of transitioning our application to AWS Managed Kubernetes (EKS), in Part 1 of a two-part series. Navigating the ever-evolving tech landscape, we encountered complexities and valuable lessons during this transformative process. From scalability enhancements to operational efficiency, the shift to AWS Managed Kubernetes reshaped how we manage and deploy our application.

This article stands as a testament to our team’s resilience, showcasing determination and adaptability in the face of application modernization challenges.

Assessment and Planning


The first step in our migration journey was a comprehensive assessment of our application. We scrutinized its architecture, dependencies, and potential bottlenecks, then drew up a detailed plan outlining the migration strategy, resource requirements, and potential roadblocks. Understanding the intricacies of our application set the foundation for a successful migration.

Kubernetes-Ready Architecture


A Kubernetes-ready architecture depends on stateless applications: migrating applications must not store session state or data inside containers. For the application logs essential to troubleshooting and debugging, we implemented Filebeat to forward everything to our ELK stack.
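
Filebeat's setup is environment-specific, so here is only a minimal sketch of the kind of configuration involved, assuming Filebeat has access to the standard container log path and ships to a Logstash endpoint (the hostname is a placeholder, not our actual address):

```yaml
# filebeat.yml: minimal sketch; the Logstash host is illustrative
filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*.log
    processors:
      # Enrich each event with pod, namespace, and label metadata
      - add_kubernetes_metadata:
          host: ${NODE_NAME}
          matchers:
            - logs_path:
                logs_path: /var/log/containers/

output.logstash:
  hosts: ["logstash.logging.svc:5044"]
```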

Stateless design allows seamless scalability, enabling applications to scale horizontally without the complexity of managing persistent state. This simplicity extends to dynamic orchestration, a core strength of Kubernetes, allowing our pods to auto-scale efficiently.
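
Because pods hold no state, scaling can be left entirely to Kubernetes. A minimal HorizontalPodAutoscaler sketch (the deployment name and thresholds are illustrative, not our actual values):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app                      # hypothetical deployment name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app                    # the stateless deployment to scale
  minReplicas: 3
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70     # add pods when average CPU exceeds 70%
```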

Additionally, statelessness enables easy deployment and replication by eliminating dependencies on specific states or stored data. This streamlines maintenance, supporting hassle-free upgrades and changes without compromising application state. Moreover, the fault-tolerant nature of stateless applications enhances system resilience, contributing to high availability in Kubernetes deployments.

Containerization


Containers emerged as the cornerstone of our migration strategy. Before the transition, keeping software and configuration consistent across environments (development, testing, staging, and production) was a complex task: a simple software upgrade required careful planning, coordination, and scheduling across all of them. With our application containerized, this process became seamless. Now, all we need to do is define the software version in the container build definition file (Dockerfile), ensuring uniformity across environments.
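
As a stripped-down illustration (the base image and packages are placeholders, not our actual stack), pinning the version once in the Dockerfile makes every environment build identically:

```dockerfile
# Bump the version here, and only here, to roll it out everywhere
FROM php:8.2-fpm-alpine

# Runtime dependencies are baked into the same image (list is illustrative)
RUN apk add --no-cache icu-libs \
    && docker-php-ext-install pdo_mysql

COPY . /var/www/html
```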

Ultimately, this not only facilitated a smooth transition to Kubernetes but also streamlined and optimized our development and deployment processes.

AWS EKS (Elastic Kubernetes Service) and Fargate

Opting for AWS EKS (Elastic Kubernetes Service) and Fargate reflects a strategic choice in the realm of cloud-native application deployment. AWS EKS stands out as a fully managed Kubernetes service, leveraging AWS’s expertise to handle infrastructure intricacies seamlessly. It assures high availability, robust security, and smooth integration with other AWS services, allowing us to concentrate on application development and deployment rather than Kubernetes management.

Complementing this, Fargate serves as a serverless compute engine designed for containers. By abstracting away underlying infrastructure concerns, Fargate simplifies operational complexities, enhances resource utilization, and automatically scales based on workload demands. This is particularly beneficial for organizations emphasizing ease of use, cost efficiency, and rapid deployment.
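
As a sketch of how the two come together, eksctl can declare the cluster and its Fargate profile in a single config file (the name, region, and namespaces below are placeholders, not our actual setup):

```yaml
# cluster.yaml: eksctl ClusterConfig sketch; names and region are illustrative
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: app-cluster
  region: us-west-2

fargateProfiles:
  - name: default
    selectors:
      # Pods in these namespaces are scheduled onto Fargate; no nodes to manage
      - namespace: default
      - namespace: kube-system
```

Running eksctl create cluster -f cluster.yaml then provisions the control plane and the Fargate profile together.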

Continuous Integration/Continuous Deployment (CI/CD)

The integration of CI/CD pipelines proved pivotal in automating our deployment processes. Before Kubernetes, our deployments relied on Jenkins and Ansible. While developing Ansible scripts to manage Kubernetes object definitions, however, a critical limitation surfaced: Ansible cannot remove objects that are no longer defined unless each one is explicitly declared with state: absent. After researching alternatives, we incorporated Helm charts into our Kubernetes deployment strategy.

In this setup, Helm serves as the package manager tracking all Kubernetes object definitions, orchestrating the deployment workflow alongside Ansible, while Jenkins acts as the user interface that simplifies deployment across environments. This allows our teams to independently deploy their feature or release branches to development or test environments within the Kubernetes cluster, improving overall efficiency and ease of use.
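
A rough sketch of that hand-off, using Ansible's kubernetes.core.helm module (the release name, chart path, and values are hypothetical, not our actual charts):

```yaml
# Ansible task sketch; Jenkins supplies env_name and build_tag as variables
- name: Deploy application chart to the target environment
  kubernetes.core.helm:
    name: web-app                     # Helm release name (hypothetical)
    chart_ref: ./charts/web-app       # chart tracked alongside the code
    release_namespace: "{{ env_name }}"
    create_namespace: true
    values:
      image:
        tag: "{{ build_tag }}"        # image built by the Jenkins job
```

Because Helm tracks every object in a release, anything removed from the chart is deleted on the next upgrade, which is exactly the cleanup our plain Ansible definitions lacked.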

Migration Strategy


Having relied on AWS Elastic Load Balancer (ELB) for several years, our migration strategy centered around seamlessly directing a portion of our production traffic to the application on our EKS cluster, all without disrupting our customers.

To achieve this, we employed the AWS Load Balancer Controller to manage the ELB from within our Kubernetes cluster. I won't delve into the details here, but the Kubernetes documentation explains how it works. In simplified terms, the controller automatically registers our pods in the ELB target group, which supports rolling deployments and enables zero-downtime releases.
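
One way this wiring can be expressed is with the controller's TargetGroupBinding resource, which attaches a Service's pods to an existing ELB target group (the names and ARN below are placeholders):

```yaml
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: web-app-tgb                # hypothetical name
spec:
  serviceRef:
    name: web-app                  # Service fronting the application pods
    port: 80
  targetType: ip                   # register pod IPs directly in the target group
  targetGroupARN: arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/web-app/0123456789abcdef
```

With the EC2 target group and this pod target group behind the same load balancer listener, weighted forwarding rules let us dial traffic between the two.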

This strategy also lets us distribute traffic between our EC2 instance target group and our Kubernetes pods target group. If anything goes wrong, we can swiftly disable the Kubernetes target group for an immediate rollback to the EC2 instances. During the migration, we did hit one issue: rate limiting on the Amazon-provided DNS resolver. Luckily, we identified it early, as only a small percentage of production traffic was directed at the Kubernetes target group. Had we cut all production traffic over to the Kubernetes cluster at once, it could have caused significant disruption for our customers.

Conclusion

Migrating a legacy application to AWS EKS is a challenging but rewarding journey. It demands careful planning, research, collaboration, and a willingness to embrace change. Our experience showcased the transformative power of cloud-native technologies, and the lessons learned have not only modernized our application but have also positioned us for future growth and innovation. As we navigate this new landscape, we recognize that the journey is ongoing, and we are excited about the possibilities that lie ahead in the cloud.

Stay tuned for Part 2, where I will share the challenges we faced post-migration and the strategies we employed to tackle and resolve each of them.
