Improving my AWS Cloud-Based Airline Booking Service: From Prototype to Production 😎

Nate
8 min read · Jun 23, 2024


Introduction

Hey everyone 👋, I’m back! In this blog, I will show you how I transformed my prototype airline booking project from something that looked like a simple college assignment into a robust, production-ready application! First, let’s recap where we left off before diving into the improvements.

Recap of Part 1

This is a continuation of my work in Part 1. If you haven’t already read it, here is the link to the Medium article 😊, or you can get a TL;DR below 🙄.

In Part 1, we built a cloud-based airline booking service using FastAPI for our API, Docker for containerizing our services, and SQLAlchemy to manage our database. The project was structured into four main services: Booking, Payment, Notification, and Flight Management. The API let us do things like handle flight bookings, manage payments, and keep track of seat availability. While functional, there were many areas for improvement, such as scalability, reliability, and better integration with cloud services to handle larger volumes of data and traffic.

Project Overview

In Part 2, my goal was to make this project production-ready by leveraging various AWS services and integrating additional technologies better suited to a production environment!

Tech Stack — Added in Part 1

  • FastAPI: For building the API.
  • SQLAlchemy: For database interactions.
  • Docker: For containerizing the application.
  • Uvicorn: As the ASGI server to run FastAPI applications.

Tech Stack — Added in Part 2 😎

  • Spring Boot: For building the Flight Management Service, improving the backend with a scalable Java framework.
  • AWS EC2: For hosting backend services.
  • AWS ECS: For container orchestration.
  • AWS S3: For storing static assets and media files.
  • AWS RDS: For managing relational databases with PostgreSQL.
  • AWS DynamoDB: For handling payment transactions.
  • AWS Lambda: For implementing serverless functions.
  • AWS SQS: For message queuing.
  • AWS CloudWatch: For monitoring logs and metrics.
  • GitHub CI/CD: For automating the build, test, and deployment process.

Integrations

Let’s talk about why we integrated each one and what cool things we can do with them!

Spring Boot

I decided to integrate Java’s backend framework, Spring Boot, into my flight management service, backed by a PostgreSQL database on AWS RDS. First, I generated a Maven project via Spring Initializr and downloaded it into my project folder. I then created the necessary files for my API. The last step was to compile and run the project.

src/main/java/com/example/flightmanagementservice/ directory | You can see all the files in the GitHub repo, linked at the bottom of the article

After that, I tested my API using Postman. Let’s first create a flight in our database!

POST request: http://localhost:8080/api/flights

Now that I have created a flight in my database, let’s use the get-all-flights endpoint.

GET request: http://localhost:8080/api/flights

The request above returns all existing flights from our database. We can even update an existing flight via the flight’s ID! Let’s change our arrival location from Los Angeles to San Francisco!

PUT request: http://localhost:8080/api/flights/2 (2 is our flight ID)
PUT request RESPONSE

The existing flight’s arrival location is now changed from Los Angeles to San Francisco! Lastly, if we want to delete a flight from our database, we can use the DELETE method.

DELETE request: http://localhost:8080/api/flights/2

Our flight from New York to San Francisco is now deleted from our database.
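
If you’d rather script these checks than click through Postman, a few lines of Python do the job. This is just a quick smoke-test sketch: the payload field names are assumptions, and I’m assuming the create endpoint echoes back the saved flight with its ID, so check the Flight entity in the repo for the real schema.

```python
# Quick smoke test for the Flight Management Service endpoints.
# Payload field names are assumptions; see the Flight entity in the repo.
import requests

BASE_URL = "http://localhost:8080/api/flights"

# Create a flight
new_flight = {
    "origin": "New York",
    "destination": "Los Angeles",
    "departureTime": "2024-07-01T09:00:00",
    "arrivalTime": "2024-07-01T15:00:00",
}
created = requests.post(BASE_URL, json=new_flight).json()
flight_id = created["id"]  # assumes the response echoes the saved flight

# List all flights
print(requests.get(BASE_URL).json())

# Update the arrival location to San Francisco
created["destination"] = "San Francisco"
requests.put(f"{BASE_URL}/{flight_id}", json=created)

# Delete the flight
requests.delete(f"{BASE_URL}/{flight_id}")
```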

AWS EC2

AWS EC2 (Elastic Compute Cloud) is where we initially hosted our containers using Docker Compose. By using EC2, we had full control over our computing resources and were able to set up and configure the environment according to our needs.

Our containers were successfully deployed on our EC2 instance!
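
For context, here is roughly what that Compose file looks like. This is a trimmed-down sketch with placeholder image names and ports; the real docker-compose.yml is in the repo.

```yaml
# Trimmed-down sketch of the Docker Compose setup on the EC2 instance.
# Image names and ports are placeholders; the real file is in the repo.
version: "3.8"
services:
  booking_service:
    image: airline/booking-service:latest        # placeholder image name
    ports:
      - "8000:8000"
  payment_service:
    image: airline/payment-service:latest
    ports:
      - "8001:8000"
  notification_service:
    image: airline/notification-service:latest
    ports:
      - "8002:8000"
  flight_management_service:
    image: airline/flight-management-service:latest
    ports:
      - "8080:8080"
```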

AWS ECS

AWS ECS (Elastic Container Service) is a fully managed container orchestration service. We used ECS along with our CI/CD pipeline to automate the deployment process. ECS handles the provisioning and management of the underlying infrastructure, making it easier to scale and manage our containerized applications.

Running on all cylinders!
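
In my setup, deployments are kicked off by the CI/CD pipeline (more on that below), but just to illustrate how little ECS asks of you, the same redeploy can be triggered with a single boto3 call. The cluster and service names here are placeholders, not the real ones.

```python
# Illustrative only: force an ECS service to roll out fresh tasks.
# Cluster and service names are placeholders.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

ecs.update_service(
    cluster="airline-cluster",     # placeholder cluster name
    service="booking-service",     # placeholder service name
    forceNewDeployment=True,       # start new tasks (re-pulls a mutable tag like :latest)
)
```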

AWS Lambda

AWS Lambda allowed us to do something really cool! Whenever a user creates a booking, our notification service uses SQS to pick up the request. We then created a custom Lambda function so that an email with the booking details is automatically sent out!

POST booking/ request sent by user
Booking confirmation email received immediately after!

Isn’t that so cool 🤩! I decided to attach an image of our Lambda function since it lives in the AWS environment rather than in our repo.

lambda function
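
For anyone curious what a handler like that can look like, here is a minimal sketch. I’m assuming the email goes out through SES and that each SQS message body is the booking JSON; the sender address and field names are placeholders, and the actual function in my AWS account differs in the details.

```python
# Minimal sketch of an SQS-triggered Lambda that emails booking details.
# Assumes SES for email and that each message body is the booking JSON;
# the sender address and field names are placeholders.
import json
import boto3

ses = boto3.client("ses")

def lambda_handler(event, context):
    for record in event["Records"]:              # one record per SQS message
        booking = json.loads(record["body"])
        ses.send_email(
            Source="noreply@example.com",                      # placeholder sender
            Destination={"ToAddresses": [booking["email"]]},   # assumed field name
            Message={
                "Subject": {"Data": "Your booking is confirmed!"},
                "Body": {"Text": {"Data": json.dumps(booking, indent=2)}},
            },
        )
    return {"statusCode": 200}
```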

AWS S3

AWS S3 (Simple Storage Service) was used for our /upload/{booking_id} and /download/{filename} endpoints. S3 provides scalable storage for static assets and media files. This allowed us to upload any document from our PC for a booking (think of a customer uploading their passport) and later download that same file by its filename!

Let’s upload a simple .txt file named “sample-document1.txt”.

Payload for POST /upload/{booking_id} request (returned 200 status)

Now, let’s download that same file by sending a POST /download/{filename} request.

Payload for POST /download/{filename} request

Let’s send the request!

Response for POST /download/{filename} request

And once we click on the Download file button,

Automatically downloads file to my local machine

it downloads the file to my PC. Thanks, AWS S3!
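
Under the hood, these endpoints are thin wrappers around boto3. Here is a rough sketch of what they can look like; the bucket name and response shapes are assumptions, so check the booking service code in the repo for the real implementation.

```python
# Rough sketch of the upload/download endpoints wrapping S3 with boto3.
# Bucket name and response shapes are assumptions.
import boto3
from fastapi import FastAPI, UploadFile

app = FastAPI()
s3 = boto3.client("s3")
BUCKET = "airline-booking-documents"  # placeholder bucket name

@app.post("/upload/{booking_id}")
async def upload_document(booking_id: int, file: UploadFile):
    # Store the document in S3 under its filename
    s3.upload_fileobj(file.file, BUCKET, file.filename)
    return {"message": f"{file.filename} uploaded for booking {booking_id}"}

@app.post("/download/{filename}")
async def download_document(filename: str):
    # Hand back a short-lived presigned URL instead of streaming the bytes
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": filename},
        ExpiresIn=3600,
    )
    return {"download_url": url}
```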

AWS RDS w/ PostgreSQL

AWS RDS (Relational Database Service) with PostgreSQL was used to manage our relational databases. We used RDS for both our booking_service (FastAPI) and our flight_management_service (Spring Boot). RDS handles database management tasks such as backups, patch management, and scaling.

The 2 databases we used via AWS RDS
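
On the FastAPI side, pointing SQLAlchemy at RDS is mostly a connection-string change. Here is a sketch with a placeholder endpoint, credentials, and database name:

```python
# Sketch: wiring the booking service's SQLAlchemy engine to RDS PostgreSQL.
# Endpoint, credentials, and database name are placeholders.
import os
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

DATABASE_URL = os.getenv(
    "DATABASE_URL",
    "postgresql://booking_user:password@bookings-db.abc123.us-east-1.rds.amazonaws.com:5432/bookings",
)

engine = create_engine(DATABASE_URL, pool_pre_ping=True)  # drop dead connections before use
SessionLocal = sessionmaker(bind=engine, autocommit=False, autoflush=False)
```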

AWS DynamoDB

AWS DynamoDB is a fully managed NoSQL database service. We used DynamoDB for our payment_service to handle payment transactions. DynamoDB offers low latency and scalability, making it ideal for processing high volumes of transactions quickly and reliably.

Our Payments database
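
Writing a transaction from the payment service comes down to a single put_item call. Here’s a sketch; the table name and attribute names are assumptions rather than exactly what the service uses.

```python
# Sketch of recording a payment in DynamoDB from the payment service.
# Table name and attribute names are assumptions.
import uuid
from datetime import datetime, timezone

import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
payments_table = dynamodb.Table("Payments")  # placeholder table name

def record_payment(booking_id: int, amount: str) -> str:
    payment_id = str(uuid.uuid4())
    payments_table.put_item(
        Item={
            "payment_id": payment_id,   # assumed partition key
            "booking_id": booking_id,
            "amount": amount,           # kept as a string to sidestep float issues
            "status": "COMPLETED",
            "created_at": datetime.now(timezone.utc).isoformat(),
        }
    )
    return payment_id
```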

AWS SQS

We used AWS SQS (Simple Queue Service) to help us pick up requests sent to the API. We went with a first-in-first-out (FIFO) queue to make sure a scenario such as overbooking doesn’t occur.

Once this was enabled, we could use SQS to trigger our Lambda function, which sent out an email once a user confirmed a booking!

Message our FIFO queue receives once a user sends the POST booking/ request
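
Because it is a FIFO queue, every message needs a MessageGroupId (the ordering scope) and a deduplication ID. Here is a sketch of how the booking service might publish that message; the queue URL and booking field names are placeholders.

```python
# Sketch: publishing a booking event to the FIFO queue with boto3.
# Queue URL and booking field names are placeholders.
import json
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/bookings.fifo"  # placeholder

def publish_booking(booking: dict) -> None:
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps(booking),
        MessageGroupId=f"flight-{booking['flight_id']}",    # keep per-flight ordering
        MessageDeduplicationId=str(booking["booking_id"]),  # guard against duplicate sends
    )
```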

AWS CloudWatch

For detailed logs and metrics, we used AWS CloudWatch to monitor and log the performance and health of our application. This was really helpful when I kept running into errors while trying to integrate AWS ECS.

CloudWatch logs

GitHub CI/CD

I set up a CI/CD pipeline using GitHub Actions to automate the build, test, and deployment process. This pipeline ensures that every change pushed to the repository is automatically tested and deployed to our ECS cluster, streamlining the development and deployment workflow. I had to add my AWS Access Key and AWS Secret Key as repository secrets on GitHub to authenticate.

My Repository Secrets

After that, I wrote the ci-cd.yml and edited my task-definition.json file. Lastly, I just needed to make a commit and push to my repo. This would start the building process and make my task go live!

All the steps in my pipeline
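
For a sense of its shape, here is a stripped-down sketch of the workflow using the official aws-actions steps. The secret names, region, cluster/service names, and the single-service build are all simplified or assumed; the real ci-cd.yml is in the repo.

```yaml
# Stripped-down sketch of ci-cd.yml; names and regions are placeholders,
# and the real workflow in the repo has more steps.
name: CI/CD

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1

      - name: Build the Docker image
        run: docker build -t booking-service .   # tagging/pushing to a registry goes here

      - name: Deploy to ECS
        uses: aws-actions/amazon-ecs-deploy-task-definition@v1
        with:
          task-definition: task-definition.json
          service: booking-service               # placeholder service name
          cluster: airline-cluster               # placeholder cluster name
          wait-for-service-stability: true
```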

Going Even Further 🚀

While this project has come a long way, there’s always room for improvement and new features. Here are some ideas I have in mind that I might implement in the future:

  • Implement OAuth 2.0 for secure authentication and authorization w/ AWS Secrets Manager.
  • Use AWS AppSync to provide real-time updates to clients such as viewing live updates on flight availability and status.
  • Use AWS X-Ray to trace and analyze requests as they travel through my application.
  • Build out the API much more extensively!

Challenges and Solutions

This took a lot of time. Here are some of the hurdles I faced and how I overcame them 😵:

  • Initially, integrating multiple AWS services was a bit overwhelming. Each service has its own set of configurations and best practices. I spent a significant amount of time reading documentation and experimenting with different configurations. AWS CloudWatch was incredibly helpful for debugging and understanding what was going wrong.
  • Setting up the CI/CD pipeline was another challenge. I had to ensure that the pipeline correctly built and deployed the Docker containers to ECS. The key was to break down the process into smaller steps and test each one individually. Once I was confident that each step worked correctly, I combined them into a single workflow.
  • I encountered numerous permission and network errors along the way. Often, I had to check and adjust permissions for IAM roles and inbound rules for security groups. These checks became routine, and over time, I developed a better understanding of AWS security practices.

API Docs

If you would like to see the API docs for my project, you can view each service’s docs here (courtesy of Swagger UI):

Links to the GitHub Repo and my LinkedIn

GitHub Repo

My LinkedIn

Thanks for reading, and feel free to reach out if you have any questions or feedback!
