Microservices & DevOps at AirAsia

Microservices and DevOps play a key role in the cloud era: they help scale web applications, cut costs, reduce manual intervention, and streamline the deployment process. And yes, by implementing both at Airasia.com, we’ve seen significant results.
This blog post focuses on the fundamentals and the end architecture of our microservices and DevOps pipelines. If you understand them and find them relevant, then the implementation is just a small journey!
Let’s get started…
Microservices at AirAsia:
Microservices (Microservice Architecture): According to Chris Richardson, it’s an architectural style that structures an application as a collection of services that are:
- Highly maintainable and testable
- Loosely coupled
- Independently deployable
- Organized around business capabilities
- Owned by a small team
Now, it’s time to take a dive into the microservices architecture for one of our projects, Checkin. Yes, you’ve heard it right, AirAsia Web Checkin.

AirAsia Web Checkin relies on tens of microservices, written in Node.js and Python, deployed on Google App Engine. And yes, this architecture has been helping us scale the web application so that millions of guests can complete their Web Checkin successfully.
GAE is a managed service provided by Google: you pay only for the infrastructure capacity your app uses and need not worry about scaling.
Let’s see some of the advantages of Google App Engine (GAE):

- Supports multiple languages.
- Allows managing resources from the command line.
- Runs multiple versions of the app at the same time and can split traffic across them.
- Automatically manages instances across multiple availability zones.
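To illustrate the “need not worry about scaling” point, here is a minimal, hypothetical app.yaml fragment (the service name and the limits are invented for illustration) showing how automatic scaling can be tuned in the App Engine standard environment:

```yaml
# Hypothetical example: tuning automatic scaling in the App Engine standard environment.
runtime: nodejs10
service: checkin-api          # hypothetical service name
automatic_scaling:
  min_instances: 1            # keep one warm instance to reduce cold starts
  max_instances: 20           # cap spend during traffic spikes
  target_cpu_utilization: 0.65
  max_concurrent_requests: 40
```

With a block like this, App Engine adds or removes instances on its own as traffic changes.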
Every service in the app has its app.yaml file, which acts as a descriptor for deployment. A sample file for Node.JS is shown below:
runtime: nodejs10 # Name of the runtime used by the app
instance_class: F4_1G # Instance class for the service
env_variables: # Environment variables made available to the app
  BUCKET_NAME: "bucket-name-1"
handlers: # List of URL patterns and how to handle them
- url: /stylesheets
  static_dir: stylesheets
- url: /.*
  secure: always
  redirect_http_response_code: 301
  script: auto
But in this constantly changing cloud era, we’re evaluating more options such as Cloud Functions, Kubernetes, etc. to further cut our costs and improve overall performance.
With tens of microservices comes another challenge, this time around managing these services, speeding up the deployment process, testing the code before deployment, and much more. The next section focuses on these problems.
DevOps at AirAsia:
With a small team for most of our projects and a fast-paced environment, the manual deployment process for tens of microservices used to cost our development teams a lot of time.
Leveraging the power of GitLab CI, we’ve implemented continuous integration and continuous deployment for those tens of microservices running on Google Cloud across multiple environments (Staging, PreProduction, Production), while keeping security in mind.
Before diving deep into the DevOps pipeline, let’s understand some of the terminology:
Continuous Integration (CI): With every push to the remote repository, scripts automatically build and test the application for errors.
Continuous Delivery: In addition to CI, the app is deployed continuously, though the final deployment step requires manual intervention.
Continuous Deployment (CD): Deployment happens automatically, with no manual step.
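In GitLab CI terms, the difference between continuous delivery and continuous deployment often comes down to a single keyword; the sketch below uses hypothetical job names and placeholder scripts:

```yaml
# Continuous delivery: the deploy job waits for someone to trigger it manually.
Deploy With Approval:
  stage: deploy
  script: deployment scripts
  when: manual

# Continuous deployment: the same job runs automatically once earlier stages pass.
Deploy Automatically:
  stage: deploy
  script: deployment scripts
  when: on_success # this is also the default
```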
Pipelines: Top-level component of continuous integration, delivery, and deployment.
Pipelines comprise:
- Jobs, which define what to run, e.g. code compilation, test runs, or deployment.
- Stages, which define when to run the jobs, e.g. deployment runs after the build stage.
Let’s have a look at a sample pipeline having sequential and parallel jobs.

This pipeline installs dependencies (Job 1) in the first sequential stage and then deploys to multiple environments (Jobs 2 and 3 in parallel) in the second sequential stage, using shared or dedicated runners.
The above pipeline can be achieved by having the following lines in the .gitlab-ci.yml file.
stages: # Define the stages in the pipeline, which can be used by jobs.
  - install
  - deploy preprod prod # Multi-stage pipelines are created using these stages.
Install: # Job 1
  image: npm image path # Used to specify the Docker image for the job
  stage: install
  script: npm install # Shell script executed by the runner
  artifacts: # List of files/directories to attach to the job
    paths:
      - node_modules/
    expire_in: 30 days
  only: # Include the job only if all the conditions are matched.
    refs:
      - pre-production
      - production
Deploy PreProduction: # Job 2
  variables: # Job-specific variables
    GAE_SERVICE_NAME: Service A
    APP_YAML: app.yaml
  stage: deploy preprod prod # Stage linking part
  image: image path
  dependencies:
    - Install
  script:
    - deployment scripts
  artifacts:
    paths:
      - $APP_YAML
    expire_in: 30 days
  only:
    refs: # Job is created for a production branch schedule or run
      - production
Deploy Production: # Job 3
  variables:
    GAE_SERVICE_NAME: Service A
    APP_YAML: app.yaml
  stage: deploy preprod prod # The same stage name makes jobs run in parallel
  image: image path
  dependencies:
    - Install
  script:
    - deployment scripts
  artifacts:
    paths:
      - $APP_YAML
    expire_in: 30 days
  only:
    refs:
      - production
And yes, a lot of things like testing, sending success/failure messages to collaboration platforms, decrypting secret files, etc. can be clubbed in as jobs in between or at the end. We’ve done all of that, which has helped us remove manual intervention and has sped up our go-live process.
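As a hedged sketch of such extra jobs (the job names and the CHAT_WEBHOOK_URL variable are hypothetical; $CI_COMMIT_REF_NAME is a predefined GitLab CI variable), a test job and a notification job could look like this:

```yaml
Test: # Runs alongside Install; a dedicated "test" stage could also be added to the stages list.
  stage: install
  image: npm image path
  script: npm test

Notify Success:
  stage: .post # Built-in stage that always runs last in the pipeline.
  script:
    # CHAT_WEBHOOK_URL is a hypothetical CI/CD variable pointing at a collaboration platform.
    - 'curl -X POST -H "Content-Type: application/json" -d "{\"text\": \"Pipeline succeeded for $CI_COMMIT_REF_NAME\"}" "$CHAT_WEBHOOK_URL"'
  when: on_success
```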
These were our learnings, which might help you in building your next project.
Clap if you found it useful!
