Production-ready CI/CD setup with Azure DevOps, AWS, and Terraform
Continuous Integration and Continuous Deployment (CI/CD) are common practices in modern agile development, but the concept has no fixed, one-size-fits-all solution. It requires you to understand the core concepts and value of CI/CD practices and adapt them to your own process. It is very easy to get started, but it takes a lot of planning, design, and collaboration to make CI/CD fit your environment well. In this article, we will explore a production-ready CI/CD setup using the Azure DevOps platform, with Terraform deploying into an AWS environment.
Why Azure DevOps and AWS
Before we start, you may wonder why we are not using AWS services for the CI/CD setup. The main reason is that Azure DevOps provides a fully integrated experience for the development team: it covers the workflow from issue and ticket management through development, testing, and deployment. A fully integrated platform lets the team find everything they need in one place, and it reduces the waste caused by context switching.
What you are expected to know
This story will not teach you the basics of CI/CD. CI/CD is a very common topic, and you can easily find plenty of introductory material elsewhere. In this story, we focus on the advanced options needed to run a scalable and maintainable CI/CD architecture. You are expected to know the following:-
- Basic CI/CD knowledge
- Basic YAML scripting
- Basic Terraform scripting
- Basic Azure DevOps configuration
- Working experience with AWS Elastic Container Service (ECS)
- Working experience with Docker
- An understanding of the microservices concept
- An understanding of the 12-Factor App principles
- Working experience with any branching strategy
Project prerequisites
If you want to follow along, you need the following tools and accounts:-
- Azure DevOps Account
- AWS Account
- Terraform 1.0+ installed
- Docker installed
Sample Project
This post uses a sample project from here, so that we can configure the CI/CD around real project requirements. The sample app, node-micro-app-demo, consists of two APIs: a Product API and a User API. It was developed as a NodeJS Express backend with TypeScript. Each service has its own data storage, using Postgres with the Prisma framework.
High-Level Objective
The following objectives are set to control our scope of work:-
- Set up a Continuous Integration pipeline to validate code changes
- The Continuous Integration pipeline should generate new artifacts after the code is validated
- The Continuous Deployment pipeline should deploy when a configuration change is made
- Infrastructure components should be managed with Continuous Deployment
High-Level Diagram
The diagram above shows the high-level integration flow using the AWS ECS Fargate architecture. The CI/CD architecture uses the Azure DevOps platform to manage source code and pipelines.
CICD Design and Planning
We will walk through all the design and planning factors to give more context to the entire CI/CD architecture.
Repository Strategy
This topic asks the following questions:-
- What type of repository do we choose for this project?
- How many repositories do we need for this project?
Answer:-
- We will use a monorepo setup for this project. Compared to a multi-repo setup, a monorepo has many benefits; the main ones we care about here are simplified module sharing and atomic commits when performing large-scale refactoring. You might have doubts when you first encounter monorepos, but believe me, technology giants such as Facebook and Google run their product development successfully on monorepos.
- We will have 3 repositories for this project: The World repository, the Application repository, and the Infra repository. The World repository stores all the settings needed to configure Azure DevOps, such as pipeline definitions, repository permissions, and more; the term "The World" conveys the idea that it is the first repository you need to configure. The main reason for splitting the Application and Infra configuration into separate repositories is to align with the 12-Factor App principles, specifically Principle I (Single Codebase), Principle III (Config in the Environment), and Principle V (Build, Release, and Run separation).
Branching Strategy
Every team and project must agree on a branching strategy and follow it. Team members must adhere tightly to the development workflow, and the CI/CD workflow will be built around the branching strategy. There are many types of branching strategies; for the simplicity of this post and the CI/CD demonstration, we will use Trunk-Based Development (TBD). TBD enforces simplicity and frequent code integration to reduce the chance of merge hell.
Dynamic Deployment or GitOps Approach
There are two ways to tell your CI/CD system which version of the code to deploy. The first is dynamic deployment, which you have probably seen as the latest tag on an image artifact, or as a variable that stores the version you want to deploy; you can configure or pass this value through a pipeline variable or the global settings of your CI/CD platform. The GitOps approach is the alternative practice, in which every configuration change is recorded in the Git system and must be reviewed through a pull request. In this demo project, we will use the GitOps approach to record the image version.
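To make the idea concrete, here is a hedged sketch of what such a version record could look like in the Infra repository. The file name and fields are hypothetical (the demo project keeps the image URI in its Terraform configuration), but the mechanism is the same: CI opens a pull request that bumps the image value, and CD deploys whatever gets merged.

```yaml
# Hypothetical version record in the Infra repository (names are illustrative).
# CI raises a pull request that updates `image`; CD deploys the merged value.
service: product-app
image: 123456789012.dkr.ecr.ap-southeast-1.amazonaws.com/demo/product-app:20210901.1
```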
Pipeline Configuration Approach
There are two approaches to pipeline configuration. The first is configuring pipelines from the UI, which is the most common and easiest way to get started, but it suffers from problems such as poor scalability. The other is the Pipeline as Code approach, which addresses the scalability issue and aligns better with the developer experience of managing configuration as code. In this project, we will use the Pipeline as Code approach.
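As a minimal sketch of what Pipeline as Code looks like on Azure DevOps (the trigger path and build steps below are illustrative, not the demo project's actual definition):

```yaml
# azure-pipelines.yml — a minimal CI definition stored alongside the code.
trigger:
  branches:
    include:
      - main           # Trunk-Based Development: integrate to main frequently
  paths:
    include:
      - product-api/*  # hypothetical path: build only when this service changes

pool:
  vmImage: ubuntu-latest

steps:
  - task: NodeTool@0
    displayName: Install Node.js
    inputs:
      versionSpec: '14.x'
  - script: |
      npm ci
      npm test
    displayName: Install dependencies and run tests
```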
Evolutionary Database Design
In agile development, teams often overlook how database changes should be managed in an agile or automated way; changes and migrations of databases are considered critical actions, so people tend to manage them manually. With Evolutionary Database Design, database changes can be managed the same way we manage application code, through Continuous Integration and Continuous Deployment. In this project, we will demonstrate Evolutionary Database Design with Continuous Deployment and Prisma migrations.
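At its core, a hedged sketch of such a migration job is a single Prisma command run from the pipeline against the target database (the variable name below is an assumption, not the demo project's exact setup):

```yaml
# Apply pending Prisma migrations as a Continuous Deployment step.
steps:
  - script: npx prisma migrate deploy
    displayName: Apply database migrations
    env:
      DATABASE_URL: $(PRODUCT_DB_URL)  # assumed secret variable with the connection string
```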
Create The World Repository and Pipeline
The World Repository, or the God Repository if you prefer, is the GitOps repository that manages everything we need to configure in Azure DevOps. It is the only repository, along with its pipeline, that you need to create manually; every other Azure DevOps configuration relies on this repository.
Create Terraform S3 Backend
We use Terraform to provision all the configuration in The World Repository. To persist your Terraform state while sharing it across multiple team members, you need persistent storage for the state file. There are multiple ways to set up your Terraform backend; in our case, we will use the S3 backend with a DynamoDB lock table. Configure your AWS credentials locally and follow the steps below.
Create the CloudFormation script in your project root as backend.yml:-
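The embedded script from the original post is not reproduced here, but a minimal sketch of the template could look like the following. It matches the two parameters used in the create-stack command below; note that Terraform's S3 backend expects the lock table's partition key to be a string attribute named LockID.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: S3 bucket and DynamoDB lock table for the Terraform backend
Parameters:
  StateBucketName:
    Type: String
  LockTableName:
    Type: String
Resources:
  StateBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Ref StateBucketName
      VersioningConfiguration:
        Status: Enabled  # keep state history in case of a bad write
  LockTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: !Ref LockTableName
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: LockID  # required key name for Terraform state locking
          AttributeType: S
      KeySchema:
        - AttributeName: LockID
          KeyType: HASH
```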
Create the backend with AWS CLI:-
aws cloudformation create-stack --stack-name terraform-backend-setup --template-body file://backend.yml --parameters ParameterKey=LockTableName,ParameterValue=terraform-state-lock ParameterKey=StateBucketName,ParameterValue=terraform-s3-state-random
Check your stack status with the command:-
aws cloudformation describe-stacks --stack-name terraform-backend-setup
If your stack creation is not successful, in most cases your S3 bucket name is not available (bucket names are globally unique); change the bucket name and try again.
Install Azure DevOps extensions
Install the following extensions in your Azure DevOps organization so that your account can run Terraform and AWS tasks:-
- https://marketplace.visualstudio.com/items?itemName=ms-devlabs.custom-terraform-tasks
- https://marketplace.visualstudio.com/items?itemName=AmazonWebServices.aws-vsts-tools
Create New Account and Project
This project requires an Azure DevOps account; make sure you register one here if you do not have one yet. Then create a project for this exercise as below:-
For demo purposes, it is advisable to choose Public visibility so that you get 10 parallel jobs on Microsoft-hosted agents.
Create a new AWS Service Connection
A Service Connection is a setting that lets your pipelines access external services such as AWS securely. Go to Project settings > Service connections > New service connection, and fill in the AWS access key and secret key you want Terraform to use. Name the service connection AWS_CONNECTION if you want to follow my steps exactly.
Generate Personal Access Token
A PAT, or personal access token, is a special secret that lets a tool act as a service account against the Azure DevOps service. We will generate one PAT for Terraform to provision our repositories and pipelines in CI/CD. Go to your profile and open the personal access token page.
Create a new token with Agent Pools (Read & Manage), Code (Full), and Build (Read & Execute) permissions.
Store your PAT somewhere safe; we will use it later.
Configure variable group
A variable group is a set of environment variables used during pipeline execution; we need to create a few of them to configure our pipelines properly. Navigate to Pipelines > Library > + Variable group.
Create a new variable group named common-vars and fill in the following values (a snippet showing how a pipeline consumes this group follows the list):-
- AZDO_PERSONAL_ACCESS_TOKEN — <the PAT you generated earlier; mark it as a secret value>
- ECR_CONNECTION_NAME — AWS_CONNECTION
- ECR_PROJECT_NAME — demo
- ECR_REGION_NAME — ap-southeast-1
- NODE_BUILD_VERSION — 14.x
- POOL_IMAGE — ubuntu-latest
- PROD_AWS_CONN — AWS_CONNECTION
- PROD_AWS_REGION — ap-southeast-1
- TF_BACKEND_S3_BUCKET — terraform-s3-state-random
- TF_BACKEND_S3_DYNAMODB_TABLE — terraform-state-lock
- TF_BACKEND_S3_REGION — ap-southeast-1
- TF_VERSION — 1.0.5
- VSTS_ACCOUNT — <your Azure DevOps organization name>
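For reference, a pipeline consumes this group with a single variables entry, which makes every setting above available as $(NAME):

```yaml
variables:
  - group: common-vars  # exposes e.g. $(TF_VERSION) and $(POOL_IMAGE) to the pipeline
```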
Import The World repository
Go to Repos, select Import repository as below:-
Clone from https://github.com/jazztong/microservice-theworld and name the new repository microservice-theworld.
If you browse the file azdevops/vars.tf, you will see default values already filled in for the sake of running this demo. You are free to change them to your own project name and a different import URL.
Configure Project Pipeline permission
The pipelines cross-reference repositories within the project, and they need to auto-create Environments to record deployment history. Enable the relevant pipeline settings under Project settings > Pipelines > Settings > General as below:-
Configure The World pipeline
We will configure the pipeline so that any change in The World repository is automatically deployed to our project. Go to Pipelines > Create Pipeline > Azure Repos Git > select The World repository > select the existing pipeline file as below, then click Save and run:-
On the first run, you will be asked to grant the pipeline permission to the environment and variable group resources.
The following resources will be created after a successful pipeline run:-
- 1 agent pool
- 2 imported repositories
- 9 pipelines
Provision environment
In this step, we will trigger the pipelines one by one to build up the environments. We trigger either a Continuous Integration pipeline to build artifacts or a Continuous Deployment pipeline to deploy changes to an environment. Follow the trigger order below, as there are dependencies between the pipelines.
Base Continuous Deployment Pipeline
Base CD refers to the base infrastructure Continuous Deployment pipeline. It manages all the base infrastructure configuration with Terraform, such as (see the pipeline sketch after this list):-
- ECS Cluster
- Application Load Balancer
- Roles and Policies
- Security Groups
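The real pipeline definitions live in the imported repositories, but the core of a Terraform CD job reduces to roughly the following. This sketch uses plain script steps for clarity; the project itself uses the Terraform extension tasks installed earlier, and the working directory is a hypothetical path.

```yaml
steps:
  - script: |
      terraform init \
        -backend-config="bucket=$(TF_BACKEND_S3_BUCKET)" \
        -backend-config="dynamodb_table=$(TF_BACKEND_S3_DYNAMODB_TABLE)" \
        -backend-config="region=$(TF_BACKEND_S3_REGION)"
      terraform plan -out=tfplan
      terraform apply -auto-approve tfplan
    displayName: Provision base infrastructure
    workingDirectory: infra/base  # hypothetical path to the base Terraform module
```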
After the pipeline is complete, you can verify the result in your AWS account as below:-
Database Continuous Deployment Pipeline
The database is a dependency for the services, so we need to provision it before the services start. Run the Database CD pipeline in the Infra folder as above. After a successful run, you can verify the new RDS instance in your AWS account.
Azure Agent Continuous Deployment Pipeline
Azure Agent CD is the pipeline that provisions the Azure agent as an ECS service. The agent is used as a proxy that allows Azure DevOps to deploy database changes through the pipeline. After a successful run, you can verify that a new ECS service has been provisioned in the ECS cluster as below:-
The ECS service runs as one of the agents in the agent pool we created previously. You can check that the agent is running under Organization settings > Pipelines > Agent pools > prod-demo-az-agent.
Product-DB and User-DB Continuous Deployment Pipeline
We have two service-specific database deployment pipelines, which deploy database changes whenever there is a new schema update. Run Product-DB CD and User-DB CD to create each service's database schema. When you inspect the pipeline details, you will see that the migrations have been applied to the database.
Because these pipelines run on the self-hosted agent we deployed in the ECS cluster, your database does not need public access for the migration to run.
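In pipeline terms, pointing a job at that agent is just a pool declaration:

```yaml
pool:
  name: prod-demo-az-agent  # the self-hosted pool backed by the ECS service
```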
Product-App and User-App Continuous Integration Pipeline
In this step, we trigger the Product-App and User-App CI pipelines to build Docker images and publish them to ECR.
After a successful run, the pipeline pushes the Docker image to the ECR repository in the AWS account behind the service connection you configured earlier. The push step looks roughly like the sketch below.
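The push itself can be done with the ECRPushImage task from the AWS Toolkit extension installed earlier. The sketch below is indicative only: the image name is hypothetical and the exact inputs may differ across extension versions.

```yaml
steps:
  - task: ECRPushImage@1
    inputs:
      awsCredentials: $(ECR_CONNECTION_NAME)  # the AWS service connection
      regionName: $(ECR_REGION_NAME)
      imageSource: imagename
      sourceImageName: demo/product-app       # hypothetical local image name
      sourceImageTag: $(Build.BuildNumber)
      repositoryName: demo/product-app
      pushTag: $(Build.BuildNumber)
```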
You will need to grant the pipeline permission to the variable group on the first run.
Log in to the AWS account you used to generate the credentials and open the ECR page. As the project is configured for the ap-southeast-1 region, you can use this link to open the page.
The CI pipeline is also configured to generate an automated pull request that submits deployment changes to the Infra repository. The pull request workflow is one of the GitOps practices: it ensures that all changes are recorded in Git history and go through the merge process, even for infrastructure changes.
Go to the Infra repository's pull request page; you should see a new pull request containing the new Docker image URI. Approve the pull request and complete it so that the service deployment happens.
The pull request is generated by a PowerShell script embedded in the CI pipeline, which calls the Azure DevOps API. It extracts the Docker image URI and submits a pull request to the service's CD configuration inside the Infra repository. Once you approve and complete the pull request, you should notice the Continuous Deployment pipeline start running.
Product Service and User Service Continuous Deployment Pipeline
Continuing from the pull request completion for the User and Product services, the Continuous Deployment pipelines run as above. After they finish, log in to your AWS account and you should notice two new ECS services.
Testing your service
Now that your infrastructure is ready, you can test it with the following calls:-
1. Add new product API
2. List product API
3. Add user API
4. List user API
Replace YOUR_ALB_CNAME with the actual CNAME from your ALB.
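The exact routes are defined in the sample project; the calls below are illustrative only, and the /products and /users paths are assumptions, so check the sample repository for the real endpoints:-
# Paths below are assumed; check the sample project for the actual routes
curl -X POST http://YOUR_ALB_CNAME/products -H "Content-Type: application/json" -d '{"name":"demo-product"}'
curl http://YOUR_ALB_CNAME/products
curl -X POST http://YOUR_ALB_CNAME/users -H "Content-Type: application/json" -d '{"name":"demo-user"}'
curl http://YOUR_ALB_CNAME/users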
Clean Up
As usual, you can clean up when you are happy with the result. Every pipeline comes with a hidden destroy step: select the pipeline you want to run and add the following variable.
Name = DESTROY, Value = True
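If you prefer the CLI to the UI, the Azure DevOps CLI can queue a destroy run with the variable set, assuming the azure-devops CLI extension is configured and the variable is marked settable at queue time (replace the pipeline name with the one you are destroying):-
az pipelines run --name "Base CD" --variables DESTROY=True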
As there are dependencies between the components, run the destroy steps in the following order to clean up properly:-
- Product Service CD
- User Service CD
- AZ Agent CD
- Database CD
- Base CD
- microservice-theworld
Be careful when destroying the microservice-theworld pipeline: it will remove all the imported repositories as well.
Takeaway
I know, I know, we did not go through every detailed step. As mentioned at the beginning, this post shares the high-level workflow and setup for a production-grade DevOps code base; you will need to study the code if you want to learn how it works in detail. A scalable and maintainable pipeline project requires a lot of design and testing, especially when you move to a new DevOps platform. Although implementations differ, all the major DevOps platforms support pipelines as code, so the same concepts and ideas apply to any of them. Drop me a note if you would like me to explain specific concepts or steps in more detail. I hope you enjoy this project.