Configuration and Deployment of a SaaS Application on AWS EC2 Instances Across Regions with Terraform, Ansible and AWS CodeCommit

Aggrey O
7 min read · Jan 10, 2024


I recently worked on a project, based on a real-world scenario, in which I designed and deployed a reusable, multi-tenant SaaS infrastructure on AWS using Terraform modules, automated the configuration management with Ansible, and securely stored the application and configuration files in AWS CodeCommit.

The project solution architecture

The project's main goal was to act as a DevOps engineer and create a seamless, efficient and cost-effective method for deploying and managing SaaS applications on AWS, while implementing best practices to achieve the desired outcomes. The desired outcomes were to:

  • Create a reusable, scalable and secure multi-tenant SaaS infrastructure on AWS and configure the application to meet the unique regulatory requirements of each state in the union.
  • Leverage the power of Terraform modules
  • Leverage AWS CodeCommit to securely store project files
  • Leverage Ansible as tool of choice for Configuration Management Automation

I divided the project execution into four parts:

  • Deploy reusable SaaS Multi-tenant AWS Infrastructure using Terraform modules
  • Automate the SaaS application configuration management using Ansible
  • Securely store the Terraform and Ansible Configuration files on AWS CodeCommit
  • Modernization of the application architecture to make it scalable and highly available

Infrastructure as Code (IaC): managing and provisioning infrastructure automatically through code instead of through manual processes, making it easier to modify and distribute configurations. Provisioning infrastructure manually, whether in the cloud or on-premises, is expensive, hard to scale up or down, prone to human error, inconsistent and slow to deploy.

Why use IaC tools such as Terraform and Ansible?

  • To speed up the deployment process through automation
  • To enable safe and consistent configuration and deployment, which limits human error
  • To reuse deployment code multiple times so that you can deploy additional resources when needed
  • IaC version history helps with rollbacks when you need to revert infrastructure to a previous stable state

Project Execution Part 1 : Deploying the SaaS AWS Infrastructure using Terraform Modules

Terraform was my IaC tool of choice for provisioning cloud infrastructure. Why Terraform?

  • Terraform is cloud agnostic
  • Declarative language
  • Supports creating immutable infrastructure
  • It is agentless and masterless
  • Human-readable HCL configuration syntax, with optional JSON support
  • Open source

Using Terraform modules, I created configuration files to provision Amazon EC2 instances, an Amazon DynamoDB table and an Amazon S3 bucket. The Terraform configuration files were then stored securely in AWS CodeCommit.
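As a sketch of what the EC2 portion of such a module might look like (the resource names, variable names and tag format here are illustrative assumptions, not the exact project code):

```hcl
# modules/main.tf -- illustrative sketch: one EC2 instance per tenant (state)
resource "aws_instance" "app_server" {
  ami                    = var.ami_id        # e.g. an Amazon Linux AMI for the region
  instance_type          = "t2.micro"
  iam_instance_profile   = aws_iam_instance_profile.app_profile.name
  vpc_security_group_ids = [aws_security_group.app_sg.id]

  tags = {
    # One instance per state, e.g. "saas-app-california"
    Name = "saas-app-${var.state_name}"
  }
}
```

Keeping the instance definition inside a module means the same block can be stamped out once per state without duplicating code.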

Services and Technologies used to execute first part of the project
The SaaS application directory structure
Terraform modules project directory structure
modules/main.tf, code to provision the EC2 instance that will host the SaaS application
modules/main.tf, code to create a DynamoDB table
modules/main.tf, IAM role granting full S3 and DynamoDB table access. Fine-grained access is recommended for production applications
modules/main.tf, code for provisioning the S3 bucket for uploading user images
modules/main.tf, code for generating a random S3 bucket name
modules/main.tf, code to attach the S3 and DynamoDB policies to the IAM role s3_dynamodb_full_access_role, and code to create the IAM instance profile
modules/main.tf, code to create an AWS security group allowing public traffic via different ports. For production applications, use known private IPs
modules/outputs.tf, code that prints information such as the EC2 instance public DNS, DynamoDB table name and S3 bucket name
root directory outputs.tf, prints information for all infrastructure created by Terraform for each state, grouped by state name
main.tf, code that creates resources for each state defined in the variables.tf file
variables.tf, contains the list of states that Terraform loops over, provisioning resources for each state
variables.tf, definition of the variable state_name
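The per-state loop described above can be sketched roughly as follows (variable names, module outputs and the module path are assumptions for illustration):

```hcl
# variables.tf -- list of states to provision (illustrative defaults)
variable "states" {
  type    = list(string)
  default = ["california", "nevada", "florida"]
}

# main.tf (root) -- instantiate the module once per state
module "saas_infra" {
  source     = "./modules"
  for_each   = toset(var.states)
  state_name = each.value
}

# outputs.tf (root) -- group the module outputs by state name
output "state_infrastructure" {
  value = {
    for state, m in module.saas_infra :
    state => {
      ec2_public_dns = m.ec2_public_dns
      dynamodb_table = m.dynamodb_table_name
      s3_bucket      = m.s3_bucket_name
    }
  }
}
```

Adding a new state then becomes a one-line change to the `states` list rather than a copy of the whole configuration.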

Terraform State file and Terraform State Lock file

Every time you run Terraform via terraform plan or terraform apply, it records information about the infrastructure it created in a Terraform state file. The next time you run Terraform, it fetches the latest status of the resources you created and determines what changes need to be applied.

On a personal project, storing the terraform.tfstate file locally on your PC is not a big problem. But for a production application, terraform.tfstate needs to be stored in a remote shared data store, for the following reasons:

  • Each team member needs access to the same Terraform state files in order to update the infrastructure
  • A shared backend can lock the state files when two team members run Terraform at the same time; this avoids race conditions in which multiple Terraform processes make concurrent updates to the state files, leading to conflicts, data loss and state file corruption
  • Isolated state files reduce the chances of accidentally breaking production when making changes to a testing or staging environment

The S3 bucket that stores the Terraform state file is created manually via the AWS CLI and is not managed by Terraform. This prevents accidental deletion of the Terraform state file when destroying the AWS infrastructure via the terraform destroy command.
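The manual bucket creation might look something like this (the bucket name and region are placeholders; S3 bucket names must be globally unique):

```shell
# Create the state bucket outside of Terraform so `terraform destroy` cannot remove it.
aws s3api create-bucket \
  --bucket my-terraform-state-bucket-example \
  --region us-east-1

# Enable versioning so previous revisions of the state file can be recovered.
aws s3api put-bucket-versioning \
  --bucket my-terraform-state-bucket-example \
  --versioning-configuration Status=Enabled
```

Note that for regions other than us-east-1, create-bucket also needs a --create-bucket-configuration LocationConstraint argument.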

Using AWS CLI to create S3 bucket to store Terraform State file
S3 bucket created manually via AWS CLI
backend.tf, code to store the Terraform state file in a remote backend, i.e. an S3 bucket

The Terraform backend will load the state file from the S3 bucket every time we run terraform plan or terraform apply.
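A minimal sketch of such a backend block, assuming placeholder bucket, key and table names (backend blocks cannot use variables, so these are literals):

```hcl
# backend.tf -- remote state in S3 with DynamoDB locking (names are placeholders)
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket-example"
    key            = "saas-app/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-lock"   # table used for the state lock
    encrypt        = true
  }
}
```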

Terraform's S3 backend supports locking via a DynamoDB table. DynamoDB supports strongly consistent reads and conditional writes, which are good building blocks for a distributed lock system.

The DynamoDB table that stores the Terraform state lock is likewise not managed by Terraform; it is created manually to prevent us from accidentally destroying it via terraform destroy. The lock protects the state file while someone is running terraform plan or terraform apply.
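The manual table creation might look like this (table name is a placeholder; Terraform's S3 backend requires the partition key to be named exactly LockID):

```shell
# Create the lock table outside of Terraform, again so `terraform destroy` cannot remove it.
aws dynamodb create-table \
  --table-name terraform-state-lock \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST
```

Pay-per-request billing keeps the cost negligible, since the table only holds short-lived lock entries.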

Using AWS CLI to manually create DynamoDB table to store the Terraform State Lock file
Terraform State Lock file stored in DynamoDB table
terminal output of the "terraform apply" command, showing the resources provisioned by Terraform
AWS console showing the provisioned EC2 instances for each state

Project Execution Part 2 : Application configuration management automation with Ansible

Configuration management helps ensure that the systems hosting the application perform as expected as changes are made over time.

Benefits of configuration management

  • Automates administrative tasks, resulting in quicker server provisioning
  • Helps track the changes we make over time, avoiding expensive remediation
  • Ensures test and production environments match so the application runs as expected

Ansible is free, powerful, simple to use, flexible and agentless.
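A rough sketch of what such a playbook could look like (the host group, file paths, template name and handler are illustrative assumptions, not the project's actual playbook):

```yaml
# deploy.yml -- illustrative playbook sketch for the SaaS application hosts
- name: Configure SaaS application hosts
  hosts: saas_app_servers
  become: true

  tasks:
    - name: Install Nginx
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Deploy Nginx site configuration from a Jinja2 template
      ansible.builtin.template:
        src: nginx.conf.j2
        dest: /etc/nginx/conf.d/humangov.conf
      notify: Restart Nginx

  handlers:
    - name: Restart Nginx
      ansible.builtin.service:
        name: nginx
        state: restarted
```

A playbook like this would typically be run with something like `ansible-playbook -i inventory deploy.yml`; the handler only fires when the template task actually changes the file.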

Services and Technologies used to execute the second part of the project
The project Ansible directory structure
Ansible default arguments
Ansible handlers for restarting Nginx and for restarting the humangov SaaS application
Ansible tasks
Ansible template
Ansible service template, whose content will be replaced dynamically by Ansible
ansible.cfg
Ansible Playbook
Running Ansible playbook
Ansible playbook tasks automation
The three provisioned SaaS application instances (California, Nevada and Florida) are all accessible from the browser

Project Execution Part 3 : Store the application files and the Terraform and Ansible configuration files in AWS CodeCommit

Committed changes pushed to CodeCommit
Terraform and Ansible configuration files stored in AWS CodeCommit

Project Execution Part 4 : Modernizing the application architecture to make it more scalable and highly available

The next part of the project will be to make the application infrastructure scalable and highly available.

Issues with current application architecture :

  • Lack of availability whenever the EC2 instance hosting the SaaS application goes down
  • EC2 management overhead: DevOps engineers have to patch and manage the EC2 instances themselves
  • Compatibility issues with developers' test environments because their library versions differ from what is in production

Because of these issues, the next step is to containerize the application and run it on a fully managed container orchestration service such as Amazon Elastic Container Service (ECS).

At a later date, I will be posting part 4 of the implementation, which will cover using ECS to make the SaaS application scalable and highly available. So please, come back soon.


Aggrey O

Software Engineer with focus on building, testing and deploying distributed cloud native applications | https://www.linkedin.com/in/aggrey-o-46b8004/