3 Tier Web Architecture in AWS [Part 3]

SATYAM SAREEN
5 min read · Jan 22, 2023


This is a continuation of the second part of the “3 Tier Web Architecture in AWS” series.

So far we have seen how to create VPCs, subnets, endpoints, S3 buckets, RDS, Redis, the load balancer, etc. In this part, we'll put the remaining pieces in place and complete the puzzle!

6.) Instance Module

Create a folder instance_module in the root of your working directory and add the below files one by one.

input_variables.tf declares the necessary input variables that will be used by the different resources in the instance_module.
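As a rough illustration, the file might look something like the sketch below; the exact variable names and types are assumptions based on how they are referenced later (wholesale_bastion_host_names comes from the bastion host description, the rest are placeholders).

# input_variables.tf (instance_module) -- illustrative sketch, not the exact file
variable "wholesale_bastion_host_names" {
  description = "Names of the bastion hosts, one per public subnet"
  type        = list(string)
}

variable "public_subnet_ids" {
  description = "IDs of the public subnets the bastion hosts are launched into"
  type        = list(string)
}

variable "instance_type" {
  description = "EC2 instance type for the bastion hosts"
  type        = string
  default     = "t3.micro"
}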

This is a new one. data_source.tf creates an aws_ami data source, which fetches the AMI ID of the Amazon Linux 2 image from AWS along with a bunch of other filters on its architecture, virtualization type, etc.
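For reference, an aws_ami data source for Amazon Linux 2 typically looks something like this (the filter values shown are the usual ones; verify them against your own file):

# data_source.tf -- fetch the latest Amazon Linux 2 AMI
data "aws_ami" "amazon_linux_2" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}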

bastion_host.tf, as the name suggests, creates EC2 instances in the 2 public subnets of the wholesale_vpc. A bastion host is a server whose purpose is to provide access to a private network from an external network, such as the Internet. Notice the count meta-argument: it accepts a whole number, in our case the length of the wholesale_bastion_host_names variable, and creates that many instances of the aws_instance resource, as shown in the sketch below.
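A minimal sketch of the bastion host resource using count; the references to data.aws_ami.amazon_linux_2, var.public_subnet_ids, and var.instance_profile_name are assumptions tying back to the snippets above and the IAM module described later.

# bastion_host.tf -- one bastion host per entry in wholesale_bastion_host_names
resource "aws_instance" "bastion_host" {
  count = length(var.wholesale_bastion_host_names)

  ami                         = data.aws_ami.amazon_linux_2.id
  instance_type               = var.instance_type
  subnet_id                   = var.public_subnet_ids[count.index]
  associate_public_ip_address = true
  iam_instance_profile        = var.instance_profile_name # exported by the iam_module

  tags = {
    Name = var.wholesale_bastion_host_names[count.index]
  }
}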

7.) Autoscaling Module

Create a folder autoscaling_module in the root of your working directory and add the below files one by one.

input_variables.tf declares the necessary input variables that will be used by the different resources in the autoscaling_module.

data_source.tf is just like the one we created in the instance_module.

userdata.sh is a shell script that configures a new EC2 instance during startup when it is launched by the autoscaling group. It installs the Apache web server and the MySQL client, and writes a sample index.html file to display when requests from the ALB reach the backend application servers.

launch_template.tf defines the configuration of the EC2 instances that will be created by the autoscaling group, including the base image, credit specification, SSH key name, tags, instance type, etc. Replace the SSH key pair name with your own key pair name.
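A sketch of what launch_template.tf might contain, including how userdata.sh from the previous step could be wired in; the key name and variable names are placeholders, not the exact ones from the original files.

# launch_template.tf -- template for the application-tier instances
resource "aws_launch_template" "app" {
  name_prefix   = "three-tier-app-"
  image_id      = data.aws_ami.amazon_linux_2.id
  instance_type = var.instance_type
  key_name      = var.ssh_key_name # replace with your own key pair name

  credit_specification {
    cpu_credits = "standard"
  }

  # userdata.sh installs Apache, the MySQL client, and a sample index.html
  user_data = filebase64("${path.module}/userdata.sh")

  tag_specifications {
    resource_type = "instance"
    tags = {
      Name = "three-tier-app-server"
    }
  }
}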

autoscaling_group.tf creates an autoscaling group that populates the target group we defined earlier in the loadbalancer_module with EC2 instances. The health check type is ‘ELB’, which means the ASG uses the load balancer’s health checks: an instance marked unhealthy by its target group is terminated and replaced with a new one. EC2 instances created from an older launch template are terminated first during a scale-down event.
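A hedged sketch of the autoscaling group; the target group ARN and private subnet IDs would come from the loadbalancer_module and VPC outputs, and the sizes shown are placeholders.

# autoscaling_group.tf -- registers launch-template instances into the ALB target group
resource "aws_autoscaling_group" "app" {
  name                = "three-tier-app-asg"
  min_size            = 2
  max_size            = 4
  desired_capacity    = 2
  vpc_zone_identifier = var.private_subnet_ids
  target_group_arns   = [var.app_target_group_arn] # exported by the loadbalancer_module

  # Use the ALB health checks, not just EC2 status checks
  health_check_type         = "ELB"
  health_check_grace_period = 300

  # Terminate instances created from an older launch template first on scale-in
  termination_policies = ["OldestLaunchTemplate"]

  launch_template {
    id      = aws_launch_template.app.id
    version = "$Latest"
  }
}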

8.) IAM Module

Create a folder iam_module in the root of your working directory and add the below files one by one.

iam_role.tf creates an IAM role and attaches the “AmazonSSMManagedInstanceCore” policy to it. We then create an instance profile that contains this IAM role. This instance profile is attached to the bastion host we defined earlier in the instance_module, and we should then be able to connect to the bastion host using Session Manager.
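A sketch of iam_role.tf, assuming the role trusts the EC2 service; the role and profile names are placeholders.

# iam_role.tf -- role + instance profile so the bastion host can use Session Manager
resource "aws_iam_role" "bastion_ssm" {
  name = "bastion-ssm-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy_attachment" "ssm_core" {
  role       = aws_iam_role.bastion_ssm.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
}

resource "aws_iam_instance_profile" "bastion" {
  name = "bastion-instance-profile"
  role = aws_iam_role.bastion_ssm.name
}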

output.tf creates an output for the instance profile name; this value can be consumed by the instance_module while attaching the instance profile to the bastion host.
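The corresponding output is tiny; something along these lines, assuming the resource name from the sketch above:

# output.tf (iam_module) -- consumed by the instance_module
output "instance_profile_name" {
  value = aws_iam_instance_profile.bastion.name
}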

9.) Route53 Module

Create a folder route53_module in the root of your working directory and add the below files one by one.

input_variables.tf declares the necessary input variables that will be used by the different resources in the route53_module.

hosted_zone.tf creates an empty public hosted zone for the domain name “three-tier-demo.com”. To actually use a domain, you have to first purchase and register it with a DNS provider like Route53 or GoDaddy, which is out of the scope of this blog. Hosted zones are basically containers for records, which describe how you want to route traffic from your domains and subdomains to different AWS and non-AWS resources.
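The hosted zone itself is a one-resource file, roughly:

# hosted_zone.tf -- public hosted zone for the demo domain
resource "aws_route53_zone" "three_tier_demo" {
  name = "three-tier-demo.com"
}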

record_set.tf creates a CNAME record that points to the ALB DNS name with a simple routing policy and a TTL of 30 seconds. The resulting URL will look like “statistics.three-tier-demo.com”. Unlike alias records, CNAME records can redirect DNS queries to any DNS record. For example, you can create a CNAME record that redirects queries from acme.example.com to alpha.example.com or to beta.example.org. You don’t need to use Route 53 as the DNS service for the domain that you’re redirecting queries to.
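A sketch of the CNAME record; var.alb_dns_name here stands in for whatever output the loadbalancer_module exposes for the ALB DNS name.

# record_set.tf -- CNAME pointing the subdomain at the ALB DNS name
resource "aws_route53_record" "statistics" {
  zone_id = aws_route53_zone.three_tier_demo.zone_id
  name    = "statistics.three-tier-demo.com"
  type    = "CNAME"
  ttl     = 30
  records = [var.alb_dns_name]
}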

This completes all the modules that we are going to create in our 3-tier web architecture.

Now let's take a look at how we put them all together.

At the root of the working directory create the below files as follows:

input_variables.tf declares variables that will then be supplied to the modules we created above.

terraform.tfvars will be used to set values for the variables declared above. Remember, you can set the value of the same variable in different places, and Terraform loads them in the following order, with later sources taking precedence over earlier ones:

  • Environment variables
  • The terraform.tfvars file, if present.
  • The terraform.tfvars.json file, if present.
  • Any *.auto.tfvars or *.auto.tfvars.json files, processed in lexical order of their filenames.
  • Any -var and -var-file options on the command line

The below file will be picked up automatically by Terraform when running a terraform plan/apply/destroy.
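As a rough illustration (the actual variable names and values depend on what you declared in input_variables.tf), terraform.tfvars looks something like:

# terraform.tfvars -- sample values, adjust to your own setup
aws_region                   = "us-east-1"
vpc_cidr                     = "10.0.0.0/16"
wholesale_bastion_host_names = ["bastion-1", "bastion-2"]
instance_type                = "t3.micro"
domain_name                  = "three-tier-demo.com"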

main.tf puts together all the modules we defined before. We have given a source meta-argument to each module, which is a path to a local directory containing the module’s configuration files. To share data between modules, we export data from the source module via outputs and inject it into the destination modules as input variables.
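A trimmed-down sketch of how two of the modules might be wired together in main.tf; the exact inputs and output names will differ in your files.

# main.tf -- wiring the child modules together (excerpt)
module "iam_module" {
  source = "./iam_module"
}

module "instance_module" {
  source = "./instance_module"

  wholesale_bastion_host_names = var.wholesale_bastion_host_names
  instance_type                = var.instance_type

  # output of one module injected as an input variable of another
  instance_profile_name = module.iam_module.instance_profile_name
}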

outputs.tf creates outputs for the root module from its child modules’ outputs. To print values in the CLI while running terraform apply, or to store them in the state file where other configurations can read them via the terraform_remote_state data source, we need to pass these outputs from the child modules up to the root module, as we have done below.
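Passing a child module’s output up through the root module looks like this; the child output name alb_dns_name is assumed, while three_tier_web_arch_alb_dns_name is the root output we hit later for testing.

# outputs.tf (root module) -- re-export a child module output
output "three_tier_web_arch_alb_dns_name" {
  value = module.loadbalancer_module.alb_dns_name
}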

We are finally done defining all the terraform configuration files. If you have made it this far, give yourself a pat on the back 😀

Create your resources. First, run a plan and observe which resources will be created. If satisfied, run an apply with the --auto-approve flag, which skips the interactive approval of the plan before applying:

terraform plan
terraform apply --auto-approve

Voila! Now you should have your own running 3-tier web architecture on AWS. To test it out, hit the ALB DNS URL printed in the CLI under the output three_tier_web_arch_alb_dns_name. Note that we won’t be able to use the Route53 record created as part of our architecture; it was just for demo purposes. You’ll probably get a “DNS_PROBE_FINISHED_NXDOMAIN” error, because we haven’t actually purchased and registered that domain with a DNS provider like Route53 or GoDaddy.

Don’t forget to destroy your resources once you are done testing to avoid any extra costs. The destroy command will prompt you for confirmation; enter “yes” if you are satisfied with the destroy plan.

terraform destroy

This marks the end of our “3 Tier Web Architecture in AWS” series.
If you have any questions or suggestions, please add them in the comments. If you learned anything new from this blog series, please consider giving it a clap 👏; it keeps me motivated to write more AWS content 😀
