3 Tier Web Architecture in AWS [Part 2]

SATYAM SAREEN
4 min read · Jan 20, 2023


This is a continuation of the first part of the “3 Tier Web Architecture in AWS” series.

So far we have seen how to set up the networking of the above architecture; now let’s dive deeper into the other aspects, starting with storage.

2.) S3 Module

S3 is a popular object storage service offered by AWS, which stores your data as objects. It has a flat hierarchy, but S3 will try to trick you into believing it has a file-system-like hierarchy by introducing the concept of folders; these folders are nothing but 0 KB objects whose names end with a forward slash (“/”).

Create a folder s3_module in the root of your working directory and add the below files one by one.

input_variables.tf declares the necessary input variables that will be used by the different resources in the s3_module.
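As a rough sketch, input_variables.tf could declare variables along these lines (the variable names below are illustrative assumptions, not necessarily the exact ones used in the module):

```hcl
# input_variables.tf (sketch): variable names are illustrative assumptions
variable "access_log_bucket_name" {
  description = "Name of the S3 bucket that stores the ALB access logs"
  type        = string
}

variable "elb_account_id" {
  description = "Regional Elastic Load Balancing account ID used in the bucket policy"
  type        = string
}

variable "s3_vpc_endpoint_id" {
  description = "ID of the S3 VPC endpoint that is allowed to read from the bucket"
  type        = string
}
```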

bucket.tf creates an S3 bucket for storing ELB access logs. SSE-S3 encryption is enabled on the bucket. We have given PutObject access to the regional Elastic Load Balancing account ID and read access via the S3 VPC endpoint.
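A minimal sketch of bucket.tf, assuming the variable names from the previous snippet:

```hcl
# bucket.tf (sketch): assumes the variable names from the previous snippet
resource "aws_s3_bucket" "elb_access_logs" {
  bucket = var.access_log_bucket_name
}

# SSE-S3 (AES-256) encryption for all objects in the bucket
resource "aws_s3_bucket_server_side_encryption_configuration" "elb_access_logs" {
  bucket = aws_s3_bucket.elb_access_logs.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

# Allow the regional ELB account to write logs and the S3 VPC endpoint to read them
resource "aws_s3_bucket_policy" "elb_access_logs" {
  bucket = aws_s3_bucket.elb_access_logs.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid       = "AllowELBLogDelivery"
        Effect    = "Allow"
        Principal = { AWS = "arn:aws:iam::${var.elb_account_id}:root" }
        Action    = "s3:PutObject"
        Resource  = "${aws_s3_bucket.elb_access_logs.arn}/*"
      },
      {
        Sid       = "AllowReadViaVpcEndpoint"
        Effect    = "Allow"
        Principal = "*"
        Action    = "s3:GetObject"
        Resource  = "${aws_s3_bucket.elb_access_logs.arn}/*"
        Condition = { StringEquals = { "aws:SourceVpce" = var.s3_vpc_endpoint_id } }
      }
    ]
  })
}
```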

output.tf creates the necessary outputs, which can then be consumed by other modules in our architecture.
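For example, the module could export the bucket name and ARN (the output names here are assumptions):

```hcl
# output.tf (sketch): output names are illustrative
output "access_log_bucket_id" {
  value = aws_s3_bucket.elb_access_logs.id
}

output "access_log_bucket_arn" {
  value = aws_s3_bucket.elb_access_logs.arn
}
```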

3.) Database Module

Create a folder database_module in the root of your working directory and add the below files one by one.

input_variables.tf declares the necessary input variables for our database_module.

subnet_group.tf defines in which subnets our RDS Aurora nodes will be launched. Here we are choosing private subnets.
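A minimal sketch, assuming the private subnet IDs are passed in through a var.private_subnet_ids input variable:

```hcl
# subnet_group.tf (sketch): var.private_subnet_ids is an assumed input variable
resource "aws_db_subnet_group" "aurora" {
  name       = "aurora-private-subnet-group"
  subnet_ids = var.private_subnet_ids
}
```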

database.tf creates a 2-node RDS Aurora PostgreSQL cluster (1 writer and 1 reader). Notice how we have added a depends_on in the postgres_db_reader_instances resource to make sure that it gets created after the postgres_db_writer_instances resource. This causes postgres_db_reader_instances to become the reader instance, because the first instance created in a cluster is always the writer.
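The cluster and its two instances could be wired up roughly as below; the identifiers, instance class, and credential variables are illustrative assumptions:

```hcl
# database.tf (sketch): names, instance class, and credential handling are illustrative
resource "aws_rds_cluster" "postgres" {
  cluster_identifier     = "wholesale-aurora-postgres"
  engine                 = "aurora-postgresql"
  database_name          = var.db_name
  master_username        = var.db_username
  master_password        = var.db_password
  db_subnet_group_name   = aws_db_subnet_group.aurora.name
  vpc_security_group_ids = var.db_security_group_ids
  skip_final_snapshot    = true
}

# Created first, so it becomes the writer
resource "aws_rds_cluster_instance" "postgres_db_writer_instances" {
  identifier         = "aurora-postgres-writer"
  cluster_identifier = aws_rds_cluster.postgres.id
  engine             = aws_rds_cluster.postgres.engine
  instance_class     = "db.t3.medium"
}

# depends_on forces this instance to be created after the writer,
# so it joins the cluster as the reader
resource "aws_rds_cluster_instance" "postgres_db_reader_instances" {
  identifier         = "aurora-postgres-reader"
  cluster_identifier = aws_rds_cluster.postgres.id
  engine             = aws_rds_cluster.postgres.engine
  instance_class     = "db.t3.medium"

  depends_on = [aws_rds_cluster_instance.postgres_db_writer_instances]
}
```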

4.) Cache Module

Create a folder cache_module in the root of your working directory and add the below files one by one.

input_variables.tf declares the necessary input variables for our cache_module.

subnet_group.tf defines in which subnets our Redis cache nodes will be launched. Here we are choosing private subnets.
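Again assuming a var.private_subnet_ids input, this file is a single resource:

```hcl
# subnet_group.tf (sketch): var.private_subnet_ids is an assumed input variable
resource "aws_elasticache_subnet_group" "redis" {
  name       = "redis-private-subnet-group"
  subnet_ids = var.private_subnet_ids
}
```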

redis.tf creates a Redis cache cluster. In the below configuration, we have enabled cluster mode, which means we can have up to 500 node groups/shards, distributing our data over a large number of endpoints and giving better performance during peak workloads. But for the sake of simplicity, we have kept a single node group with 1 replica, which equals a total of 2 ElastiCache nodes.

We have also enabled Multi-AZ, which means that instead of recreating and re-provisioning a new primary node during a failure event, ElastiCache promotes the read replica with the least replication lag to primary. This takes considerably less time than the former approach, and within a few seconds you should be able to write to your cluster again, making our architecture highly available.
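A sketch of redis.tf with cluster mode, one shard with one replica, and Multi-AZ failover enabled; the node type, parameter group, and security group variable are assumptions:

```hcl
# redis.tf (sketch): node type, parameter group, and variable names are illustrative
resource "aws_elasticache_replication_group" "redis" {
  replication_group_id = "wholesale-redis"
  description          = "Redis cluster for the wholesale application"
  engine               = "redis"
  node_type            = "cache.t3.micro"
  port                 = 6379

  # Cluster mode: 1 shard (node group) with 1 replica = 2 nodes in total
  num_node_groups         = 1
  replicas_per_node_group = 1
  parameter_group_name    = "default.redis7.cluster.on"

  # Promote the replica with the least replication lag if the primary fails
  automatic_failover_enabled = true
  multi_az_enabled           = true

  subnet_group_name  = aws_elasticache_subnet_group.redis.name
  security_group_ids = var.cache_security_group_ids
}
```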

5.) LoadBalancer Module

Create a folder loadbalancer_module in the root of your working directory and add the below files one by one.

input_variables.tf declares the necessary input variables for our loadbalancer_module.

loadbalancer.tf creates an Application Load Balancer and sends its access logs to the S3 bucket we created above, under the prefix configured by the wholesale_lb_access_log_bucket_prefix variable.
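A minimal loadbalancer.tf sketch; the subnet, security group, and bucket name variables are assumptions, while wholesale_lb_access_log_bucket_prefix comes from the module’s inputs:

```hcl
# loadbalancer.tf (sketch): subnet, security group, and bucket variables are illustrative
resource "aws_lb" "wholesale" {
  name               = "wholesale-alb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = var.lb_security_group_ids
  subnets            = var.public_subnet_ids

  # Ship access logs to the S3 bucket created by the s3_module
  access_logs {
    bucket  = var.wholesale_lb_access_log_bucket
    prefix  = var.wholesale_lb_access_log_bucket_prefix
    enabled = true
  }
}
```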

listener.tf creates a listener for our ALB which listens for HTTP traffic on port 80. We have also defined a default_action of type fixed-response, which is returned if none of the listener rules match.
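For example, the HTTP listener with a fixed-response default action could look like this (the response body and status code are illustrative):

```hcl
# listener.tf (sketch): response body and status code are illustrative
resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.wholesale.arn
  port              = 80
  protocol          = "HTTP"

  # Returned when no listener rule matches the request
  default_action {
    type = "fixed-response"

    fixed_response {
      content_type = "text/plain"
      message_body = "Not found"
      status_code  = "404"
    }
  }
}
```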

listener_rule.tf creates multiple listener rules for our ALB. We have created 2 forwarding rules that match the “/sales/” and “/prices/” path patterns and forward traffic to their respective target groups. We have 2 more rules that redirect requests containing a typo or case mismatch in the words “sales” and “prices” to the canonical paths handled by the previous two rules.
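A sketch of one forward rule and one redirect rule; the “/prices/” pair is analogous, and the priorities and typo patterns shown are assumptions:

```hcl
# listener_rule.tf (sketch): the "/prices/" rules are analogous; priorities and
# the typo patterns below are illustrative assumptions
resource "aws_lb_listener_rule" "sales_forward" {
  listener_arn = aws_lb_listener.http.arn
  priority     = 10

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.sales.arn
  }

  condition {
    path_pattern {
      values = ["/sales/*"]
    }
  }
}

# Redirect requests with a case mismatch or typo to the canonical /sales/ path
resource "aws_lb_listener_rule" "sales_redirect" {
  listener_arn = aws_lb_listener.http.arn
  priority     = 30

  action {
    type = "redirect"

    redirect {
      path        = "/sales/"
      status_code = "HTTP_301"
    }
  }

  condition {
    path_pattern {
      values = ["/Sales/*", "/sale/*"]
    }
  }
}
```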

target_group.tf creates 2 target groups for the above 2 forwarding listener rules respectively. We have defined health checks at the “/health/” path with an interval of 15 seconds. These target groups forward traffic to the backend instances on port 80.
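One of the two target groups could be defined as below (the “prices” target group is analogous; the name and var.vpc_id are assumptions):

```hcl
# target_group.tf (sketch): the "prices" target group is analogous
resource "aws_lb_target_group" "sales" {
  name     = "wholesale-sales-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = var.vpc_id

  # Health checks against /health/ every 15 seconds
  health_check {
    path     = "/health/"
    interval = 15
    protocol = "HTTP"
  }
}
```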

output.tf creates the necessary outputs, which can then be consumed by other modules in our architecture.

This completes our s3, database, cache, and loadbalancer modules :)

We have more exciting modules to come in the third part of the “3 Tier Web Architecture in AWS” series.

For any questions/suggestions please add your comments and if you liked the content, please give this blog a clap 👏.
