In this section we will go through the installation of a 3-tier WordPress application in AWS and the corresponding security groups for the instances running on each tier.
The final solution will look like this:
Let’s go through each of the tiers:
- VPC and common configuration
- Tier 3: Restricted access — DB server
- Tier 2: Restricted access — Multiple WordPress servers
- Tier 1: Public access — Load balancer and Bastion host
First we provision the VPC and define some global resources that will be used by the subnets.
aws_vpc.app_vpc: This resource is our main VPC; we just configure the CIDR range and enable DNS.
aws_internet_gateway.app_igw: Defines an internet gateway to be used by the public subnet and the NAT for the private subnets.
aws_vpc_dhcp_options and aws_vpc_dhcp_options_association: Define the DNS server for our VPC. We will use the Amazon-provided DNS for simplicity, but you can add any other DNS IP here. Finally, we associate the DNS configuration with our VPC.
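The snippets aren't shown here, so below is a minimal sketch of these VPC resources in modern HCL. The CIDR range is a placeholder:

```hcl
# Main VPC with DNS support enabled (CIDR is a placeholder).
resource "aws_vpc" "app_vpc" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true
}

# Internet gateway used by the public subnet and by the NAT gateway.
resource "aws_internet_gateway" "app_igw" {
  vpc_id = aws_vpc.app_vpc.id
}

# Use the Amazon-provided DNS; swap in any other DNS IP if you prefer.
resource "aws_vpc_dhcp_options" "dns_resolver" {
  domain_name_servers = ["AmazonProvidedDNS"]
}

# Associate the DHCP options with the VPC.
resource "aws_vpc_dhcp_options_association" "dns_resolver" {
  vpc_id          = aws_vpc.app_vpc.id
  dhcp_options_id = aws_vpc_dhcp_options.dns_resolver.id
}
```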
Tier 3: DB tier
For this demo we will use RDS as the persistence mechanism. This layer actually has two subnets, as per RDS requirements. The network constraints are:
- Allow ingress on port 3306 from WP subnet.
- Allow ingress on port 3306 from Public subnet (this is for bastion host debugging).
- Allow egress to everywhere inside the VPC. (No internet access)
- Deny everything else.
db_subnet_1 and db_subnet_2: The two subnets in different availability zones required by RDS. The availability_zone attribute is obtained from a data resource defined in main.tf; go check it out.
aws_security_group.db: The security group for the DB. The ingress part allows only port 3306 from the subnet CIDRs explained before. It also includes var.gcp_wp_subnet, which will be the GCP subnet that will contain the WordPress instances once migrated, as we will explain in the next tutorial.
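A sketch of the DB subnets and security group follows. The CIDR blocks and variable names other than var.gcp_wp_subnet are assumptions:

```hcl
# Two subnets in different availability zones, as RDS requires.
resource "aws_subnet" "db_subnet_1" {
  vpc_id            = aws_vpc.app_vpc.id
  cidr_block        = "10.0.3.0/24" # placeholder
  availability_zone = data.aws_availability_zones.available.names[0]
}

resource "aws_subnet" "db_subnet_2" {
  vpc_id            = aws_vpc.app_vpc.id
  cidr_block        = "10.0.4.0/24" # placeholder
  availability_zone = data.aws_availability_zones.available.names[1]
}

resource "aws_security_group" "db" {
  vpc_id = aws_vpc.app_vpc.id

  # MySQL only, from the WP subnet, the public subnets (bastion debugging)
  # and the future GCP WordPress subnet. var.wp_subnet_cidr and
  # var.public_subnet_cidr are assumed variable names.
  ingress {
    from_port   = 3306
    to_port     = 3306
    protocol    = "tcp"
    cidr_blocks = [var.wp_subnet_cidr, var.public_subnet_cidr, var.gcp_wp_subnet]
  }

  # Egress restricted to the VPC: no internet access from this tier.
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = [aws_vpc.app_vpc.cidr_block]
  }
}
```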
Creating the RDS instance should be pretty straightforward:
aws_db_subnet_group is used to indicate the subnets in which we will deploy the DB. Note how we also attached the security group we defined earlier to our RDS instance.
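A minimal sketch of the subnet group and the RDS instance; the engine, size, and variable names are assumptions:

```hcl
# Tell RDS which subnets it may use.
resource "aws_db_subnet_group" "default" {
  subnet_ids = [aws_subnet.db_subnet_1.id, aws_subnet.db_subnet_2.id]
}

resource "aws_db_instance" "wordpress" {
  engine                 = "mysql"
  instance_class         = "db.t2.micro"
  allocated_storage      = 20
  db_subnet_group_name   = aws_db_subnet_group.default.name
  vpc_security_group_ids = [aws_security_group.db.id] # the SG defined above
  username               = var.db_user
  password               = var.db_password # demo only: this ends up in the state file
  skip_final_snapshot    = true
}
```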
Tier 2: Multiple WordPress servers
For this demo we will create two WordPress instances that will use the database defined in the previous tier. We will create a single subnet for demo purposes; however, note that for a high-availability installation it is recommended to put each server in a different subnet.
The WordPress constraints are:
- Allow ssh (TCP port 22) connections from the bastion host in the public subnet.
- Allow HTTP (TCP port 80) connections from the load balancer in the public subnet.
- Instances should be able to reach the internet via NAT.
Let's create the WordPress subnet:
aws_route_table.wp-subnet-routes: We have to modify the route table of the subnet to add the NAT gateway. This gateway is defined in the public subnet, which is the one that has a route to the internet. We then have to use an aws_route_table_association to link the route table to the subnet.
aws_security_group.wp: Open ingress from the public subnet on ports 22 and 80, for the bastion host and the load balancer respectively. Egress to everywhere, even the internet via NAT.
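Sketched out, the subnet, routes, and security group look roughly like this (CIDR variable names are assumptions):

```hcl
resource "aws_subnet" "wp_subnet" {
  vpc_id     = aws_vpc.app_vpc.id
  cidr_block = var.wp_subnet_cidr # assumed variable name
}

# Route all outbound traffic through the NAT gateway in the public subnet.
resource "aws_route_table" "wp-subnet-routes" {
  vpc_id = aws_vpc.app_vpc.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat-gw.id
  }
}

resource "aws_route_table_association" "wp-subnet-routes" {
  subnet_id      = aws_subnet.wp_subnet.id
  route_table_id = aws_route_table.wp-subnet-routes.id
}

resource "aws_security_group" "wp" {
  vpc_id = aws_vpc.app_vpc.id

  # ssh from the bastion host in the public subnet.
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = [var.public_subnet_cidr]
  }

  # HTTP from the load balancer in the public subnet.
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = [var.public_subnet_cidr]
  }

  # Egress to everywhere; internet access goes through the NAT.
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```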
Now let's provision our WordPress instances:
aws_instance.wp: Our WP instance. The ami attribute comes from a data resource defined in main.tf; go check it out. key_name refers to the key we need to connect to the instance via ssh; that key is also created in main.tf. The tags attribute in aws_instance.wp will be used by Velostrata to select the instances that will be migrated. I'll explain this in the next tutorial. The count attribute indicates how many instances of the resource we are creating.
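A sketch of the instance resource; the instance type, key pair name, and tag value are assumptions:

```hcl
resource "aws_instance" "wp" {
  count                  = 2 # two WordPress servers
  ami                    = data.aws_ami.ubuntu.id # looked up in main.tf (assumed name)
  instance_type          = "t2.micro"
  subnet_id              = aws_subnet.wp_subnet.id
  vpc_security_group_ids = [aws_security_group.wp.id]
  key_name               = aws_key_pair.ssh_key.key_name # key created in main.tf (assumed name)

  # Velostrata will use these tags to pick the migration targets.
  tags = {
    Name = "wordpress-${count.index}"
  }
}
```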
null_resource.wp_provisioner: Now, this resource is the one that actually provisions the WordPress software onto the instances. There are several aspects worth considering here:
null_resource doesn't actually create anything; we use this type of resource for cases like this.
triggers: This basically indicates when this resource should be executed. In this case, any time any of the WordPress instances is recreated, we should execute this resource. The trick here is that we assume that if the private_ip of an instance changes, it means that it was recreated.
provisioner file: These are the scripts that will install the software. We have the WordPress server installer and also a Velostrata package that needs to be installed for the migration step explained in the next tutorial. Check out the scripts in the scripts folder. This provisioner copies the files into the WordPress EC2 VMs.
provisioner remote-exec: Indicates that Terraform should connect to the instance and execute the steps indicated in the inline array. Note how we can pass arguments to the script.
connection: Indicates how Terraform should connect to the instance. In this case, we specify that it should go through the bastion host, since the WordPress instances can't be accessed from the internet (remember that NAT allows egress only). Also in this block, we specify the credentials for the ssh connection.
SECURITY NOTE: This is for demo purposes only; it is a bad idea to store secrets in the Terraform state.
- Finally, we indicate that this resource should be executed after the external IP of the bastion is assigned, since we need it to reach the WordPress instances. Also, we create two of these resources, one per WordPress instance.
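The provisioning steps above can be sketched as follows. The script names, ssh user, and key path are assumptions, not the repository's actual files:

```hcl
resource "null_resource" "wp_provisioner" {
  count = 2 # one provisioner per WordPress instance

  # Re-run whenever an instance is recreated (its private_ip changes).
  triggers = {
    instance_ip = aws_instance.wp[count.index].private_ip
  }

  # Connect through the bastion: the WP instances have no public IP.
  connection {
    type         = "ssh"
    host         = aws_instance.wp[count.index].private_ip
    user         = "ubuntu" # assumed
    private_key  = file(var.private_key_path) # demo only: secrets reach the state
    bastion_host = aws_instance.bastion.public_ip
  }

  # Copy the installer scripts onto the VM.
  provisioner "file" {
    source      = "scripts/"
    destination = "/tmp"
  }

  # Run the installer, passing the DB endpoint and credentials as arguments.
  # install_wordpress.sh is a hypothetical name for the script in scripts/.
  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/install_wordpress.sh",
      "/tmp/install_wordpress.sh ${aws_db_instance.wordpress.address} ${var.db_user} ${var.db_password}",
    ]
  }

  # Wait until the bastion (and its external IP) exists.
  depends_on = [aws_instance.bastion]
}
```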
Tier 1: Public subnets
We need to create two subnets as per the aws_alb (Load Balancer) requirement. Here we will create the load balancer and the bastion host. The bastion host will allow us to reach the private instances without exposing them directly outside the VPC.
The network constraints are simple:
- Load balancer can be accessed from internet on TCP port 80.
- Bastion host can be accessed from internet on TCP port 22.
Creating the subnets is pretty straightforward.
aws_route_table.public-routes: The route table for the public subnets. It has a default route to the internet via the aws_internet_gateway defined previously in the VPC section. Note that we define an aws_route_table_association for both public subnets.
aws_nat_gateway.nat-gw: We define the NAT gateway and the eip external IP here. This gateway will be used by the private subnets that need to reach the internet. To create this we depend on the internet gateway and the DNS resolver.
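These routing pieces can be sketched like this, assuming the public subnets are named public_subnet_1 and public_subnet_2:

```hcl
# Default route to the internet via the internet gateway.
resource "aws_route_table" "public-routes" {
  vpc_id = aws_vpc.app_vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.app_igw.id
  }
}

# One association per public subnet.
resource "aws_route_table_association" "public-1" {
  subnet_id      = aws_subnet.public_subnet_1.id
  route_table_id = aws_route_table.public-routes.id
}

resource "aws_route_table_association" "public-2" {
  subnet_id      = aws_subnet.public_subnet_2.id
  route_table_id = aws_route_table.public-routes.id
}

# External IP for the NAT gateway.
resource "aws_eip" "nat" {
  vpc = true
}

# The NAT gateway lives in a public subnet and needs the IGW first.
resource "aws_nat_gateway" "nat-gw" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public_subnet_1.id
  depends_on    = [aws_internet_gateway.app_igw]
}
```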
aws_security_group.bastion: This security group allows TCP port 22 from everywhere, for ssh connections from our laptop.
aws_security_group.alb allows TCP port 80 connections to the load balancer from everywhere. We want this because this is our entry point to the application.
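A minimal sketch of the two public-tier security groups:

```hcl
# Bastion: ssh from anywhere (demo only; restrict to your own IP in practice).
resource "aws_security_group" "bastion" {
  vpc_id = aws_vpc.app_vpc.id

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Load balancer: HTTP from anywhere; this is the application's entry point.
resource "aws_security_group" "alb" {
  vpc_id = aws_vpc.app_vpc.id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```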
The bastion host configuration should be familiar by now:
The ami attribute comes from a data resource defined in main.tf. We associate aws_security_group.bastion and create an external IP for the instance so that Terraform can ssh into it.
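A sketch of the bastion instance; the instance type and key pair name are assumptions:

```hcl
resource "aws_instance" "bastion" {
  ami                         = data.aws_ami.ubuntu.id # from main.tf (assumed name)
  instance_type               = "t2.micro"
  subnet_id                   = aws_subnet.public_subnet_1.id
  vpc_security_group_ids      = [aws_security_group.bastion.id]
  key_name                    = aws_key_pair.ssh_key.key_name # assumed name
  associate_public_ip_address = true # external IP so Terraform can ssh in
}
```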
Now the load balancer:
aws_alb.alb: This is the application load balancer. We define the subnets for HA and apply the security group that allows ingress on TCP port 80 from the internet.
aws_alb_target_group.targ: This is the target of the load balancer. It specifies the health check configuration and how it will keep the session between the different backend instances. In this case, we are saying that we will use an lb_cookie to identify who is handling the requests.
aws_alb_target_group_attachment.attach_web: This is where we assign the EC2 instances to the load balancer. The count attribute indicates that we will create two resources, and the target_id specifies the id of the specific EC2 instance. We use the count.index value to get the right instance out of the aws_instance.wp.*.id array using the element function.
aws_alb_listener.list: This specifies the action the alb should take. In this case we are saying that every request on port 80 should be forwarded to the target group defined earlier.
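Putting the load balancer pieces together, a sketch could look like this (health check path is an assumption):

```hcl
# Application load balancer spread across both public subnets for HA.
resource "aws_alb" "alb" {
  subnets         = [aws_subnet.public_subnet_1.id, aws_subnet.public_subnet_2.id]
  security_groups = [aws_security_group.alb.id]
}

# Target group: health checks plus cookie-based session stickiness.
resource "aws_alb_target_group" "targ" {
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.app_vpc.id

  stickiness {
    type = "lb_cookie" # keep a client pinned to the same backend
  }

  health_check {
    path = "/" # assumed
  }
}

# Attach each WordPress instance to the target group.
resource "aws_alb_target_group_attachment" "attach_web" {
  count            = 2
  target_group_arn = aws_alb_target_group.targ.arn
  target_id        = element(aws_instance.wp.*.id, count.index)
  port             = 80
}

# Forward every request on port 80 to the target group.
resource "aws_alb_listener" "list" {
  load_balancer_arn = aws_alb.alb.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_alb_target_group.targ.arn
  }
}
```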
Thanks for reading, if you think this could help others, please share and don’t forget to 👏👏👏. 🖤